Devuan
As a host
Overview
Here is an overview of KVM, Qemu and Libvirt. We need both KVM (the kernel drivers for direct hardware access from within virtual machines) and Qemu (the hardware emulator on which the virtual machines run). Qemu may run without KVM, but the result is a slow virtual machine. Libvirt is a virtualization library which wraps QEMU and KVM to provide APIs for use by other programs. There are some reports that it may slow down the performance of the virtualization.
Overall: enable virtualization in the BIOS, install the KVM kernel drivers, and install Qemu.
apt install --no-install-recommends qemu-system-x86
Install some helper tools as well
apt install --no-install-recommends qemu-utils socat
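Before creating any machines it is worth checking that hardware virtualization is actually available on the host. A minimal sanity check (assuming an Intel or AMD x86 CPU) could be:

# the count should be greater than 0 (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo
# the kvm modules should be loaded (kvm_intel or kvm_amd)
lsmod | grep kvm
# the device node used by qemu for acceleration should exist
ls -l /dev/kvm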
A test virtual machine
Assuming all software is installed on the host operating system (in our case Devuan), one can download a ready-made image and run the following:
qemu-system-x86_64 \
    -enable-kvm \
    -m 2048 \
    -smp 4 \
    -serial mon:stdio \
    -nographic \
    -drive file=test.qcow2
-enable-kvm - use hardware acceleration
-m 2048 - provide 2G of RAM for the virtual machine
-smp 4 - provide 4 CPU threads
-serial mon:stdio - multiplex the guest serial port and the qemu monitor on stdio
-nographic - run without a graphical display, with serial output on the terminal
-drive file=test.qcow2 - where the file/drive of the guest operating system is
-display none - an alternative option that is useful to have no display at all
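If no ready-made image is at hand, one way to produce test.qcow2 is to create an empty disk with qemu-img and install a system into it from an installation ISO (a sketch; the disk size and the ISO path are placeholders):

# create an empty 10G qcow2 disk
qemu-img create -f qcow2 test.qcow2 10G
# boot the installer once from the ISO with the new disk attached
qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
    -cdrom /path/to/installer.iso -boot d \
    -drive file=test.qcow2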
The above machine can reach the Internet, but it is not reachable from the outside. Let us address the networking.
Networking
There are multiple network topologies to allow a virtual machine to be accessible from the wider network. See Redhat.
Virtual Ethernet on a single physical device
For the problem at hand the router that connects to the wider Internet acts as a DHCP server and assigns IP addresses at the local level. We will give each virtual machine a virtual Ethernet interface with a fixed MAC address so as to assign a static local IP address to each virtual server.
For that we need to prepare a suitable TUN/TAP configuration on the host operating system that will allow for the host-kernel virtual network.
To get the network described in 17.4.3 (essentially making every virtual machine behave like a separate machine from the network's perspective) we need a bridge: something that makes one network device (NIC) behave as if it were multiple network devices, one for each virtual machine, i.e., it shares the one physical device among multiple virtual Ethernet devices (each machine having its own MAC).
From the command line
Step 1: create the bridge and bind a physical device to it
In the Debian-wiki Libvirt and Bridging section we look at jamielinux and avoid libvirt. We create a bridge using ip, following the ArchWiki. The commands below require root access (or equivalent).
ip link add name br0 type bridge - create the bridge
ip link set dev br0 up - activate the bridge
ip address add 192.168.68.103/24 dev br0 - give the bridge an IP address
ip route append default via 192.168.68.1 dev br0 - add the necessary routing information to the bridge (if this information is missing eth0 will use br0 to connect to the Internet but br0 has no information on how to do it)
ip link set eth0 master br0 - add the actual Ethernet interface to the bridge. You can see the result with
ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.111  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::7c0b:9bff:fef6:5eea  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 48  bytes 6657 (6.5 KiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 94  bytes 12015 (11.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.111  netmask 255.255.255.0  broadcast 192.168.68.255
        inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 1401  bytes 1274190 (1.2 MiB)
        RX errors 0  dropped 176  overruns 0  frame 0
        TX packets 756  bytes 81293 (79.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
At this stage the bridge is running and we have internet access.
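For convenience, the commands of Step 1 can be kept together as a small root script (a sketch using the interface name and addresses from above; adapt them to your network):

#!/bin/sh -e
# create the bridge and activate it
ip link add name br0 type bridge
ip link set dev br0 up
# give the bridge the address and default route of the LAN
ip address add 192.168.68.103/24 dev br0
ip route append default via 192.168.68.1 dev br0
# enslave the physical interface to the bridge
ip link set eth0 master br0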
Step 2: create logical devices
The following StackExchange post has a nice overview of manipulating bridges with ip and adding interfaces to a bridge. Following those scripts, here is roughly how qemu creates and deletes a virtual Ethernet interface for the virtual machine:
ip tuntap add mynet0 mode tap - create a tun/tap interface
ip link set dev mynet0 address B4:E1:AD:2B:D3:77 - set the MAC address
ip link set mynet0 up - activate the interface
ip link set mynet0 master br0 - add the interface to the bridge
ip link set mynet0 nomaster - remove it from the master
ip link set dev mynet0 down - take it down
ip link del mynet0 - delete it
The above steps are performed automatically by Qemu (perhaps based on a configuration script?). With the bridge in place, passing the following options to qemu-system-x86_64:

-device e1000,netdev=mynet0,mac=B4:E1:AD:2B:D3:00 -netdev tap,id=mynet0

or the shorter version

-nic tap,mac=B4:E1:AD:2B:D3:00

will tell qemu to emulate an Ethernet-like device with the given MAC, which from within the guest OS is seen as an Ethernet card. By negotiating DHCP with the host's router the machine is assigned its own IP address. With the fixed MAC address such an IP can be made static from the router. In the LAN network

nmap -sP 192.168.68.0/24

will list a separate QEMU machine with its own IP address 192.168.68.XXX. If the said guest runs an ssh server one can log on to it, subject to the guest OS's ssh server configuration.
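By default, when qemu is given a tap network device without an explicit script= option, it runs /etc/qemu-ifup with the name of the tap interface as its only argument (and /etc/qemu-ifdown when it exits). A minimal sketch of such a script, assuming the bridge br0 from Step 1, could look like:

#!/bin/sh
# /etc/qemu-ifup: called by qemu as "/etc/qemu-ifup tapN"
# bring the tap interface up and attach it to the bridge
ip link set "$1" up
ip link set "$1" master br0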
Configuration file
In lieu of Step 1 above we will modify the following file.
/etc/network/interfaces.d/vps
get a copy
auto eth0
iface eth0 inet manual
    up ip link set $IFACE up
    down ip link set $IFACE down

auto br0
iface br0 inet dhcp
    pre-up ip link add br0 type bridge
    pre-up ip link set eth0 master br0
    post-down ip link set eth0 nomaster
    post-down ip link delete br0
Step 2 is performed by Qemu itself. This configuration is written in a separate file for simplified maintenance.
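Assuming ifupdown is in use (as on a stock Devuan install), the new configuration can be exercised without a reboot, for example:

# with eth0 and br0 currently down, bring them up from the configuration file
ifup eth0
ifup br0
# verify that eth0 is enslaved to br0 and that br0 got an address
bridge link show
ip address show br0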
How did we get the above file?
The file /etc/network/interfaces.d/vps represents Step 1 above written as a configuration file. Note the difference: on boot eth0 is not yet up and the address we specified for the bridge is not yet known. One can simulate that by deleting br0 and taking eth0 down so that ifconfig does not list them at all. With that, to get the network working we need:
ip link set dev eth0 up - activate eth0
ip link add br0 type bridge - create a bridge
ip link set eth0 master br0 - make the bridge master of eth0
ip link set dev br0 up - activate the bridge
dhclient br0 - obtain an IP address from the DHCP server; see the result
ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.111  netmask 255.255.255.0  broadcast 192.168.68.255
        inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 142  bytes 39808 (38.8 KiB)
        RX errors 0  dropped 6  overruns 0  frame 0
        TX packets 133  bytes 22514 (21.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 162  bytes 45184 (44.1 KiB)
        RX errors 0  dropped 9  overruns 0  frame 0
        TX packets 161  bytes 26513 (25.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Note that in this case the bridge gets its address information directly from the DHCP server. To close the connection we can reverse these steps.
dhclient -r br0 - release the IP address
ip link set dev br0 down - deactivate the bridge
ip link set eth0 nomaster - the bridge is no longer master of eth0
ip link delete br0 - delete the bridge
ip link set dev eth0 down - deactivate eth0
In /etc/network/interfaces.d/vps the auto means the step is executed on boot. First eth0 is brought up, then br0. With pre-up we ensure that the appropriate steps are executed before the bridge is brought up and dhclient is executed. The execution of dhclient is the iface br0 inet dhcp part of the config file. The reverse operations are necessary for a graceful shutdown.
Bibliography
A list of links I've gone through while learning the above
- Redhat description - a description of possible network solutions
- libvirt
- Debian-handbook section 12.2.2.2
- computingforgeeks
- lengthy explanations
- local examples
- /etc/network/interfaces explanations
- read about label ${interface_name}:${description}
Starting and stopping the virtual machine
/usr/bin/qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -drive file=/path/to/work.qcow2 -device e1000,netdev=my0,mac=B4:E1:AD:2B:D3:00 -netdev tap,id=my0 -name test,process=QemuTest -display none -daemonize -monitor unix:qemutest.sock,server,nowait

/usr/bin/qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -drive file=/path/to/work.qcow2 -nic tap,mac=B4:E1:AD:2B:D3:00 -name test,process=QemuTest -display none -daemonize -monitor unix:qemutest.sock,server,nowait

Start the machine with either of the above commands.
socat -,echo=0,icanon=0 unix-connect:qemutest.sock - Use socat to connect to the qemu monitor interactively
echo system_powerdown | socat - unix-connect:qemutest.sock - Use socat to power down the virtual machine non-interactively
- managing running qemu instances via socat
- managing qemu via socat (alternative link)
- qemu monitor
- via sockets
- Press Ctrl+a and then h to see the help for the qemu monitor
- Press Ctrl+a and then c to enter the qemu monitor interface
- Qemu options at the Gentoo Wiki
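The socat calls can be wrapped in a tiny helper for sending arbitrary monitor commands non-interactively; a sketch (the script name qemu-mon and the socket path are just examples):

#!/bin/sh
# usage: qemu-mon "info status"   or   qemu-mon system_powerdown
# sends a single command to the qemu monitor socket and prints the reply
MONITOR_SOCKET=qemutest.sock
echo "$*" | socat - unix-connect:"$MONITOR_SOCKET"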
Starting and stopping the virtual machine on boot
At the end of the boot process (and beginning of shutdown) the init
system calls /etc/init.d/rc.local. That script in
particular executes /etc/rc.local at boot and
/etc/rc.shutdown at reboot/shutdown.
Starting at boot
The /etc/rc.local executes the scripts in /etc/boot.d should that directory exist. If the directory does not exist, create it and add a script 50_vm_start in it that starts your virtual machines.
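If your /etc/rc.local does not already run the scripts in /etc/boot.d, a sketch of the hook (using run-parts, as assumed throughout this section) is:

#!/bin/sh -e
# /etc/rc.local: run everything in /etc/boot.d at the end of the boot process
if [ -d /etc/boot.d ]; then
    run-parts /etc/boot.d
fi
exit 0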
/etc/boot.d/50_vm_start
get a copy
#!/bin/sh -e
# you can use /bin/df > /path/to/file.txt to see which directories are
# mounted at this point (you can also place the command at the beginning of
# /etc/rc.local). I first tried to use /run/user, which turned out not to be
# available yet, and the virtual machine was not started. The directory /run
# is available and mounted as tmpfs so it will do the work.
MONITOR_SOCKET_DIR=/run
# start, say, the sql server
/usr/bin/qemu-system-x86_64 -enable-kvm \
    -m 4096 -cpu host -smp 4 \
-drive file=/path/to/vm01_sql.qcow2 \
-nic tap,mac=E1:B4:AD:2B:D3:01 \
-monitor unix:$MONITOR_SOCKET_DIR/vm01_sql.sock,server,nowait \
-display none \
-serial null \
-daemonize \
-pidfile $MONITOR_SOCKET_DIR/vm01.pid \
-name vm01_sql,process=vm01_sql
# these are equivalent
# -nic tap,mac=E1:B4:AD:2B:D3:01 \
# -device e1000,netdev=v1eth,mac=E1:B4:AD:2B:D3:01 -netdev tap,id=v1eth \
#
# for debugging you can redirect the serial output to a file with
# -serial file:/path/to/file.txt \
# or to a socket but to use a socket the socket must first be created
# -serial unix:$MONITOR_SOCKET_DIR/serial_sql.sock \
#
# You can see the output of the qemu command by redirecting its output to a file
# with
# -name vm01_sql,process=vm01_sql >> /home/bustaoglu/qmerr.txt 2>&1
# start a second virtual machine
/usr/bin/qemu-system-x86_64 -enable-kvm \
    -m 8192 -cpu host -smp 4 \
-drive file=/path/to/vm02_www.qcow2 \
-nic tap,mac=E1:B4:AD:2B:D3:02 \
-monitor unix:$MONITOR_SOCKET_DIR/vm02_www.sock,server,nowait \
-display none \
-serial null \
-daemonize \
-pidfile $MONITOR_SOCKET_DIR/vm02.pid \
-name vm02_www,process=vm02_www
exit 0
You can check with run-parts --test /etc/boot.d whether your script will be executed. For example, if a script contains "." in its name, the script is not run. You can place any other scripts that you would like to run at boot in /etc/boot.d.
Overview of the Qemu options in /etc/boot.d/50_vm_start
You should change these to your preferences and needs.
/usr/bin/qemu-system-x86_64 - the command
-name vm01,process=vm01 - the name of the virtual machine
-drive file=/path/to/vm01 - the virtual machine drive
-enable-kvm - enable hardware virtualization
-m 4096 - 4G of memory
-cpu host - simulate the type of CPU available on the host
-smp 4 - 4 CPU cores
-nic tap,mac=b4:e1:ad:2b:d3:00 - enable a virtual Ethernet card with the specified MAC address
-display none - run qemu without a display
-monitor unix:/run/vm01.sock,server,nowait - qemu monitor on that socket; we will use the socket to send system_powerdown to cleanly stop the virtual machine
-pidfile /run/vm01.pid - keep the pid of the process for later shutdown
Put the socket and the pid file in a place with a tmpfs filesystem that is not world accessible, to prevent others from powering down the virtual machines.
-daemonize - run qemu in the background (this option conflicts with -nographic, so -nographic is not used)
Stopping at shutdown/reboot
Simply pulling the plug on a running machine may result in various issues, hence it is better to properly power down the guests before turning off the host. Similar to /etc/rc.local, the script /etc/rc.shutdown executes the scripts in /etc/shutdown.d. Create the directory and add /etc/shutdown.d/50_vm_stop to it.
/etc/shutdown.d/50_vm_stop
get a copy
#!/bin/sh -e
MONITOR_SOCKET_DIR=/run
COUNTER=0
while [ -f $MONITOR_SOCKET_DIR/vm01.pid ] || [ -f $MONITOR_SOCKET_DIR/vm02.pid ] ; do
if [ -S $MONITOR_SOCKET_DIR/vm01_sql.sock ]; then
echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm01_sql.sock
fi
if [ -S $MONITOR_SOCKET_DIR/vm02_www.sock ]; then
echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm02_www.sock
fi
sleep 2
COUNTER=$(($COUNTER+1))
if [ "$COUNTER" -gt 20 ]; then
# this is to keep a note that a virtual machine did not
# shutdown properly
echo "Error" >>/path/to/file.txt
exit 1
fi
done
exit 0
The above script powers down the two virtual machines started above; make the necessary changes to power down all your virtual machines in the correct order (e.g., you should probably power down your http server before powering down your database server) - a sketch of an ordered variant is given below. There is probably a better way to write that shutdown script but for now it should do!
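A sketch of such an ordered shutdown, reusing the pid files and monitor sockets from the scripts above, powers down the web server guest first and only then the database guest (a timeout such as the counter above should still be added so the host shutdown is never blocked indefinitely):

#!/bin/sh -e
MONITOR_SOCKET_DIR=/run

# ask the web server guest to power down and wait until qemu exits
if [ -S $MONITOR_SOCKET_DIR/vm02_www.sock ]; then
    echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm02_www.sock
    while [ -f $MONITOR_SOCKET_DIR/vm02.pid ]; do sleep 2; done
fi

# only then ask the database guest
if [ -S $MONITOR_SOCKET_DIR/vm01_sql.sock ]; then
    echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm01_sql.sock
    while [ -f $MONITOR_SOCKET_DIR/vm01.pid ]; do sleep 2; done
fi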
As a guest
This section is under construction.