Vicktricks

Creating Virtual Machines

Goal description

I want to create a virtual machine with minimal dependencies. In particular:

  • command line creation (no graphical tools)
  • avoid ready-made images and instead perform the installation yourself
  • get a virtual machine that is a PXE boot server

There are two alternatives: Xen and KVM. Xen runs directly on the hardware. KVM requires Linux as a host operating system. There are alternatives that run on FreeBSD but due to familiarity we stick to Linux.

The virtualization we use has a host operating system and guest operating systems. The guest operating systems are the virtual hosts that run on the virtual machines.

Devuan

As a host

Overview

Here is an overview of KVM, Qemu and libvirt. We need both kvm (the kernel drivers for direct hardware access from within virtual machines) and qemu (the hardware emulator on which the virtual machines run). Qemu may run without kvm but the result is a slow virtual machine. libvirt is a virtualization library which wraps QEMU and KVM to provide APIs for use by other programs. There are some reports that it may slow down virtualization performance.

Overall: enable virtualization from the BIOS, install the KVM kernel drivers, and install Qemu.

apt install --no-install-recommends qemu-system-x86

Install some helper tools as well

apt install --no-install-recommends qemu-utils socat
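Before creating any machines it is worth checking that the host actually supports KVM. A minimal sketch, assuming a Linux host (the vmx/svm CPU flags and the /dev/kvm device node are the standard indicators):

```shell
#!/bin/sh
# Sanity check: the CPU must advertise virtualization extensions
# (vmx on Intel, svm on AMD) and, once the kvm kernel modules are
# loaded, the /dev/kvm device node must exist.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
	echo "CPU supports hardware virtualization"
else
	echo "no vmx/svm flag found: enable virtualization in the BIOS"
fi
if [ -c /dev/kvm ]; then
	echo "/dev/kvm present: KVM acceleration is available"
else
	echo "/dev/kvm missing: load the kvm/kvm_intel/kvm_amd modules"
fi
```

If the second check fails even though the CPU flag is present, the kvm modules are usually loaded automatically once qemu-system-x86 is installed.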

A test virtual machine

Assuming all software is installed on the host operating system (in our case Devuan), one can download a ready-made image and run the following:

qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -serial mon:stdio -nographic -drive file=test.qcow2
-enable-kvm
use hardware acceleration
-m 2048
provide 2G of ram for the virtual machine
-smp 4
provide 4 CPU threads
-drive file=test.qcow2
the disk image of the guest operating system
-display none
run qemu without any display (used later together with -daemonize, which conflicts with -nographic)
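The test.qcow2 image above can also be created from scratch with qemu-img (from qemu-utils). The 8G size and the filename are arbitrary examples; qcow2 images grow on demand, so the file starts out small:

```shell
#!/bin/sh
# Create an empty qcow2 disk for a test machine; skip quietly if
# qemu-utils is not installed.
command -v qemu-img >/dev/null || { echo "install qemu-utils first"; exit 0; }

qemu-img create -f qcow2 test.qcow2 8G
qemu-img info test.qcow2	# confirm format and virtual size
```

An empty image of course needs an installer (e.g. attached with -cdrom) to boot; a ready-made image can be used directly.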

The above machine can reach the Internet, but it is not reachable from the outside. Let us address the networking.

Networking

There are multiple network topologies to allow a virtual machine to be accessible from the wider network. See Redhat.

Virtual Ethernet on a single physical device

For the problem at hand the router that connects to the wider Internet acts as a DHCP server and assigns IP addresses at the local level. We will give each virtual machine a virtual Ethernet interface with a fixed MAC address so as to assign a static local IP address to each virtual server.

For that we need to prepare a suitable TUN/TAP configuration on the host operating system that exposes the host kernel's virtual network devices.

To get the network described in 17.4.3 (essentially making every virtual machine behave like a separate machine from the network's perspective) we need a bridge: something that makes one network device (NIC) behave as if it were multiple network devices, one for each virtual machine. In other words, the one physical device is shared among multiple virtual Ethernet devices, each machine having its own MAC.

From the command line
Step 1: create the bridge and bind a physical device to it

Following the Libvirt and Bridging section of the Debian wiki, we take the jamielinux approach and avoid libvirt. We create a bridge using ip, as described in the ArchWiki. The commands below require root access (or equivalent).

ip link add name br0 type bridge
create the bridge
ip link set dev br0 up
activate the bridge
ip address add 192.168.68.103/24 dev br0
ip route append default via 192.168.68.1 dev br0
add the necessary routing information to the bridge (if this information is missing, eth0 will use br0 to connect to the Internet but br0 has no information on how to do it)
ip link set eth0 master br0
add the actual Ethernet interface to the bridge. You can see the result with
ifconfig

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.111  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::7c0b:9bff:fef6:5eea  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 48  bytes 6657 (6.5 KiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 94  bytes 12015 (11.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.111  netmask 255.255.255.0  broadcast 192.168.68.255
        inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid 0x20<link>
        ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
        RX packets 1401  bytes 1274190 (1.2 MiB)
        RX errors 0  dropped 176  overruns 0  frame 0
        TX packets 756  bytes 81293 (79.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
	

At this stage the bridge is running and we have Internet access.

Step 2: create logical devices

The following StackExchange post has a nice overview of manipulating bridges with ip and adding interfaces to a bridge. Following the scripts there, here is roughly how qemu creates and deletes a virtual Ethernet interface for a virtual machine:

ip tuntap add mynet0 mode tap
create a tun/tap interface
ip link set dev mynet0 address B4:E1:AD:2B:D3:77
set the MAC address
ip link set mynet0 up
activate the interface
ip link set mynet0 master br0
add the interface to the bridge
ip link set mynet0 nomaster
remove from the master
ip link set dev mynet0 down
take it down
ip link del mynet0
delete it
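Collected into one place, the lifecycle above looks roughly like this sketch (the interface name mynet0 and the MAC address are examples; br0 must already exist from Step 1, and root is required):

```shell
#!/bin/sh -e
# Sketch of the tap lifecycle: create, attach to the bridge, and later
# tear everything down again. Exits quietly when preconditions fail.
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }
ip link show br0 >/dev/null 2>&1 || { echo "create br0 first (Step 1)"; exit 0; }

ip tuntap add mynet0 mode tap
ip link set dev mynet0 address B4:E1:AD:2B:D3:77
ip link set mynet0 up
ip link set mynet0 master br0	# attach to the bridge from Step 1

# ... the interface would be handed to qemu at this point ...

ip link set mynet0 nomaster
ip link set dev mynet0 down
ip link del mynet0
```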

The above steps are performed automatically by Qemu (by default through the /etc/qemu-ifup helper script; the script= option of -netdev tap can override it). With the bridge in place, pass the following options to qemu-system-x86_64:

-device e1000,netdev=mynet0,mac=B4:E1:AD:2B:D3:00 -netdev tap,id=mynet0

or the shorter version

-nic tap,mac=B4:E1:AD:2B:D3:00

will tell qemu to emulate an Ethernet-like device with the given MAC, which from within the guest OS is seen as an Ethernet card. By negotiating DHCP with the host's router the machine is assigned its own IP address. With the fixed MAC address such an IP can be made static from the router. In the LAN,

nmap -sP 192.168.68.0/24

will list a separate QEMU machine with its own IP address 192.168.68.XXX. If the said guest runs an ssh server one can log on to it, subject to the guest OS's ssh server configuration.

Configuration file

In lieu of Step 1 above we will modify this file:

/etc/network/interfaces.d/vps get a copy
	auto eth0
	iface eth0 inet manual
		up ip link set $IFACE up
		down ip link set $IFACE down
	auto br0
	iface br0 inet dhcp
		pre-up ip link add br0 type bridge
		pre-up ip link set eth0 master br0
		post-down ip link set eth0 nomaster
		post-down ip link delete br0

Step 2 is performed by Qemu itself. This configuration is written in a separate file for simplified maintenance.

How did we get the above file?

The file /etc/network/interfaces.d/vps represents Step 1 above written as a configuration file. Note the difference: on boot eth0 is not yet up and the address we specified for the bridge is not yet known. One can simulate that by deleting br0 and eth0 so that ifconfig does not list them at all. With that, to get the network working we need:

ip link set dev eth0 up
activate eth0
ip link add br0 type bridge
create a bridge
ip link set eth0 master br0
make bridge master of eth0
ip link set dev br0 up
activate bridge
dhclient br0
obtain IP address from the dhcp server, see result
ifconfig

		br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
			inet 192.168.68.111  netmask 255.255.255.0  broadcast 192.168.68.255
			inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid
			0x20<link>
			ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
			RX packets 142  bytes 39808 (38.8 KiB)
			RX errors 0  dropped 6  overruns 0  frame 0
			TX packets 133  bytes 22514 (21.9 KiB)
			TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
		eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
			inet6 fe80::56e1:adff:fe2b:d32f  prefixlen 64  scopeid 0x20<link>
			ether 54:e1:ad:2b:d3:2f  txqueuelen 1000  (Ethernet)
			RX packets 162  bytes 45184 (44.1 KiB)
			RX errors 0  dropped 9  overruns 0  frame 0
			TX packets 161  bytes 26513 (25.8 KiB)
			TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
		
Note that in this case the bridge obtains address information directly from the DHCP server. To close the connection we can reverse these steps.
dhclient -r br0
release the IP address
ip link set dev br0 down
deactivate bridge
ip link set eth0 nomaster
bridge is no longer master of eth0
ip link delete br0
delete the bridge
ip link set dev eth0 down
deactivate eth0

In /etc/network/interfaces.d/vps the auto means the stanza is executed on boot. First eth0 is brought up, then br0. With pre-up we ensure that the appropriate steps are executed before the bridge is brought up and dhclient is executed. The execution of dhclient is the iface br0 inet dhcp part of the config file. The reverse operations are necessary for a graceful shutdown.

Bibliography

A list of links I've gone through while learning the above:

Redhat description a description of possible network solutions
libvirt
Debian-handbook section 12.2.2.2
computingforgeeks
lengthy explanations
local examples
/etc/network/interfaces explanations
read about label ${interface_name}:${description}

Starting and stopping the virtual machine

/usr/bin/qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -drive file=/path/to/work.qcow2 -device e1000,netdev=my0,mac=B4:E1:AD:2B:D3:00 -netdev tap,id=my0 -name test,process=QemuTest -display none -daemonize -monitor unix:qemutest.sock,server,nowait
/usr/bin/qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -drive file=/path/to/work.qcow2 -nic tap,mac=B4:E1:AD:2B:D3:00 -name test,process=QemuTest -display none -daemonize -monitor unix:qemutest.sock,server,nowait
Start the machine with either of the above commands
socat -,echo=0,icanon=0 unix-connect:qemutest.sock
Use socat to connect to the qemu monitor
# echo system_powerdown | socat - unix-connect:qemutest.sock
Use socat to power down the virtual machine non-interactively
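Other monitor commands can be sent the same way; for instance info status and info network (both standard QEMU monitor commands) report the machine's run state and NIC configuration. A sketch, assuming the qemutest.sock socket from above:

```shell
#!/bin/sh
# Query a running machine through its monitor socket without attaching
# an interactive session. Skips quietly when no machine is running.
[ -S qemutest.sock ] || { echo "no monitor socket here"; exit 0; }

echo info status  | socat - unix-connect:qemutest.sock	# running or paused?
echo info network | socat - unix-connect:qemutest.sock	# NIC and MAC in use
```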

Starting and stopping on boot the virtual machine

At the end of the boot process (and beginning of shutdown) the init system calls /etc/init.d/rc.local. That script in particular executes /etc/rc.local at boot and /etc/rc.shutdown at reboot/shutdown.

Starting at boot

The /etc/rc.local executes the scripts in /etc/boot.d should that directory exist. If the directory does not exist, create it and add a script 50_vm_start to it. Place in it a suitable script that starts your virtual machines.

/etc/boot.d/50_vm_start get a copy
#!/bin/sh -e


# You can use /bin/df > /path/to/file.txt to see which directories are
# mounted at this point (you can also place the command at the beginning
# of /etc/rc.local to get all directories that are mounted). I first
# tried to use /run/user, which turned out not to be available, so the
# virtual machine was not started. The directory /run is available and
# mounted as tmpfs so it will do the job.


MONITOR_SOCKET_DIR=/run

# start say sql server
/usr/bin/qemu-system-x86_64 -enable-kvm \
	-m 4096 -cpu host -smp 4 \
	-drive file=/path/to/vm01_sql.qcow2 \
	-nic tap,mac=B4:E1:AD:2B:D3:01 \
	-monitor unix:$MONITOR_SOCKET_DIR/vm01_sql.sock,server,nowait \
	-display none \
	-serial null \
	-daemonize \
	-pidfile $MONITOR_SOCKET_DIR/vm01.pid \
	-name vm01_sql,process=vm01_sql

# these are equivalent (note that the first octet of a unicast MAC
# address must be even, i.e. have its multicast bit unset)
# 	-nic tap,mac=B4:E1:AD:2B:D3:01 \
#	-device e1000,netdev=v1eth,mac=B4:E1:AD:2B:D3:01  -netdev tap,id=v1eth \
# 
# for debugging you can redirect the serial output to a file with
#	-serial file:/path/to/file.txt \
# or to a socket but to use a socket the socket must first be created
#	-serial unix:$MONITOR_SOCKET_DIR/serial_sql.sock \
#
# You can see the output of qemu command by redirecting its output to a file
# with
#	-name vm01_sql,process=vm01_sql >> /home/bustaoglu/qmerr.txt 2>&1


# start a second virtual machine
/usr/bin/qemu-system-x86_64 -enable-kvm \
	-m 8192 -cpu host -smp 4 \
	-drive file=/path/to/vm02_www.qcow2 \
	-nic tap,mac=B4:E1:AD:2B:D3:02 \
	-monitor unix:$MONITOR_SOCKET_DIR/vm02_www.sock,server,nowait \
	-display none \
	-serial null \
	-daemonize \
	-pidfile $MONITOR_SOCKET_DIR/vm02.pid \
	-name vm02_www,process=vm02_www

exit 0

You can check with run-parts --test /etc/boot.d whether your script will be executed. For example, if a script name contains a "." the script is not run. You can place any other scripts that you would like to run at boot in /etc/boot.d.
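The naming rule can be demonstrated without touching /etc/boot.d, using a throw-away directory (run-parts ships with Debian's debianutils):

```shell
#!/bin/sh -e
# Show that run-parts skips scripts with "." in their name.
command -v run-parts >/dev/null || { echo "run-parts not installed"; exit 0; }

d=$(mktemp -d)
printf '#!/bin/sh\ntrue\n' > "$d/50_vm_start"	# will be listed
printf '#!/bin/sh\ntrue\n' > "$d/50_vm_start.sh"	# skipped: "." in name
chmod +x "$d"/*

run-parts --test "$d"	# prints only .../50_vm_start
rm -r "$d"
```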

Overview of the Qemu options in /etc/boot.d/50_vm_start

You should change those to your preference and needs.

/usr/bin/qemu-system-x86_64
the command
-name vm01,process=vm01
the name of the virtual machine
-drive file=/path/to/vm01
the virtual machine drive
-enable-kvm
enable hardware virtualization
-m 4096
4G of memory
-cpu host
simulate the type of CPU available on the host
-smp 4
4 cpu cores
-nic tap,mac=b4:e1:ad:2b:d3:00
enable virtual ethernet card with the specified mac address
-display none
run qemu without a display
-monitor unix:/run/vm01.sock,server,nowait
qemu-monitor on that sock
we will use the sock to send system_powerdown to cleanly stop the virtual machine
-pidfile /run/vm01.pid
keep the pid of the process for later shutdown
create the sock and the pid file in a place with a tmpfs filesystem that is not world accessible, to prevent others from powering down the virtual machines
-daemonize
run qemu in the background (this option conflicts with -nographic, so -nographic is not used)
Stopping at shutdown/reboot

Simply pulling the plug on a running machine may result in various issues, hence it is better to properly power down the guests before turning off the host. Similar to /etc/rc.local, the script /etc/rc.shutdown executes the scripts in /etc/shutdown.d. Create the directory and add /etc/shutdown.d/50_vm_stop in it.

/etc/shutdown.d/50_vm_stop get a copy
#!/bin/sh -e

MONITOR_SOCKET_DIR=/run
COUNTER=0

while  [ -f  $MONITOR_SOCKET_DIR/vm01.pid ]  || [ -f  $MONITOR_SOCKET_DIR/vm02.pid ] ; do
        if [ -S $MONITOR_SOCKET_DIR/vm01_sql.sock ]; then
                echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm01_sql.sock 
        fi
        if [ -S $MONITOR_SOCKET_DIR/vm02_www.sock ]; then
                echo system_powerdown | /usr/bin/socat - unix-connect:$MONITOR_SOCKET_DIR/vm02_www.sock
        fi
	sleep 2
	COUNTER=$(($COUNTER+1))
	if [ "$COUNTER" -gt 20 ]; then
		# this is to keep a note that a virtual machine did not
		# shut down properly
		echo "Error" >>/path/to/file.txt
		exit 1
	fi
done
exit 0

The above script powers down only these two virtual machines; make the necessary changes to power down all of yours in the correct order (e.g., you should probably power down your HTTP server before powering down your database server). There is probably a better way to write that shutdown script but for now it should do!
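One way to generalize (a sketch, not a production script): wrap the powerdown-and-wait logic in a function and call it once per machine, in dependency order. This assumes each machine was started with -pidfile .../NAME.pid and -monitor unix:.../NAME.sock, i.e. slightly more uniform names than the script above uses.

```shell
#!/bin/sh
# powerdown_vm NAME: ask the machine to power down via its monitor
# socket and wait until its pid file disappears, giving up after ~40s.
MONITOR_SOCKET_DIR=/run

powerdown_vm() {
	name=$1
	counter=0
	while [ -f "$MONITOR_SOCKET_DIR/$name.pid" ]; do
		if [ -S "$MONITOR_SOCKET_DIR/$name.sock" ]; then
			echo system_powerdown | \
				socat - "unix-connect:$MONITOR_SOCKET_DIR/$name.sock"
		fi
		sleep 2
		counter=$((counter + 1))
		if [ "$counter" -gt 20 ]; then
			echo "$name did not power down" >&2
			return 1
		fi
	done
	return 0
}

# dependency order: the web server goes first, the database last
powerdown_vm vm02_www
powerdown_vm vm01_sql
```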

As a guest

This section is under construction.

  1. Debian instructions
  2. Debootstrap
  3. Qemu documentation
  4. Alternative debootstrap
  5. Devuan debootstrap

FreeBSD

As a host

This section is not ready! The alternative is described in Virtualization with bhyve in the FreeBSD manual. Qemu does not work with bhyve, but with NetBSD's nvmm qemu should work.


As a guest

Test a ready made image

Get a machine
Get a ready made virtual machine image
Run the machine
in the following, change the BIOS part as needed
qemu-system-x86_64 -m 4096 -smp 4 -serial mon:stdio -nographic -drive file=FreeBSD-14.0-RELEASE-amd64.qcow2 -enable-kvm
Close the machine from within freebsd
poweroff

Prepare a ready made image

Download say FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz
Get a machine
cp FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-ufs.qcow2.xz vm00.qcow2.xz
Make a copy of the file to avoid future downloads
unxz vm00.qcow2.xz
Decompress the image
qemu-img resize vm00.qcow2 +4G
Resize the hard disk of the work copy to a desired size in our case increase by 4G
 /usr/bin/qemu-system-x86_64 \
	-enable-kvm -m 8192  -cpu host -smp 4 \
	-drive file=./vm00.qcow2 \
	-monitor unix:/run/vm00.sock,server,nowait \
	-nic tap,mac=b4:e1:ad:2b:d3:00 \
	-serial mon:stdio  -nographic \
	-pidfile /run/user/1001/vm00.pid \
	-name vm00,process=vm00

run the machine with access to the serial console. For this FreeBSD image some setup scripts will be run for you on the virtual machine. Subsequently the virtual machine will reboot and you will be greeted with a login prompt for the root user. Just type root to access FreeBSD as root. From within the root console of the virtual machine get some things ready, such as remote access

passwd
set a root password
vi /etc/rc.conf
set the hostname to a meaningful name
vi /etc/ssh/sshd_config
modify sshd to allow root login
reboot
after setting up a root password, reboot the machine. Send your public key file to the guest with ssh-copy-id root@192.168.1.XXX or ssh-copy-id root@guest_hostname, where guest_hostname resolves to the IP address of the guest. You should be able to log in to the guest with ssh root@192.168.1.XXX or ssh root@guest_hostname. Log in with your new root password either from the serial console or remotely.
vi /etc/ssh/sshd_config
modify sshd to allow root login but only with public key authentication.
service sshd restart
make sure the sshd daemon uses the new configuration - there are alternative ways to do it of course
perform any tasks you want on your virtual machine that runs FreeBSD. One good idea would be to add a new user with wheel privileges instead of using root login. Install any software that you need
pkg update
It is a good idea to add the various ports for easier installation of software. Be aware of the pkg update issue
poweroff
power down your virtual machine if not in use

Expand disk after installation

After creating the guest virtual machine your drive may at some point be full, in which case you may need to increase the drive space (RAM and the like are changed via qemu-system-x86_64 options). From the host do

echo system_powerdown | socat - unix-connect:/path/to/work.sock
stop the virtual machine
qemu-img resize work.qcow2 +4G
expand the drive with the desired amount
qemu-system-x86_64 -drive file=work.qcow2
start the virtual machine. Include all options that you regularly use
ssh root@work
access the root console in the guest
gpart show
see the names of the disks as viewed by gpart. In my case ada0
gpart recover ada0
if there is corruption fix it (possible after expanding the disk with qemu-img)
gpart resize -i 5 ada0
here -i 5 points to the (last) partition of the disk where the root filesystem is stored
df -h
check where the root partition is mounted under /dev; mine was different from /dev/ada0p5. It was on /dev/gpt/rootfs
growfs /dev/gpt/rootfs
grow the filesystem (this assumes you used a UFS-type filesystem)
reboot
reboot and log on again to verify the changes

Basic database management with PostgreSQL

PostgreSQL on FreeBSD

Here the operating system runs on a virtual machine. Access its terminal as root

Installation

pkg update
update the package catalogue of FreeBSD's package manager
pkg search postgresql
find the latest versions of PostgreSQL available for you
pkg install postgresql18-server postgresql18-client
make sure you have enough space and install the software. postgresql18-client gives you the psql command that lets you manage the databases from within the virtual machine.
vi /etc/rc.conf
add postgresql_enable="YES" to start the database on boot
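With the database enabled in rc.conf, the server still needs an initial database cluster. A sketch using FreeBSD's standard tools (sysrc edits /etc/rc.conf for you; the initdb step creates the cluster under /var/db/postgres):

```shell
#!/bin/sh
# Enable, initialise and start PostgreSQL on FreeBSD; skip quietly on
# other systems.
command -v sysrc >/dev/null || { echo "FreeBSD only"; exit 0; }

sysrc postgresql_enable=YES	# same effect as editing /etc/rc.conf
service postgresql initdb	# create the initial cluster (first time only)
service postgresql start
service postgresql status
```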

Configuration for local access

Edit /var/db/postgres/data18/postgresql.conf (this is for PostgreSQL v18) and set


	listen_addresses = 'localhost,192.168.1.88'
	password_encryption = scram-sha-256

where 192.168.1.88 is the IP address of the machine that hosts PostgreSQL. Make sure the passwords are stored in the specified format before setting passwords for any users.

Next edit /var/db/postgres/data18/pg_hba.conf and set


	host    all             all             127.0.0.1/32            trust
	host    all             all             192.168.68.0/24 scram-sha-256
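These changes are not picked up automatically: listen_addresses requires a restart, while pg_hba.conf edits only need a reload. A sketch for FreeBSD:

```shell
#!/bin/sh
# Apply the configuration; skip quietly on non-FreeBSD systems.
command -v sysrc >/dev/null || { echo "FreeBSD only"; exit 0; }

service postgresql restart
# for pg_hba.conf-only changes a reload suffices:
# service postgresql reload
```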
	

Accessing psql interactive terminal

su postgres
as root to access postgres user
# psql
Now you have terminal access to PostgreSQL; you will be greeted by the psql prompt
# psql -U username -h host dbname
log in as a particular user to manage that user's databases; you will be greeted by the psql prompt
\l
lists all databases
SELECT * FROM pg_user;
lists all users
\du
lists all users and their privileges
CREATE USER username WITH ENCRYPTED PASSWORD 'strongpassword';
create a user with the specified password
DROP USER username;
deletes the user username
CREATE DATABASE dbname WITH OWNER=user;
create a database
DROP DATABASE dbname;
delete database
CREATE TABLE tablename (field TYPE, ..., field TYPE);
when logged in as user username, create a table with the necessary specifications in the database the user owns
\dt
list all tables in the current database
\d+ tablename
list all columns and their types available in tablename

Export databases

log in as the postgres user: su postgres

# pg_dump -U username -h host dbname > filename.sql

the export command. You may need to edit the pg_hba file to allow trust from the unix socket that pg_dump uses by setting

local all all trust

in /var/db/postgres/data18/pg_hba.conf (the Address column is left empty) so as to avoid any authorization errors

Import databases

log in as the postgres user: su postgres

CREATE USER username WITH ENCRYPTED PASSWORD 'strongpassword';
create the user if necessary, note the single quotes
CREATE DATABASE dbname WITH OWNER=user;
create the database with the owner
exit
drop back to the shell
# psql -U username -d dbname -f /path/to/backup.sql
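Put together, a round trip looks roughly like this (all names are examples; createdb is part of the client tools and is equivalent to the CREATE DATABASE statement above):

```shell
#!/bin/sh -e
# Dump one database and load it into a fresh copy; skip quietly when
# the client tools are missing or no server is reachable.
command -v pg_dump >/dev/null || { echo "postgresql client not installed"; exit 0; }
pg_isready -h localhost >/dev/null 2>&1 || { echo "no server running here"; exit 0; }

pg_dump -U username -h localhost dbname > backup.sql
createdb -U postgres -O username dbname_copy
psql -U username -d dbname_copy -f backup.sql
```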