VMware’s acquisition by Broadcom introduced a number of issues for individuals and enterprises using VMware ESXi and desktop products. One of the most significant is that Broadcom killed the free ESXi edition that many people ran in home lab environments. Licensing fees skyrocketed, and many smaller organizations could not renew expiring support contracts at reasonable prices. Broadcom’s hostile treatment of existing VMware customers prompted a search for alternatives, and there are many.

VMware Workstation alternatives include VirtualBox, Parallels Desktop, and UTM. ESXi alternatives include Microsoft Hyper-V, KVM, oVirt, and Proxmox.

In this article, we will discuss KVM virtualization: getting up and running quickly with open-source software, exploring advanced features, and having some fun.

Lab environment

  • Laptop with a 12th Gen Intel(R) Core(TM) i7-1260P, 16 GB RAM, and a 500 GB SSD
  • Debian GNU/Linux 12 (bookworm)
  • CPU virtualization support (VT-x/AMD-V) enabled in BIOS

Install cpu-checker and run kvm-ok

$ sudo apt install cpu-checker -y
$ sudo kvm-ok

The command above should return:

INFO: /dev/kvm exists
KVM acceleration can be used
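
If kvm-ok is not available, the CPU flags can be checked directly; a quick equivalent check, assuming an Intel (vmx) or AMD (svm) processor:

# A count greater than 0 means hardware virtualization is exposed to the OS
grep -E -c '(vmx|svm)' /proc/cpuinfo

# The kvm module stack should also be loaded (kvm_intel on this lab's CPU)
lsmod | grep kvm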

Install required KVM/libvirt packages

sudo apt-get install python3-pip genisoimage python3.11-venv libvirt-clients libvirt-daemon-system \
              qemu-kvm bridge-utils dnsmasq image-factory libvirt-dev pkg-config -y

Add your user account (for example, bsmith) to the libvirt group

sudo usermod -a -G libvirt bsmith
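
Group membership only takes effect for new login sessions. To confirm it (and pick it up in the current shell without logging out), something like this works:

# Verify the account is now in the libvirt group
id bsmith

# Start a subshell with the new group active immediately
newgrp libvirt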

Modify /etc/libvirt/libvirt.conf

sudo vi /etc/libvirt/libvirt.conf

Uncomment the default URI:

uri_default = "qemu:///system"
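
With uri_default set, virsh connects to the system instance without needing an explicit --connect flag. A quick check:

# Should print qemu:///system
virsh uri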

Enable, start, and check status using systemd

sudo systemctl enable libvirtd
sudo systemctl start libvirtd
systemctl status libvirtd

● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Sun 2024-07-14 21:58:42 EDT; 24h ago
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd.socket
             ● libvirtd-ro.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 910 (libvirtd)
      Tasks: 21 (limit: 32768)
     Memory: 95.7M
        CPU: 9.113s
     CGroup: /system.slice/libvirtd.service
             └─910 /usr/sbin/libvirtd --timeout 120
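
As a quick smoke test that the daemon answers over its socket, list the (currently empty) set of domains:

# An empty table confirms libvirtd is reachable; no VMs exist yet
virsh list --all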

Create a network bridge with the primary system interface

- Modify the interfaces file to include br0

cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo br0
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet manual

iface br0 inet dhcp
    bridge_ports eth0


- Bring the bridge interface up

# ifup br0


- Create a file called kvm-hostbridge.xml

<network>
  <name>hostbridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

# virsh net-define kvm-hostbridge.xml
# virsh net-start hostbridge
# virsh net-autostart hostbridge
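
At this point both layers can be verified: br0 should hold the host's DHCP address with eth0 enslaved beneath it, and hostbridge should show as active and autostarted:

# Brief view of the bridge interface and its address
ip -br addr show br0

# The libvirt network definition should be active and set to autostart
virsh net-list --all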

Deploy the first VM

mkdir -p /var/vms/images
cd /var/vms/images
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
  • Create a directory for the VM
mkdir -p /var/vms/node1
  • Copy the image file to the VM directory
cp /var/vms/images/noble-server-cloudimg-amd64.img /var/vms/node1/
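
The stock cloud image ships with a small root filesystem (the console banner later shows about 2.35 GB). If the VM needs more room, qemu-img can grow the copy before first boot, and cloud-init's growpart expands the root partition automatically; 20G below is just an example size:

# Inspect the image format and virtual size
qemu-img info /var/vms/node1/noble-server-cloudimg-amd64.img

# Grow the virtual disk; the guest resizes its root filesystem on first boot
qemu-img resize /var/vms/node1/noble-server-cloudimg-amd64.img 20G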
  • Define the cloud-init files. We need to create an ISO image with two YAML files: meta-data and user-data. Cloud-init examples: https://cloudinit.readthedocs.io/en/latest/reference/examples.html
    
    root@lab:/var/vms/node1# ls -l
    total 571404
    -rw-r--r-- 1 root root        47 Jul 16 20:45 meta-data
    -rw-r--r-- 1 root root 585105408 Jul 16 20:40 noble-server-cloudimg-amd64.img
    -rw-r--r-- 1 root root       239 Jul 16 20:56 user-data
    
    
    root@lab:/var/vms/node1# cat meta-data
    instance-id: id-local01
    local-hostname: node1
    root@lab:/var/vms/node1#
    
    root@lab:/var/vms/node1# cat user-data
    #cloud-config
    users:
      - default
      - name: afler
        gecos: Alex Fler
        sudo: ALL=(ALL) NOPASSWD:ALL
        groups: users, admin
        lock_passwd: false
        passwd: $y$j9T$37.zXA.XXXXXXXXX
        ssh_authorized_keys:
          - ssh-ed25519 XXXXXXXZZZZZZZZft1SYj+3fpeb7EJmc1T8KIfqZu5heJBxPFbCCKi9 root@lab
    
    root@lab:/var/vms/node1#
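
Two optional sanity checks before building the ISO, assuming mkpasswd (from the whois package) and cloud-init are installed on the host: the $y$ value in the passwd field is a yescrypt hash, which mkpasswd can generate, and the user-data file can be linted against the cloud-config schema:

# Generate a yescrypt ($y$...) hash suitable for the passwd field
mkpasswd --method=yescrypt

# Validate user-data against the cloud-config schema
cloud-init schema --config-file user-data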
    

Create the cloud-init ISO

root@lab:/var/vms/node1# pwd
/var/vms/node1

genisoimage -output node1.iso -volid cidata -joliet -rock user-data meta-data
I: -input-charset not specified, using utf-8 (detected in locale settings)
Total translation table size: 0
Total rockridge attributes bytes: 331
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 0
183 extents written (0 MB)
root@lab:/var/vms/node1#


ls -lhtr

total 559M
-rw-r--r-- 1 root root 558M Jul 16 20:40 noble-server-cloudimg-amd64.img
-rw-r--r-- 1 root root   47 Jul 16 20:45 meta-data
-rw-r--r-- 1 root root  239 Jul 16 20:56 user-data
-rw-r--r-- 1 root root 366K Jul 16 21:01 node1.iso
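
The NoCloud datasource locates its seed by the ISO volume ID, so it is worth confirming the label is exactly cidata; isoinfo (shipped with genisoimage) can read it back:

# The output should include: Volume id: cidata
isoinfo -d -i node1.iso | grep -i 'volume id'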

Deploy node1

virt-install --name node1 --memory 2048 \
--vcpus 4  --disk=/var/vms/node1/noble-server-cloudimg-amd64.img \
--disk=/var/vms/node1/node1.iso --os-variant ubuntu-lts-latest \
--virt-type kvm --graphics none \
--network network=hostbridge,model=virtio \
--noautoconsole \
--import

Starting install...
Creating domain...                                                                                                                                 |    0 B  00:00:00
Domain creation completed.
root@lab:/var/vms/node1#
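
Because --noautoconsole returns as soon as the domain is created, confirm node1 is actually running before attaching:

# node1 should appear in state "running"
virsh list

# More detail: vCPU count, memory, autostart and persistence flags
virsh dominfo node1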

Connect to the console and verify the password and sudo access work as expected

root@lab:/var/vms/node1# virsh console node1
Connected to domain 'node1'
Escape character is ^] (Ctrl + ])

node1 login: afler
Password:
Welcome to Ubuntu 24.04 LTS (GNU/Linux 6.8.0-36-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Wed Jul 17 01:26:33 UTC 2024

  System load:  0.0               Processes:               170
  Usage of /:   61.5% of 2.35GB   Users logged in:         0
  Memory usage: 10%               IPv4 address for enp1s0: 192.168.99.64
  Swap usage:   0%


Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


$ sudo su -
root@node1:~#
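
If you would rather not read the IP address off the console banner, libvirt can discover a bridged guest's address; the arp source works without any guest cooperation (the agent source would require qemu-guest-agent inside the VM):

# Look up node1's address on the bridge via the host ARP table
virsh domifaddr node1 --source arp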

Verify SSH connectivity and key

root@lab:/var/vms/node1# ssh afler@192.168.99.64
Welcome to Ubuntu 24.04 LTS (GNU/Linux 6.8.0-36-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Wed Jul 17 01:29:44 UTC 2024

  System load:  0.0               Processes:               157
  Usage of /:   61.5% of 2.35GB   Users logged in:         0
  Memory usage: 10%               IPv4 address for enp1s0: 192.168.99.64
  Swap usage:   0%


Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


Last login: Wed Jul 17 01:22:57 2024 from 192.168.99.1
$

Putting everything together

  • On the hypervisor server, install Ansible using a Python virtual environment

    root@lab:~# python3 -m venv venv
    root@lab:~# source venv/bin/activate
    (venv) root@lab:~# which pip
    /root/venv/bin/pip
    (venv) root@lab:~# pip install ansible; pip install libvirt-python; pip install lxml
    (venv) root@lab:~/kvm-manager# ansible-galaxy collection install community.libvirt
    
  • Clone the automation repo

    git clone https://github.com/FlerAlex/kvm-manager
    
  • Update the inventory file

    (venv) root@lab:~/kvm-manager# cat inventory
    node1 memory=2048 vcpus=2 disk=/var/vms/node1/node1-noble-server-cloudimg-amd64.img iso=/var/vms/node1/node1.iso network=hostbridge os=ubuntu-lts-latest
    node2 memory=2048 vcpus=2 disk=/var/vms/node2/node2-noble-server-cloudimg-amd64.img iso=/var/vms/node2/node2.iso network=hostbridge os=ubuntu-lts-latest
    node3 memory=2048 vcpus=2 disk=/var/vms/node3/node3-noble-server-cloudimg-amd64.img iso=/var/vms/node3/node3.iso network=hostbridge os=ubuntu-lts-latest
    node4 memory=2048 vcpus=2 disk=/var/vms/node4/node4-noble-server-cloudimg-amd64.img iso=/var/vms/node4/node4.iso network=hostbridge os=ubuntu-lts-latest
    node5 memory=2048 vcpus=2 disk=/var/vms/node5/node5-noble-server-cloudimg-amd64.img iso=/var/vms/node5/node5.iso network=hostbridge os=ubuntu-lts-latest
    
  • Build (a quick post-deploy check is sketched after this list)

    ansible-playbook playbook.yaml -i inventory --tags deploy

  • Cleanup

    ansible-playbook playbook.yaml -i inventory --tags cleanup
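
Once the deploy tag finishes, a quick ad-hoc check with the same community.libvirt collection the playbook uses should confirm that all five domains are up; a minimal sketch, assuming the virtual environment from above is still active:

# List all libvirt domains through the Ansible virt module
ansible localhost -m community.libvirt.virt -a "command=list_vms uri=qemu:///system"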