[TUTORIAL] GPU Passthrough on Proxmox VE - OpenBSD 7.3 (Part. 03x04)

asded

This is the fourth in a series of five articles covering the installation and configuration of VMs (Linux, Windows, macOS and BSD) in PCI Passthrough on Proxmox VE 8. I recommend you read my previous article on Installing OpenBSD 7.3 on Proxmox (BIOS/UEFI) where I go into more detail on different methods of installing OpenBSD, and give a brief overview of its compatibility with Cloud-init.

- Part 0-4 PCI/GPU Passthrough on Proxmox VE Installation and configuration (Part. 00x04)
- Part 1-4 PCI/GPU Passthrough on Proxmox VE: Windows 10,11 (Part. 01x04)
- Part 2-4 PCI/GPU Passthrough on Proxmox VE: Debian 12 (Part. 02x04)
- Part 3-4 PCI/GPU Passthrough on Proxmox VE: OpenBSD 7.3 (Part. 03x04)
- Part 4-4 PCI/GPU Passthrough on Proxmox VE: macOS Monterey (Part. 04x04)

In this article, we'll explore together how to easily create and deploy a VM with a full desktop (under Mate), using an unofficial OpenBSD image adapted to Cloud-init, while benefiting from good graphics acceleration thanks to GPU Passthrough.

- Original article (FR) on my website
- YouTube video

Creating our OpenBSD template


Code:
qm create 112 \
    --name obsd-ci \
    --agent 1,type=isa \
    --memory 4096 \
    --bios seabios \
    --boot order='scsi0' \
    --sockets 1 --cores 4 \
    --net0 virtio,bridge=vmbr0 \
    --sshkey /root/.ssh/id_rsa.pub \
    --ide0 local-lvm:cloudinit \
    --scsihw virtio-scsi-pci \
    --serial0 socket \
    --vga serial0 \
    --usb0 046d:c08b,usb3=1 \
    --usb1 04ca:007d,usb3=1

Explanations:

- sshkey /root/.ssh/id_rsa.pub: We import our SSH key, which will give us access to the VM (we'll see why this access is important).
- serial0 socket & vga serial0: set the default display to the serial console output (the default configuration for cloud-init).
- boot order='scsi0': This defines the boot order on the OpenBSD cloud image, which we will then import into our configuration.
- ide0 local-lvm:cloudinit: Cloud-init disk location.
- usb0 and usb1: correspond to the USB IDs of my keyboard and mouse, which I retrieved from the hypervisor with "lsusb" (see the example below).
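
For reference, this is how I retrieve those IDs on the hypervisor; the vendor:product pairs above are my own keyboard and mouse, and the output shown here is only illustrative, so adapt the IDs to your devices:

Code:
# On the Proxmox host, list the connected USB devices
lsusb
# Illustrative output, the relevant part is the "ID vendor:product" column:
# Bus 001 Device 003: ID 046d:c08b Logitech, Inc. ...
# Bus 001 Device 004: ID 04ca:007d Lite-On Technology Corp. ...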

I import the disk image from bsd-cloud-image:

Code:
wget -P /mnt/pve/PVE1/template/iso https://object-storage.public.mtl1.vexxhost.net/swift/v1/1dbafeefbd4f4c80864414a441e72dd2/bsd-cloud-image.org/images/openbsd/7.3/2023-04-22/ufs/openbsd-7.3-2023-04-22.qcow2

and set the image on the QEMU configuration:

Code:
qm set 112 --scsi0 local-lvm:0,import-from=/mnt/pve/PVE1/template/iso/openbsd-7.3-2023-04-22.qcow2

then resize the image:

Code:
qm resize 112 scsi0 +20G
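
Optionally, a quick sanity check of the resulting VM configuration before moving on (this only reads back what we set above):

Code:
# Display the VM configuration and check the imported disk, cloud-init drive and boot order
qm config 112 | grep -E 'scsi0|ide0|boot'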

Cloud-init configuration


It's important to keep several points in mind before going any further with this image:

- To be consistent with Cloud-Init, sudo is enabled by default for the user 'openbsd' (doas is still available though).
- When generating a new user with Cloud-init, this will only work partially: the user will be created, but the password hash (in SHA-256) will not be correctly interpreted by OpenBSD, which has been using a different hash algorithm for some versions now, namely bcrypt. That's why we'll just use the default user 'openbsd' (see the sketch after this list for a possible workaround).

- On the other hand, if we change the configuration on the cloud-init disk, even after regenerating it, OpenBSD will not automatically pick up the new configuration.
- Also, at the moment, there doesn't seem to be a solution yet for managing audio output from HDMI, so I'm not burdening my configuration with additional packages like PulseAudio, pavucontrol, ....

- Finally, we won't be able to add the graphics card to our template. Without the right drivers, the VM will fail to start.
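
If you really need to set a password from Cloud-init anyway, a possible workaround (untested here, just a sketch) is to generate the bcrypt hash yourself on an OpenBSD machine with encrypt(1) and pass the ready-made hash in the user-data, since cloud-init writes a pre-hashed passwd value as-is. In this article we simply keep the default user.

Code:
# On any OpenBSD box: hash a password with the system's default cipher (bcrypt)
encrypt -p
# Then reference the resulting "$2b$..." hash in the cloud-config, e.g.:
# users:
#   - name: myuser
#     passwd: "$2b$10$..."
#     lock_passwd: false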


Well, the aim now is to create a Cloud-init configuration file, with these elements in mind. A few explanations are in order:

- We're not going to define any new users; we'll keep the default openbsd.
- We move on to defining the timezone and keyboard layout in azerty.
- We add the AMD drivers for the graphics card (it will be added to the configuration later).
- Install a few useful packages, the qemu-agent and the Mate desktop environment.
- Adapt the QEMU agent's behavior for the ISA serial port.
- As this OpenBSD image doesn't have a display server, we'll need to install it. However, due to the way OpenBSD is designed, we can't do this via a simple pkg_add, so we're going to import the xserv73 archive from the official OpenBSD repositories.
- We configure the latest services and configuration files for X11 and Mate.


Code:
cat << EOF >> /mnt/pve/PVE1/snippets/ci-obsd73-xbase-gpu.yaml
#cloud-config


runcmd:
  - ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime
  - echo "keyboard.encoding=en" >> /etc/wsconsctl.conf
  - fw_update amdgpu
  - pkg_add -U qemu-ga avahi
  - pkg_add -U mate-desktop mate-notification-daemon mate-terminal mate-panel mate-session-manager mate-icon-theme mate-control-center gnome-colors-icon-theme mate-calc caja caja-extensions pluma atril eom gnome-system-monitor mate-utils mate-media engrampa dconf-editor ghostscript-10.01.1
  - echo "pkg_scripts=qemu_ga" >> /etc/rc.conf.local
  - echo 'qemu_ga_flags="-t /var/run/qemu-ga -m isa-serial -p /dev/cua01 -f /var/run/qemu-ga/qemu-ga.pid"' >> /etc/rc.conf.local
  - /etc/rc.d/qemu_ga stop && /etc/rc.d/qemu_ga start
  - ftp -o /tmp/xserv73.tgz https://ftp.fr.openbsd.org/pub/OpenBSD/7.3/amd64/xserv73.tgz 
  - tar -C / -xzvphf /tmp/xserv73.tgz
  - rcctl enable xenodm
  - rcctl enable messagebus
  - rcctl enable avahi_daemon
  - rcctl enable multicast
  - usermod -G operator openbsd
  - echo "LANG=en_FR.UTF-8" >> /home/openbsd/.xsession
  - echo "LC_TYPE=en_FR.UTF-8" >> /home/openbsd/.xsession
  - echo "LC_ALL=en_FR.UTF-8" >> /home/openbsd/.xsession
  - echo "exec ck-launch-session mate-session" >> /home/openbsd/.xsession
EOF

To pass the configuration to our VM we enter the command:

Code:
qm set 112 --cicustom "vendor=PVE1:snippets/ci-obsd73-xbase-gpu.yaml"
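
If you want to verify what Proxmox will actually hand over to cloud-init, you can dump the generated user-data before booting:

Code:
# Show the generated cloud-init user-data for VM 112
qm cloudinit dump 112 user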

We start the VM, leaving the initialization process running (you can check progress via Proxmox's noVNC console):

Code:
qm start 112
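
Since the display is redirected to the serial console (vga serial0), you can also follow the boot and the cloud-init output straight from the hypervisor:

Code:
# Attach to the VM's serial console (press Ctrl+O to exit)
qm terminal 112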

Once initialization is complete, we connect via SSH (with user openbsd) to our VM to reset the Cloud-init configuration:

Code:
ssh openbsd@XXX.XXX.XX.XX
# once connected to the VM 
rm -f /var/log/cloud-init.log && rm -Rf /var/lib/cloud/*


Now we shut down the VM and, before converting it to a template, add the graphics card to the configuration:

Code:
qm shutdown 112
qm set 112 --hostpci0 0000:01:00,x-vga=1
qm template 112
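
Note that the PCI address 0000:01:00 is specific to my machine; to confirm yours on the hypervisor (assuming an AMD card here, adapt the filter if needed):

Code:
# On the Proxmox host: locate the GPU (and its HDMI audio function)
lspci -nn | grep -Ei 'vga|audio'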

Deployment from template


Now all you have to do is clone our OpenBSD template for each new VM.


Code:
qm clone 112 113 --full 1 --name obsd-mate

Now that we've "reset" our previous cloud-init configuration, we can add new parameters for the clone:

Code:
qm set 113 --ipconfig0 ip=192.168.2.76/24,gw=192.168.2.1 --sshkey /PATH/TO/ANOTHER/id_rsa.pub
qm start 113

As we saw earlier, adding a user directly from Cloud-init remains problematic, which is why I recommended sticking with the default user 'openbsd'. Simply set a new password over SSH before logging in to the Mate session.
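
For example (assuming the sudo access mentioned earlier and the IP we just assigned to the clone):

Code:
# Set a password for the default user from the hypervisor or any workstation with the SSH key
ssh -t openbsd@192.168.2.76 'sudo passwd openbsd'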

Resources:

- https://pve.proxmox.com/pve-docs/qm.1.html
- https://pve.proxmox.com/wiki/VM_Templates_and_Clones
- https://ftp.openbsd.org/pub/OpenBSD/7.3/amd64/INSTALL.amd64
- https://bsd-cloud-image.org
- https://www.openbsd.org/papers/bcrypt-paper.pdf
- https://cloudinit.readthedocs.io/en/latest/reference/examples.html