MicroVM support

No plans, because we have LXC containers with much lower resource usage ...

But I guess it is possible to run such MicroVMs inside LXC containers or QEMU VMs.
 
I tried to find a self-hosted equivalent of Fargate / Lambda, for example in Proxmox, and I found this post.
At first sight, Firecracker looks good. @ewaldo, @spirit, thank you!
P.S.: I really love LXC, but there are not many APIs or tools to orchestrate the containers. For example, there is no official Ansible dynamic inventory for Proxmox.
 
I see this is a long-dead thread, but we would like to be able to use microVMs for the additional isolation they provide, paired with near-container-like spin-up time and cloud-init support.

My last comment has enough +1s that I guess others would find this useful as well. I intend to investigate the microVM approach and will post to the forums if I make any progress.
 
Hi,


I’d like to express interest in native microVM support within Proxmox VE.
My use cases are based on the work described here:
https://wiki.netbsd.org/users/imil/microvm/
https://smolbsd.org/

In short, I’m using NetBSD microVMs for lightweight, reproducible workloads with very fast boot times and minimal device emulation.
From an operational standpoint, being able to launch such VMs directly from the PVE web console or API (without manual QEMU command-line configuration) would be ideal.


The absence of live migration is not a blocker in my scenario — microVMs can simply reboot on another node if needed.


Is there any technical or roadmap constraint that prevents this from being integrated?


Also, @steevestroke: did you make any progress or find any practical use for this since your last post?


Thanks.
 
Following a few tests, I noticed that Proxmox automatically injects PCI bridges when starting a VM, which causes QEMU to fail under the microvm machine type (since it doesn’t expose any PCI bus).


For reference, here's the configuration I used:
Code:
args: -M microvm,rtc=on,acpi=off,pic=off,accel=kvm  -kernel /mnt/pve/test-microvm/500/netbsd-SMOL  -append 'console=com root=ld0a -z'  -global virtio-mmio.force-legacy=false  -fsdev local,path=/tmp/smolBSD,security_model=none,id=shar-RjSyDcAx0  -device virtio-9p-device,fsdev=shar-RjSyDcAx0,mount_tag=shar-RjSyDcAx0  -device virtio-blk-device,drive=hd-RjSyDcAx0  -drive if=none,file=/mnt/pve/test-microvm/500/sshd-amd64.img,format=raw,id=hd-RjSyDcAx0  -device virtio-net-device,netdev=net-RjSyDcAx0  -netdev user,id=net-RjSyDcAx0,ipv6=off,hostfwd=::2022-:22  -display none
balloon: 0
boot:
cores: 1
cpu: host
memory: 512
meta: creation-qemu=9.2.0,ctime=1762764221
name: microvm-netbsd
numa: 0
smbios1: uuid=0d1970d6-d7dc-42c0-aa7b-cf217969c0ad
sockets: 1
tablet: 0
vga: none
vmgenid: 5b986f8e-573a-4551-9846-2665f7ea54cc


When starting, QEMU fails with:


Code:
qm start 500
kvm: -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e: Bus 'pci.0' not found
start failed: QEMU exited with code 1

That bridge is generated automatically by Proxmox, but it shouldn't be added when the machine type is microvm.

Would it be possible to wrap the PCI bridge creation in something like if (!$is_microvm), so that these devices aren't added for that machine type?
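
To illustrate the idea, here is a minimal, self-contained sketch of the kind of guard I have in mind. The function and variable names are mine for illustration only, not Proxmox VE's actual internals:

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical guard: the QEMU 'microvm' machine exposes no pci.0 bus,
# so PCI bridges (and anything else wired to bus=pci.0) must not be
# generated for it. Names are illustrative, not real Proxmox code.
sub machine_has_pci_bus {
    my ($machine_type) = @_;
    return ($machine_type // '') !~ /^microvm/;
}

sub add_pci_bridges {
    my ($cmd, $machine_type) = @_;
    return if !machine_has_pci_bus($machine_type);   # the proposed "if (!$is_microvm)"
    push @$cmd, '-device', 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e';
}

my @cmd = ('kvm', '-M', 'microvm');
add_pci_bridges(\@cmd, 'microvm');      # nothing added, so the start would no longer fail
print join(' ', @cmd), "\n";            # prints: kvm -M microvm

In the real code such a check would presumably have to cover every device that gets attached to a PCI bus, not just the bridges, but this is the general shape of what I'm asking about.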
 