Proxmox VE 8.4 released!

Appreciate the new release, but a lot of people are anxiously awaiting the ZFS 2.3 features for PVE. If it takes more than 3-4 months for them to make it into the Proxmox testing repo, the clamor is gonna start to get loud.
 
Congratulations to the entire Proxmox team! Congratulations to all the users of Proxmox. This looks like an awesome release!
We're near completion of an automated provisioning platform specifically for Proxmox, and I cannot wait to try it out on this! Looking forward to upgrading to Ceph 19!
 
virtiofs is specific to QEMU and thus to virtual machines, so you can't use it in containers. It's quite unlikely that this will ever change, since bind mounts are essentially the virtiofs equivalent for LXC containers. I understand your frustration with how cumbersome dealing with mounts inside containers is, but why don't you use a VM instead? Most typical applications can be run from a Docker container, and if you put everything into one VM, it doesn't need to use more resources than LXC containers.

Another benefit: you have one VM where you need to do maintenance (system updates etc.) and you won't run into issues with nested containers (like when you run Docker containers inside an LXC, which isn't recommended by the Proxmox developers). I personally use LXC containers only for stuff that doesn't need any mounts (e.g. Pi-hole) or that needs hardware passthrough (like using the iGPU for transcoding in Jellyfin or Plex).

Yeah, it's tempting. I'm using the LXC with Docker to run a media stack (Plex + *arrs) and it works pretty well, apart from the bind mount for the media dir. I did have it properly mapped, the UID and GID were correct in the LXC, but I had weird permission errors when writing to it. The source of the bind mount is a FUSE mount on the PVE host, so that might be the problem. I moved the FUSE mount into the LXC container and that works, but it's not recommended and causes issues with backups.
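For what it's worth, a quick way to narrow that kind of thing down is to compare the numeric ownership the host and the CT actually see (the VMID and paths below are just examples):

Code:
# on the PVE host: numeric owner reported by the FUSE mount itself
ls -nd /mnt/fuse-media
# inside the CT: the same directory should show up as the container user (e.g. uid 1000)
pct exec 101 -- ls -nd /mnt/media
# many FUSE filesystems also need allow_other (and sometimes uid=/gid= options)
# before any uid other than the one that mounted them is allowed access at all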

I did originally run Docker in a VM, but it consumed a lot more resources than the LXC.

I'll run some tests with a VM again and see if I can fine-tune it.
 
I do have a question. We were supposed to do scheduled maintenance this coming Saturday the 12th, based on 8.3.6 from the subscribed enterprise repo (lab tested). This is a production HA cluster. However, we're now concerned about the Ceph failure listed under known issues.

https://pve.proxmox.com/wiki/Roadmap#8.4-known-issues

Our Ceph version is currently 18.2.2 and our current PVE version is 8.2.4.

Will there be any implications? Is it better to hold off, or is 8.4 not in the enterprise repo yet? Can we leave Ceph alone and not upgrade it?

Advice on this would be greatly appreciated.
 
Awesome news. One feature I'd like to see is an easier way to share a GPU (or part of one) with LXC for things like LLMs, so I don't have to dedicate a whole card to an LXC that may not be running all the time. Maybe this is already possible, but it seems to be a cumbersome process with mapping users/groups and all this rigamarole that I could never quite get working... If there's a more recent guide to set this up, please point me at it.
GPU sharing with LXC is not exclusive; you can already share one GPU with multiple LXC containers.
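For example, with the container device passthrough entries, something like this should let two CTs use the same iGPU render node (the VMIDs and the GID are only examples; check which GID the "render" group has inside your CTs):

Code:
# give two containers access to the same render node
# gid=104 assumes "render" is GID 104 inside the CT (check with: getent group render)
pct set 101 --dev0 /dev/dri/renderD128,gid=104
pct set 102 --dev0 /dev/dri/renderD128,gid=104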
 
Any chance of virtiofs being available for containers? Would love an easier alternative to bind mounts for unprivileged containers. Getting the UID/GID mappings correct is a nightmare.
While virtiofs is specific to QEMU, we do plan on extending the "directory mapping" mechanism to provide a bind mount feature to non-root users as well. Integrating idmapping into that (using the same maps as the container itself) should be doable for most setups; there's been some development on the kernel side recently that should make this a lot easier and more robust.
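Until then, the usual manual approach looks roughly like this (the VMID, paths, and the default unprivileged offset of 100000 are assumptions about a standard setup):

Code:
# add a bind mount to an unprivileged CT
pct set 101 --mp0 /tank/media,mp=/mnt/media
# unprivileged CTs shift IDs by 100000 by default,
# so CT uid/gid 1000 shows up as 101000 on the host
chown -R 101000:101000 /tank/media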
 
I do have a question. We were supposed to do scheduled maintenance this coming Saturday the 12th, based on 8.3.6 from the subscribed enterprise repo (lab tested). This is a production HA cluster. However, we're now concerned about the Ceph failure listed under known issues.

https://pve.proxmox.com/wiki/Roadmap#8.4-known-issues

Our Ceph version is currently 18.2.2 and our current PVE version is 8.2.4.

Will there be any implications? Is it better to hold off, or is 8.4 not in the enterprise repo yet? Can we leave Ceph alone and not upgrade it?

Advice on this would be greatly appreciated.
As the note says, only OSDs created with specific versions of Ceph 19/Squid are affected. There is no automatic Ceph major version upgrade when upgrading within a PVE release, so you won't automatically switch to Ceph 19 in any case. See https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid for how to do that upgrade.
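To double-check before and after the PVE upgrade, something like this should be enough (assuming a standard hyper-converged setup):

Code:
# shows which Ceph release every mon/mgr/osd is running
ceph versions
# upgrading PVE 8.2.4 -> 8.4 on its own keeps this at 18.2.x (Reef);
# Ceph only moves to 19/Squid if you follow the Reef-to-Squid guide yourself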
 
Any chance of virtiofs being available for containers? Would love an easier alternative to bind mounts for unprivileged containers. Getting the UID/GID mappings correct is a nightmare.
Johannes' reply is on point. That said, with the new directory mappings we now have a safe way for admins to configure which host directories certain users can access through the integrated PVE access control system.
This effectively removes one of the biggest roadblocks to better integrating bind mounts for CTs into the API and UI, which would indeed reduce the error potential with UID/GID mapping and so on.
 
BTW, is there a plan to support live migration with no downtime for LXC containers?
No, this was evaluated in the past, but we do not see any realistic way for this to be possible anytime soon.

If you're interested in some background: the main issue is that one needs to snapshot a process, including all relevant kernel state (open connections, files, I/O, ...), and restore it on another kernel that might not even be the same version and might not support exactly the same flags and features of that state. CRIU, the project that tries to lay the groundwork here, is barely able to keep up with supporting very basic processes; processes using networking and all the other features that any standard CT normally uses are completely out of the question. And the kernel is unlikely to ever gain a stable (internal) ABI that one could use for this, especially as it would mostly benefit use cases where much better alternatives exist, like here: just use VMs. They have a sane and stable interface that can support live migration, as QEMU already holds all the VM state in user space, and KVM, the kernel side, supports serializing and loading the little bit of kernel state that VMs have.
 
especially as it would mostly benefit use cases where much better alternatives exist, like here: just use VMs. They have a sane and stable interface that can support live migration, as QEMU already holds all the VM state in user space, and KVM, the kernel side, supports serializing and loading the little bit of kernel state that VMs have.

Another alternative would be to work around the problem. I know a guy who does a lot of managed web hosting for his customers, and he uses unprivileged LXCs for that so he can put more customers on one server. At first I was baffled because of the downtime problem, but for his use case and his customers' it's not an issue: most of them can live with the minimal downtime (just a few seconds) of a container. For those who can't, he runs multiple containers behind a load balancer like nginx or haproxy. I had to admit that, although I personally would still prefer VMs, his setup absolutely makes sense for his goals :)
Maybe @Zubin Singh Parihar can do something similar in his environments? E.g. run multiple LXCs on his PVE cluster nodes plus a load balancer in a small VM (to reduce downtime), so that if one LXC goes down another takes over?
The guy I met uses Incus, not Proxmox, but in the end both are frontends for KVM/QEMU and LXC, so the technical constraints are the same (downtime with LXCs, no downtime with VMs).
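For illustration, a minimal haproxy frontend/backend for two such LXCs could look roughly like this (names, IPs and ports are made up):

Code:
# /etc/haproxy/haproxy.cfg (fragment)
frontend web_in
    bind *:80
    default_backend web_cts

backend web_cts
    balance roundrobin
    option httpchk GET /
    server ct101 10.0.0.101:80 check
    server ct102 10.0.0.102:80 check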
 
Is anyone having trouble with HBA passthrough on 8.4? I can see the drives upon boot (as long as my TrueNAS VM is offline), but as soon as it goes online you can no longer see them from the shell (most likely it's handing them off to the VM), and then the TrueNAS VM throws an MPT 01h error and won't see the drives attached to the HBA at all. This worked fine prior to the update, and the issue persists across the 6.8 and 6.14 kernels.
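For anyone wanting to compare notes, the host-side state can be checked with something like this (the grep patterns assume an LSI/MPT-style HBA):

Code:
# which kernel driver currently claims the HBA (should be vfio-pci while the VM is running)
lspci -nnk | grep -iA3 'lsi\|sas'
# any vfio or mpt errors around VM start
dmesg | grep -iE 'vfio|mpt'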
 
Release Notes:

Known Issues & Breaking Changes

PXE boot on VM with OVMF requires VirtIO RNG

For security reasons, the OVMF firmware now disables PXE boot for guests without a random number generator.

If you want to use PXE boot in OVMF VMs, make sure you add a VirtIO RNG device. This is allowed for root and users with the VM.Config.HWType privilege.

I tried an OVMF VM without a VirtIO RNG device and PXE (DHCP + TFTP) was still working. Is there any other special condition for this breaking change?
Maybe it only applies to PXE via HTTP(S)?
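For reference, if it does turn out to be needed, adding the RNG device is a one-liner (VMID 100 is just an example):

Code:
# add a VirtIO RNG device fed from the host's /dev/urandom
qm set 100 --rng0 source=/dev/urandom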
 
Been running 8.4 for the last 8 hours and tried a few of my exotic VMs too; so far, happy to report all is sailing along. I even performed a complete node reboot - with zero issues. (I still have a few LXCs to check, but haven't had the time yet.)

Kudos again to an excellent product & an awesome team! :)
 
While virtiofs is specific to QEMU, we do plan on extending the "directory mapping" mechanism to provide a bind mount feature to non-root users as well. Integrating idmapping into that (using the same maps as the container itself) should be doable for most setups; there's been some development on the kernel side recently that should make this a lot easier and more robust.
Could you be more specific about what you mean by "there's been some development on the kernel side that should make this a lot easier and more robust recently"?
Especially: do you mean on Proxmox's kernel side or in mainline Linux? I have many problems with that, which is why I'm asking :)
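For context, this is the kind of manual mapping I'm currently fighting with (exposing host UID/GID 1005 to the CT as 1005; the numbers and VMID are just an example):

Code:
# /etc/pve/lxc/101.conf
lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530
# plus matching "root:1005:1" entries in /etc/subuid and /etc/subgid on the host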
 
I noticed when I first installed 8.4.0 that when adding NFS storage and choosing the content type, I got "Backup" instead of "VZDump". I just installed 8.4.0 on another device, and now it's back to "VZDump", not "Backup".


[Attached screenshot: Screenshot 2025-04-10 102858.png]
 
While virtiofs is specific to QEMU, we do plan on extending the "directory mapping" mechanism to provide a bind mount feature to non-root users as well. Integrating idmapping into that (using the same maps as the container itself) should be doable for most setups; there's been some development on the kernel side recently that should make this a lot easier and more robust.

Oh awesome, that would be perfect. Count me in for testing!