Appreciate the new release, but a lot of people are anxiously awaiting the ZFS 2.3 features for PVE. If it takes more than 3-4 months for them to make it into the Proxmox test repository, the clamor is gonna start to get loud.
virtiofs is specific to QEMU, and thus to virtual machines, so you can't use it in containers. It's quite unlikely that this will ever change, since bind mounts are essentially virtiofs for LXC containers. I understand your frustration with how cumbersome dealing with mounts inside containers is, but why don't you use a VM instead? Most typical applications can be run from a Docker container, and if you put everything in one VM this doesn't need to use more resources than LXC containers. Another benefit: you have one VM to maintain (system updates etc.) and you won't run into issues with nested containers (like when you use Docker containers inside an LXC, which isn't recommended by the Proxmox developers). I personally use LXC containers only for things which don't need any mounts (e.g. Pi-hole) or which need hardware passthrough (like using the iGPU for transcoding in Jellyfin or Plex).
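That said, if someone does want to stay with containers, a plain bind mount added through pct is the most direct route. A minimal sketch (the VMID 101 and both paths are just placeholders for this example):

```
# Bind-mount a host directory into container 101 as mount point mp0.
# Passing an absolute host path as the volume makes pct create a bind mount
# instead of allocating a new storage-backed volume.
pct set 101 -mp0 /srv/media,mp=/mnt/media

# The resulting line in /etc/pve/lxc/101.conf should look like:
# mp0: /srv/media,mp=/mnt/media
```

For unprivileged containers the usual uid/gid shifting still applies to files on that path, which is exactly the pain point discussed further down in the thread.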
GPU sharing with LXC is not exclusive; you can already share a single GPU with multiple LXC containers.

Awesome news. One feature I'd like to see is an easier way to share a GPU (or parts of it) with LXC for things like LLMs, so I don't have to dedicate a whole card to an LXC that may not be running all the time. Maybe this is already possible, but it seems to be a cumbersome process with user/group mapping and all this rigamarole that I could never quite get working... If there's a more recent guide to set this up, please point me at it.
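For what it's worth, recent PVE releases can hand a GPU device node to a container through the dev[n] config option, which avoids most of the manual cgroup and mount-entry editing from older guides. A rough sketch, assuming an Intel iGPU render node and that the render group inside container 105 has GID 104 (the VMID, device path and GID are all assumptions for this example):

```
# Check which GID the container's "render" group actually uses.
pct exec 105 -- getent group render

# Pass the host render node into the container and map it to that group.
pct set 105 --dev0 /dev/dri/renderD128,gid=104

# Resulting config line in /etc/pve/lxc/105.conf:
# dev0: /dev/dri/renderD128,gid=104
```

Several containers can reference the same device node this way, which is what makes the non-exclusive sharing mentioned above work.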
While virtiofs is specific to QEMU, we do plan on extending the "directory mapping" mechanism to provide a bind mount feature to non-root users as well. Integrating idmapping into that (using the same maps as the container itself) should be doable for most setups; there's been some development on the kernel side recently that should make this a lot easier and more robust.

Any chance of virtiofs being available for containers? Would love an easier alternative for unprivileged containers than bind mounts. Getting the uid/gid mappings correct is a nightmare.
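Until that extended directory mapping lands, the usual workaround for the uid/gid pain is a custom idmap in the container config plus matching subuid/subgid entries on the host. A minimal sketch that passes container uid/gid 1000 straight through to host uid/gid 1000 (VMID 101 and the id 1000 are just examples):

```
# /etc/pve/lxc/101.conf -- map ids 0-999 and 1001+ into the usual 100000+ range,
# but pass uid/gid 1000 through unchanged so it matches the host owner of the bind mount.
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# /etc/subuid and /etc/subgid additionally need to allow root to map host id 1000:
# root:1000:1
```

With that in place, files owned by uid 1000 on the bind-mounted host path also show up as uid 1000 inside the unprivileged container.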
As the note says, only OSDs created with specific versions of Ceph 19/Squid are affected. There is no automatic Ceph major version upgrade when upgrading within a PVE release, so you won't automatically switch to Ceph 19 in any case. See https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid for how to do that upgrade.

I do have a question. We were supposed to do scheduled maintenance this coming Saturday, the 12th, based on 8.3.6 from the subscribed Enterprise repo (lab tested). This is a production HA cluster. However, we're now concerned about the Ceph failure described under known issues:
https://pve.proxmox.com/wiki/Roadmap#8.4-known-issues
Our Ceph version is currently 18.2.2 and our current PVE version is 8.2.4.
Will there be any implications? Is it better to hold off or is 8.4 not on Enterprise yet? Can we leave Ceph alone and not upgrade it?
Advice on this would be greatly appreciated.
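For planning that maintenance window, a quick way to confirm what each node is actually running before and after the PVE upgrade (standard commands on any PVE node with Ceph installed; nothing here changes the cluster):

```
# Proxmox VE package versions on this node.
pveversion -v

# Ceph daemon versions across the cluster -- these should still report 18.2.x (Reef)
# after the PVE upgrade, since Ceph major versions are only ever upgraded explicitly.
ceph versions

# Overall cluster health before and after the maintenance window.
ceph -s
```

Since the known issue only concerns OSDs created under specific Ceph 19/Squid builds, a cluster that stays on Reef 18.2.x should not hit it.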
Johannes' reply is on point. That said, with the new directory mappings we now have a safe way to let admins configure, through the integrated PVE access control system, which host directories certain users can access.

Any chance of virtiofs being available for containers? Would love an easier alternative for unprivileged containers than bind mounts. Getting the uid/gid mappings correct is a nightmare.
No, this was evaluated in the past, but we do not see any realistic way for this to be possible anytime soon.

BTW, is there a plan to do live migration with no downtime for LXC containers?
Especially as it would mostly benefit use cases where much better alternatives exist. Like here: just use VMs; they have a sane and stable interface that supports live migration, since QEMU already holds all the VM state in user space and KVM, on the kernel side, supports serializing and loading the small amount of kernel state that VMs have.
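For comparison, live-migrating a running VM between cluster nodes is a one-liner (VMID 100 and target node pve2 are placeholders for this example):

```
# Live-migrate running VM 100 to node pve2 without shutting it down.
# With shared storage, only RAM and device state are transferred over the network.
qm migrate 100 pve2 --online
```

Nothing comparable exists for LXC containers, which is why container migration always involves a restart.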
Known Issues & Breaking Changes
PXE boot on VM with OVMF requires VirtIO RNG
For security reasons, the OVMF firmware now disables PXE boot for guests without a random number generator.
If you want to use PXE boot in OVMF VMs, make sure you add a VirtIO RNG device. Adding one is allowed for root and for users with the VM.Config.HWType privilege.
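A quick sketch of adding such a device from the CLI (VMID 100 is a placeholder; the same device can also be added via the VM's Hardware panel in the web UI):

```
# Attach a VirtIO RNG device fed from the host's /dev/urandom,
# which satisfies OVMF's requirement and re-enables PXE boot for this guest.
qm set 100 --rng0 source=/dev/urandom
```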
Could you be more specific about what you mean by "there's been some development on the kernel side that should make this a lot easier and more robust recently"?

While virtiofs is specific to QEMU, we do plan on extending the "directory mapping" mechanism to provide a bind mount feature to non-root users as well. Integrating idmapping into that (using the same maps as the container itself) should be doable for most setups; there's been some development on the kernel side recently that should make this a lot easier and more robust.