The commit resolving the mentioned vioscsi (and viostor) bug was merged on 21 Jan 2026 into the virtio master branch (commit cade4cb, corresponding to tag mm315).
So if the next released version is tagged >= mm315, the patch will be included.
As of me...
If you check the bug tracker, there is a patch available as of today that is awaiting code review: https://bugzilla.proxmox.com/show_bug.cgi?id=7271#c6
Once reviewed and applied, it will be part of one of the next versions of the pve-container package...
Nothing has changed regarding this, afaik. You need a switchport with multiple VLANs on it (a VLAN trunk), then create a vmbr0 with a bridge port that connects to that switchport carrying the VLANs. Then use the SDN, as it's the most...
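As a sketch, the bridge part of that setup might look like this in /etc/network/interfaces (the interface name eno1, the address, and the VLAN range are assumptions; adjust to your hardware and network):

```
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # Make the bridge VLAN-aware so guests (or the SDN layer)
        # can tag their own VLANs on the trunk
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

The SDN VLAN zone then references vmbr0 as its bridge.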
Dear Proxmox-Community, we are asking for your support.
The European Commission has opened a Call for Evidence on the initiative European Open Digital Ecosystems, an initiative that will support EU ambitions to secure technological sovereignty...
I personally would look at journalctl -u pveproxy.service, limited with --since to the time frame where you had that error. If it was 5 minutes ago, for example, that is: journalctl -u pveproxy.service --since "5 minutes ago"
A quick way to look at the file without all the extensive comments:
grep -vE '^\s*(#|$)' /etc/lvm/lvm.conf
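To see what that pattern filters out, here is a small demo on a made-up sample file (the file contents are an assumption that mimics lvm.conf's comment style, not the real file):

```shell
# Create a hypothetical sample with comment-only and blank lines
cat > /tmp/sample.conf <<'EOF'
# top-level comment

devices {
    # nested comment
    multipath_component_detection = 1
}
EOF

# Drop every line that is blank or whose first non-space character is '#'
grep -vE '^\s*(#|$)' /tmp/sample.conf
```

Only the three structural lines (`devices {`, the setting, and `}`) survive; inline settings are untouched because the pattern only matches lines that start with a comment marker.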
multipath_component_detection was set to 0 and needed to be set to 1 in /etc/lvm/lvm.conf - the customer had set that parameter, which caused this error. Thank you all for your help, I learned a lot about iSCSI. I even created my own iSCSI target in a Debian VM to test...
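For reference, the relevant fragment of /etc/lvm/lvm.conf looks roughly like this (a sketch of just this setting, not the whole file):

```
devices {
    # 1 (the default) lets LVM detect and skip individual multipath
    # component devices, so only the assembled multipath device is
    # scanned; 0 makes LVM see each iSCSI path as a duplicate PV.
    multipath_component_detection = 1
}
```

The effective value can be checked with `lvmconfig devices/multipath_component_detection`.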
Thanks to everyone for the help.
I managed to fix the reboot loop on my Dell PowerEdge R730xd after upgrading to Proxmox VE 9.
The issue was caused by the newest PVE kernel (6.17.x) triggering Machine Check Exceptions on this hardware (older...
Quincy is already EOL; upgrade to Reef and then maybe even to Squid. The bug you mentioned is only listed for Ceph 18, at least on Clyso's side. But there are others for Ceph 19. I personally don't have any issues with 19.x so far.
17 and 18 are already EOL, why not upgrade to 19?
There is https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid, and there should be a doc for 18, but I'm not seeing it.
We also did 15 upgrades from Ceph 19 to 20 in nested PVE environments (all clean installs from training); all went fine without issues. The only thing that was not 100% clear was how to check via the CLI which MDS services are standby:
We looked at the...
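For anyone else looking, two standard commands report standby MDS daemons from any cluster node (the exact output shape varies by Ceph release; run them against your own cluster):

```shell
# Compact one-line summary of MDS states, including the standby count,
# e.g. "cephfs:1 {0=a=up:active} 1 up:standby"
ceph mds stat

# Per-filesystem view that lists active ranks and the standby daemons
ceph fs status
```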
You could also boot the debug mode from the installer and try to capture the logs; the debug mode also helps to identify at which step exactly in the installer the reboot is triggered.