Hello,
I just went and ran a dist-upgrade on my Proxmox 6.4 install, which included the 2.0.5 ZFS packages. The update hung while waiting for the zfs-volume-wait.service (zvol_wait) to run. I have encrypted ZFS volumes that I don't load the key for, and it looks like zvol_wait was waiting...
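A quick way to check whether this is the cause: zvol_wait blocks until a device node appears for every zvol, which never happens for zvols under a locked encryption key. The following is a hedged sketch (no pool or dataset names assumed) that lists datasets whose key is not loaded:

```shell
# List datasets whose encryption key is unavailable -- zvols under these
# are what zfs-volume-wait.service (zvol_wait) would hang on.
if command -v zfs >/dev/null 2>&1; then
    zfs get -H -o name,value keystatus | awk '$2 == "unavailable"'
else
    # Guard so the sketch is harmless on machines without ZFS tools.
    echo "zfs tools not present on this machine"
fi
```

If such datasets show up, loading their keys with `zfs load-key` before the upgrade (or masking the wait service) should let the update proceed.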
Hello Forum,
since the root disk of the virtualization host of one cluster node crashed, we have been experiencing issues with the ceph-mon service on that node:
We are running the following versions:
Do you have any idea how to bring back the "magic"? ;-)
Thank you and best regards,
Nico
Hi All
Not sure if anyone has experienced this issue before.
We have an older host for pre production VM staging before moving them to newer hosts.
Old host: PVE 6.1.7
New host: PVE 6.4
The Process.
Take a final backup of the VM using VZdump.
Restore the VM on the new cluster.
Original VM Hardware
VirtIO-...
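The two-step process above can be sketched on the CLI as follows. This is a hedged illustration: VMID 100, the storage names, and the archive path are placeholders, not values from the thread.

```shell
# Guarded so the sketch runs cleanly even off a PVE host.
if command -v vzdump >/dev/null 2>&1; then
    # On the old PVE 6.1.7 host: stop-mode backup for a consistent image.
    vzdump 100 --mode stop --storage backup-store
    # On the new PVE 6.4 host, after copying the archive over
    # (archive path and target storage are illustrative):
    qmrestore /mnt/backup/vzdump-qemu-100.vma 100 --storage local-lvm
else
    echo "vzdump/qmrestore not found: run these on the PVE hosts themselves"
fi
```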
I have a single PVE server (6.4, community repo) and I noticed it has a constant iodelay > 0 even when the host is almost idle. The host is under very low load, so I did not expect to see that.
There is a single M.2 NVMe disk on board (Samsung 970 EVO Plus), installed mostly as a test disk, and a couple of Intel 545...
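To narrow down which device is responsible for the iowait, extended per-device statistics are usually the first step. A minimal sketch, assuming the `sysstat` package is available (with a raw-counter fallback if it is not):

```shell
# Per-device utilisation and wait times, 3 one-second samples;
# look for a device with non-trivial %util or w_await while "idle".
if command -v iostat >/dev/null 2>&1; then
    iostat -x 1 3
else
    # Fallback: raw per-device I/O counters from the kernel.
    cat /proc/diskstats
fi
```

A device that stays busy on an otherwise idle host often points at background writers (pvestatd, ZFS txg syncs, or logging) rather than guest I/O.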
Hi, we just got our brand new baby from Dell, an R6525 with a dual EPYC 7543 (Milan) CPU setup.
Since it will become part of our PVE cluster, I tried to install Proxmox 6.3 on it without success, and the same with Proxmox 6.4.
The system gets stuck after the boot message, at least nothing...
Hello everyone,
we noticed that on our 3-node Ceph PVE cluster (PVE 6.4), the nodes partly use different server addresses: Server View -> Datacenter -> Summary -> Nodes.
Our nodes naturally have several IPs (GUI, Corosync, Ceph Public, Ceph Storage...