I can't speak for VMware and Hyper-V, I have no experience there.
I can only describe the path I took coming from Xen: I've been using Xen since the first kernel patches back in the early 2000s, then XenServer, XCP 1.x, XS 6.2 through 7.2, and now XCP-ng, all in medium-size cluster settings...
Yeah, I extended it a bit to make it cluster-aware and to issue fsfreeze-freeze before and fsfreeze-thaw after snapshotting:
https://github.com/lephisto/cv4pve-barc/
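The core sequence per disk boils down to something like this (a sketch only; VMID 101 and the pool/image names are placeholders, and it assumes the QEMU guest agent is running inside the VM):

```
# quiesce the guest's filesystems via the QEMU guest agent
qm guest cmd 101 fsfreeze-freeze
# take the RBD-level snapshot while I/O is frozen
rbd snap create ceph/vm-101-disk-0@backup
# thaw right away so the guest only stalls for a moment
qm guest cmd 101 fsfreeze-thaw
# export the consistent snapshot, then drop it
rbd export ceph/vm-101-disk-0@backup /backup/vm-101-disk-0.img
rbd snap rm ceph/vm-101-disk-0@backup
```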
@spirit @ozdjh the guest CPU doesn't just spike, I get soft lockups etc., stuff you don't want to have.
Proxmox's internal backup solution is currently broken and unusable for production; that's why I have to hand-craft an RBD-based solution.
My few cents on this: I'm getting around 300 MB/s with rbd export, but only 10-30 MB/s with vzdump. Snapshot-mode backups even bring my guests to their knees. Total disaster. What am I missing here? (Latest dist-upgrade, 3-node EPYC cluster, all 10G.)
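For context, the numbers come from roughly this comparison (a sketch; the pool/image/storage names are placeholders, and pv just measures pipe throughput):

```
# raw RBD path: ~300 MB/s sustained
rbd export ceph/vm-101-disk-0 - | pv > /dev/null
# vzdump path over the same network: 10-30 MB/s
vzdump 101 --mode snapshot --storage backup-store
```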
Is there any news on this?
From my understanding, the "SIMD patch" that Proxmox integrated is there to disable SIMD; can anyone clarify?
I'm on 6.1 with ZFS 0.8.2-pve2 and still far from the performance I should be seeing. Huge amounts of time wasted in I/O wait.
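In case it helps anyone else check: ZFS 0.8.x exposes which checksum and RAID-Z implementation is actually selected (paths as in the stock modules; the bracketed entry is the active one):

```
# active fletcher4 implementation, e.g. "[fastest] scalar ... avx2"
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
# per-implementation benchmark ZFS ran at module load
cat /proc/spl/kstat/zfs/fletcher_4_bench
# active RAID-Z implementation
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
```

If only the scalar variants show up there, SIMD is indeed disabled.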
Maybe this bump hasn't made it into all packages?
>> https://code.forksand.com/proxmox/ceph/src/branch/master/patches/0014-bump-version-to-14.2.4.1.patch
I've rebooted several times now, restarted the monitors (no MDS since I don't have CephFS), restarted the OSDs, and restarted ceph-osd.target on all nodes. Still the same.
//edit: btw, I checked with dpkg; all ceph* packages are on 14.2.4.1
Hi Mike,
thanks for answering.
The problem is not outdated OSDs but a version mismatch between host and OSD: the OSDs are newer and don't match the version Proxmox records for the host. Rebooting doesn't change anything (the hosts were rebooted after the Ceph upgrade anyway).
Hi,
I just ran into an issue after updating PVE / Ceph today:
The Ceph packages were upgraded from 14.2.4 to 14.2.4.1. Everything works and the pool is healthy, but the UI is showing "outdated OSDs", because the Ceph nodes still think they're on 14.2.4 while the OSDs are on 14.2.4.1.
What am I missing here?
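For reference, both sides of the mismatch can be compared from the CLI like this (stock Ceph/Debian commands; osd.0 is just an example daemon):

```
# versions the running daemons report
ceph versions
# ask a single OSD directly
ceph tell osd.0 version
# what the node actually has installed
dpkg -l | grep '^ii.*ceph'
```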
Hello Proxmox,
This behaviour also shows up when exporting metrics to InfluxDB: a VM with two disks only gets scsi0 counted. This is bad, because I want to calculate allocated vs. available space to raise an alert and avoid the risk of overprovisioning a Ceph pool.
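As a stopgap I pull those numbers straight from Ceph instead of the exporter (a sketch; the pool name is a placeholder, and rbd du can be slow on images without the fast-diff feature):

```
# provisioned vs. actually used size per RBD image, plus a total
rbd du --pool ceph
# raw and per-pool available capacity to compare against
ceph df
```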