Search results

  1.

    XCP-ng 8.0 and Proxmox VE 6.1

    I can't speak for VMware or Hyper-V, as I have no experience there. I can only speak to the path I took with Xen: I've been using Xen since the first kernel patches back in the early 2000s, then XenServer, XCP 1.x, XS 6.2->7.2, and XCP-ng in medium-size cluster settings...
  2.

    VZDump slow on ceph images, RBD export fast

    Yeah, I extended it a bit to make it cluster-aware and to issue fsfreeze-freeze before and fsfreeze-thaw after snapshotting: https://github.com/lephisto/cv4pve-barc/
  3.

    VZDump slow on ceph images, RBD export fast

    @spirit @ozdjh The guest CPU doesn't just spike; I get soft lockups etc., things you don't want to see. Proxmox's internal backup solution is currently broken and unusable for production, which is why I have to handcraft an RBD solution.
  4.

    VZDump slow on ceph images, RBD export fast

    My two cents on this: I'm getting around 300 MB/s with rbd export, but only 10-30 MB/s with vzdump. Snapshot backups even bring my guests to their knees. Total disaster. What am I missing here? (Latest dist-upgrade, 3-node EPYC cluster, all 10G.)
  5.

    fsync performance oddities?

    Is there any news on this? From my understanding, the "SIMD patch" that Proxmox integrated simply disables SIMD; can anyone clarify? I'm on 6.1 with ZFS 0.8.2-pve2 and still far away from the performance I should be seeing. Huge I/O wait overhead.
  6.

    CEPH: outdated OSDs after minor upgrade

    Maybe this bump hasn't made it into all packages? >> https://code.forksand.com/proxmox/ceph/src/branch/master/patches/0014-bump-version-to-14.2.4.1.patch
  7.

    CEPH: outdated OSDs after minor upgrade

    Nope. Same on several workstations/sessions.
  8.

    CEPH: outdated OSDs after minor upgrade

    I have rebooted several times now, restarted the monitors (no MDS since I don't have CephFS), restarted the OSDs, and restarted ceph-osd.target on all nodes. Still the same. //edit: BTW, I checked with dpkg; all ceph* packages are on 14.2.4.1.
  9.

    CEPH: outdated OSDs after minor upgrade

    Hi Mike, thanks for answering. The problem is not outdated OSDs but a version mismatch between host and OSD: the OSDs are newer and don't match the version Proxmox reports for the host. Rebooting doesn't change anything (the hosts were rebooted after the Ceph upgrade anyway).
  10.

    CEPH: outdated OSDs after minor upgrade

    Hi, I just ran into an issue after updating PVE/Ceph today: the Ceph packages were upgraded from 14.2.4 to 14.2.4.1. Everything works and the pool is healthy, but the UI shows "outdated OSDs" because the Ceph nodes still think they're on 14.2.4 while the OSDs are on 14.2.4.1. What am I missing here...
  11.

    "pveceph purge" error with unable to get monitor info

    Exactly what I didn't want to read. I ended up doing the same..
  12.

    "pveceph purge" error with unable to get monitor info

    @rudysp I have exactly the same mess. Did you fix it?
  13.

    Reinstall CEPH on Proxmox 6

    Is there any news on how to get Ceph back to a sane state without reinstalling everything?
  14.

    Why does maxdisk only account boot disk ?

    Hello Proxmox, this behaviour also appears when exporting metrics to InfluxDB: for a VM with two disks, only scsi0 is counted. This is bad, because I want to calculate allocated space vs. available space and raise an alert to avoid the risk of overprovisioning a Ceph pool.
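The fsfreeze-wrapped RBD export workflow described in results 2-4 can be sketched roughly like this. This is a minimal illustration, not the cv4pve-barc implementation: it assumes a hypothetical VM 100 with the QEMU guest agent running and a single disk image vm-100-disk-0 in a pool named rbd; the commands themselves (qm guest cmd, rbd snap create/export) are standard PVE and Ceph CLI.

```shell
#!/bin/sh
# Sketch: quiesce the guest, snapshot its RBD image, export the snapshot.
# VM ID, pool, and image name below are illustrative assumptions.
VMID=100
IMAGE="rbd/vm-100-disk-0"
SNAP="backup-$(date +%Y%m%d)"

qm guest cmd "$VMID" fsfreeze-freeze       # flush and freeze guest filesystems
rbd snap create "${IMAGE}@${SNAP}"         # consistent point-in-time snapshot
qm guest cmd "$VMID" fsfreeze-thaw         # thaw as soon as the snapshot exists

# Export outside the guest's I/O path; this is the path the posts report
# running an order of magnitude faster than vzdump.
rbd export "${IMAGE}@${SNAP}" "/backup/vm-${VMID}-${SNAP}.raw"
rbd snap rm "${IMAGE}@${SNAP}"
```

Keeping the freeze window limited to the snapshot creation (not the export) is the point of the ordering above: the export can take minutes, but the guest is only frozen for the fraction of a second the snapshot takes.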
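For the "outdated OSDs" thread (results 6-10), the mismatch between installed packages and what the running daemons report can be inspected with standard dpkg and Ceph commands. A sketch of the checks implied by the posts; the restart at the end is the usual remedy when daemons still report the pre-upgrade version:

```shell
# What is installed on disk:
dpkg-query -W -f='${Package} ${Version}\n' 'ceph*'

# What the running daemons report (cluster-wide summary and per-OSD):
ceph versions
ceph osd versions

# Daemons only pick up new binaries after a restart, e.g. per node:
systemctl restart ceph-osd.target
```

If dpkg shows 14.2.4.1 everywhere but `ceph versions` still lists 14.2.4 for some daemons, those daemons are simply still running the old binary; if both agree and the UI still flags "outdated OSDs", the discrepancy is in what the host-side tooling reports, as the thread concludes.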
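As a workaround for the maxdisk issue in result 14 (only the boot disk being counted), allocated space per VM can be summed directly from each VM config. This is a hypothetical sketch, not a Proxmox-provided tool: it assumes GNU grep/awk on the PVE host and that every disk size is specified with a G suffix (adjust for M/T as needed).

```shell
#!/bin/sh
# Sum the size= attribute of every disk line (scsiN/virtioN/sataN/ideN)
# in each VM's config, instead of trusting maxdisk's boot-disk-only value.
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
  total=$(qm config "$vmid" \
    | grep -E '^(scsi|virtio|sata|ide)[0-9]+:' \
    | grep -oP 'size=\K[0-9]+(?=G)' \
    | awk '{s+=$1} END {print s+0}')
  echo "vm ${vmid}: ${total} GiB allocated"
done
```

Feeding these totals into InfluxDB alongside the pool's available capacity would give the allocated-vs-available alert the post asks for.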
