Search results

  1.

    ZFS 0.8.0 Released

    .. there is no heavy load on them. Other cluster-aware filesystems migrate stuff smoothly in the background without basically killing your whole IO.
  2.

    ZFS 0.8.0 Released

    You mean this? Already done:
      root@newton:~# zfs get recordsize six
      NAME  PROPERTY    VALUE  SOURCE
      six   recordsize  128K   default
  3.

    ZFS 0.8.0 Released

    This is what it looked like migrating from one pool to another, basically rendering the whole machine barely responsive.. six = zfs mirror without slog, unencrypted; fatbox = zfs mirror with slog, encrypted
  4.

    ZFS 0.8.0 Released

    There are no zvols involved on the other machine, just a ZFS pool that serves some exports. On the other hand it's much weaker in terms of CPU (a 6-year-old HP MicroServer with only an AMD Turion CPU). I know it can't be compared 1:1, but I am pretty sure that what I see is not normal. It can't be. Still...
  5.

    ZFS 0.8.0 Released

    Yeah, sure. I have other setups with an aged Debian + ZFS 0.7.x that outperform the crippled ZFS in Proxmox by far. I'll send stats in a few minutes since there's a live migration still ongoing..
  6.

    ZFS 0.8.0 Released

    Two 10TB drives as a mirror with an Optane as ZIL. But it doesn't matter whether a ZIL is there or not. Really simple homelab setup.
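
    For reference, attaching and detaching a dedicated ZIL/SLOG device on such a mirror is a one-liner each way; a minimal sketch, where the pool name six and the NVMe device path are assumptions, not taken from the post:

      # attach an Optane/NVMe device as a dedicated log (SLOG) device
      zpool add six log /dev/disk/by-id/nvme-INTEL_OPTANE_EXAMPLE
      # verify it shows up under "logs"
      zpool status six
      # detach it again to compare behaviour with and without SLOG
      zpool remove six /dev/disk/by-id/nvme-INTEL_OPTANE_EXAMPLE
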
  7.

    ZFS 0.8.0 Released

    ZFS is still awfully slow for me. Trying to run some low-load VMs with a few containers inside from an encrypted ZFS kills my host for minutes. I have even thrown an Intel Optane at it. There's something seriously flawed with ZFS on Proxmox 6.1-5/9bf061 (Ryzen 2700X/64GB ECC/WD Red + Optane...
  8.

    Is backup faster if VM is turned off?

    Currently you won't get around using the CLI if you want fast backups of RBD images. You might want to take a look at https://github.com/lephisto/cv4pve-barc/tree/master or the project it was forked from. It's a work in progress, but it provides me with some blazing fast RBD image backups. I...
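
    The CLI approach boils down to snapshot-then-export with the plain Ceph tools; a minimal sketch of that idea (the pool name rbd, the image name vm-100-disk-0 and the backup path are made-up examples, and cv4pve-barc wraps this with more logic than shown here):

      SNAP=backup-$(date +%F)
      # create a crash-consistent snapshot of the VM disk image
      rbd snap create rbd/vm-100-disk-0@$SNAP
      # export the snapshot to a file (streams straight from the OSDs)
      rbd export rbd/vm-100-disk-0@$SNAP /mnt/backup/vm-100-disk-0.raw
      # drop the snapshot once the export has finished
      rbd snap rm rbd/vm-100-disk-0@$SNAP
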
  9.

    XCP-ng 8.0 and Proxmox VE 6.1

    Adding nodes in terms of additional cluster hosts is more or less the same procedure: bootstrap a new host, configure the network, enter a cluster join key (or root credentials on XCP). Sync is done with corosync on Proxmox and xapi on Xen. The cloud-init support with qemu/kvm/Proxmox seems more...
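
    On the command line the join step is roughly the following, run on the new node; a minimal sketch assuming the existing Proxmox node is reachable at 10.0.0.1 and the XCP-ng pool master at 10.0.0.2 (both addresses are made up):

      # Proxmox VE: join an existing cluster from the new node
      pvecm add 10.0.0.1
      # XCP-ng / XenServer: join an existing pool from the new host
      xe pool-join master-address=10.0.0.2 master-username=root master-password='secret'
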
  10.

    XCP-ng 8.0 and Proxmox VE 6.1

    I can't speak for VMware and Hyper-V, I have no experience there. I can only speak about the path I come from, Xen.. been using Xen since the first kernel patches back in the early 2000s, been using XenServer, and been using XCP 1.x, XS 6.2->7.2 and XCP-ng in medium-size cluster settings...
  11.

    VZDump slow on ceph images, RBD export fast

    Yeah, I extended it a bit to make it cluster-aware and to issue fsfreeze-freeze before and fsfreeze-thaw after snapshotting: https://github.com/lephisto/cv4pve-barc/
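
    Done by hand, that freeze/snapshot/thaw sequence looks roughly like this; a minimal sketch assuming the QEMU guest agent is running in VM 100 and the disk lives at rbd/vm-100-disk-0 (both IDs are made up, and a real script should thaw even when the snapshot fails):

      # quiesce the guest filesystems via the QEMU guest agent
      qm guest cmd 100 fsfreeze-freeze
      # take the RBD snapshot while the guest is frozen
      rbd snap create rbd/vm-100-disk-0@consistent-snap
      # thaw the guest again as quickly as possible
      qm guest cmd 100 fsfreeze-thaw
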
  12.

    VZDump slow on ceph images, RBD export fast

    @spirit @ozdjh The guest CPU does not only spike, I get soft lockups etc., stuff you don't want to have. Proxmox's internal backup solution is currently scuffed and unusable for production; that's why I have to handcraft an RBD solution.
  13.

    VZDump slow on ceph images, RBD export fast

    My few cents on this.. Just getting like 300MB/s w/ rbd export, and 10-30MB/s w/ vzdump. Snapshot backups even bring my guests to their knees. Total disaster. What am I missing here? (Latest dist-upgrade, 3-node Epyc cluster, all 10G)
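
    A quick way to reproduce such numbers is to time the raw RBD read path against the regular backup path; a minimal sketch, with the pool/image/VM IDs made up and pv assumed to be installed:

      # raw RBD read throughput, bypassing vzdump entirely
      rbd export rbd/vm-100-disk-0 - | pv > /dev/null
      # the same disk through the regular backup path, for comparison
      time vzdump 100 --mode snapshot --compress 0 --dumpdir /tmp
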
  14.

    fsync performance oddities?

    Is there any news on this? From my understanding the "SIMD patch" that Proxmox integrated is there to disable SIMD, any clarification on this? I'm on 6.1 with ZFS 0.8.2-pve2 and still far away from the performance I should see. Huge IO wait.
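
    Whether the ZFS module is actually using SIMD can be checked directly from the kstats it exposes; a minimal sketch using the paths provided by ZFS on Linux 0.8.x (no Proxmox-specific tooling assumed):

      # benchmarked checksum/raidz implementations and which one is selected;
      # if only "scalar"/"generic" is selected, SIMD is effectively not in use
      cat /proc/spl/kstat/zfs/fletcher_4_bench
      cat /proc/spl/kstat/zfs/vdev_raidz_bench
      # the currently selected raidz implementation (active one is in brackets)
      cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
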
  15.

    CEPH: outdated OSDs after minor upgrade

    Maybe this bump hasn't made it into all packages? >> https://code.forksand.com/proxmox/ceph/src/branch/master/patches/0014-bump-version-to-14.2.4.1.patch
  16.

    CEPH: outdated OSDs after minor upgrade

    Nope. Same on several Workstations / Sessions.
  17.

    CEPH: outdated OSDs after minor upgrade

    I have booted several times now, restarted the monitors (no MDS since I don't have CephFS), restarted the OSDs, restarted ceph-osd.target on all nodes. Still the same. //edit: btw I checked with dpkg, all ceph* packages are on 14.2.4.1
  18.

    CEPH: outdated OSDs after minor upgrade

    Hi Mike, thanks for answering. The problem is not the outdated OSDs but a version mismatch between host and OSD: the OSDs are newer and don't match the version Proxmox reports for the host. Rebooting doesn't do anything (the hosts have been rebooted after the Ceph upgrade anyway).
  19.

    CEPH: outdated OSDs after minor upgrade

    Hi, I just ran into an issue after updating PVE/Ceph today: the Ceph packages were upgraded from 14.2.4 to 14.2.4.1. Everything works, the pool is healthy, but the UI is showing "outdated OSDs", because the Ceph nodes still think they're on 14.2.4 while the OSDs are on 14.2.4.1. What am I missing here...
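
    Comparing what the cluster reports for its running daemons against the installed packages is a quick first check for this kind of mismatch; a minimal sketch using stock Ceph/Debian tooling, with no version numbers assumed:

      # versions of the running daemons (mon, mgr, osd) as the cluster sees them
      ceph versions
      # versions of the installed Ceph packages on this node
      dpkg -l 'ceph*' | grep ^ii
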
  20.

    "pveceph purge" error with unable to get monitor info

    That's exactly what I didn't want to read. I ended up doing the same..