Search results

  1. Proxmox cores with high memory

    We run single VMs with 140+ cores and 3 TB of RAM without any issues. What is the guest OS?
  2. Io_uring + ext4 + data=journal causes filesystem errors

    Packages from the non-subscription repos seem to have resolved this issue. Appreciate the input.
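A minimal sketch of switching a PVE 7.x node to the no-subscription repository, which is where the fixed packages mentioned above would come from. The suite name assumes Debian Bullseye (Proxmox VE 7); adjust for your release.

```shell
# Add the pve-no-subscription repo (Bullseye / PVE 7 shown).
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Comment out the enterprise repo if no subscription key is installed.
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

apt update && apt full-upgrade
```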
  3. Io_uring + ext4 + data=journal causes filesystem errors

    Running into an interesting issue with the following pve-manager/7.1-6/4e61e21c (running kernel: 5.13.19-1-pve). This is a standard 3-node cluster with HP DL 380 Gen10s and Nimble Hybrid iSCSI storage. We use data=journal within our guests as server crashes can be harsh on our database...
  4. Best option for getting HA to work across two servers?

    Probably just a simple ZFS setup with replication. Really depends on your requirements for HA failover and performance.
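A setup like the one suggested above can be sketched with Proxmox's built-in storage replication tool, pvesr. The VM ID, job number, target node name, and schedule below are placeholders; both nodes need local ZFS storage.

```shell
# Replicate guest 100 to node "pve2" every 15 minutes (job ID "100-0").
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# List replication jobs and their current state.
pvesr status
```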
  5. Proxmox VE 7.1 released!

    Nice work, guys. Looking forward to the new scheduler! I was really hoping for ZFS replication to work with encrypted ZFS zpools!
  6. ZFS Replication and Encryption

    Will Proxmox ever be able to replicate encrypted ZFS datasets? This seems like a pretty big missing feature that isn't exactly well documented. I have been using syncoid, but it has its own set of issues that keep corrupting my ZFS datasets.
  7. [SOLVED] Replication & Migration encrypted zfs datasets

    I agree, this should be made very clear. I am using Syncoid successfully and it works pretty well.
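The Syncoid workflow referenced above can be sketched as a raw ZFS send, so encrypted blocks travel as-is and the encryption key never leaves the source. The dataset and host names are placeholders, not from the original posts.

```shell
# Replicate an encrypted dataset to a remote pool as a raw send.
# --sendoptions=w passes "-w" (raw) to the underlying "zfs send".
syncoid --sendoptions=w tank/vmdata root@backup-host:tank/vmdata
```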
  8. [SOLVED] Replication & Migration encrypted zfs datasets

    Bumping this one to see if there has been any progress. This is a pretty big caveat that should be added to the wiki/documentation. Is this an issue for pve-zsync as well?
  9. Any to test latest ceph version?

    This isn't a procedure issue, it's a bug in 15.2.8 from the looks of it. Currently working on compiling the pacific branch from source.
  10. Any to test latest ceph version?

    Both. The RBD can be primary or secondary and the RBD snapshots on the secondary are still bad. Snapshots on the primary always look good.
  11. Any to test latest ceph version?

    Yep, I can promote the image on the secondary and the image looks good. It's just the clones of the snapshots on the secondary that I run into issues with.
  12. Any to test latest ceph version?

    Hey guys, I have been doing considerable testing with Ceph Octopus on Proxmox and rbd-mirror snapshots to a remote cluster. Overall it's going pretty well, but I am running into some odd issues with clones on the secondary cluster...
  13. RBD-Mirror Starting Issues

    If I removed "--id %i" from the service config file, it starts and looks good. Snapshots are moving!
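The workaround above can be applied without editing the packaged unit file, by clearing and redefining ExecStart in a systemd override. The instance name below is a placeholder and assumes the stock ceph-rbd-mirror@ unit; only "--id %i" is dropped.

```shell
# Open an override file for the rbd-mirror instance unit.
systemctl edit ceph-rbd-mirror@rbd-mirror.admin
# In the override, clear ExecStart and redefine it without "--id %i", e.g.:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/rbd-mirror -f --cluster ${CLUSTER}

systemctl daemon-reload
systemctl restart ceph-rbd-mirror@rbd-mirror.admin
```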
  14. RBD-Mirror Starting Issues

    I am having some issues getting RBD-Mirror to start. Both clusters are brand new using Octopus. Jan 19 09:09:26 Bunkcephtest1 rbd-mirror[7085]: 2021-01-19T09:09:26.586-0500 7fcf6abd0700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2] Jan 19...
  15. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    I can confirm it has solved my issue as well. Appreciate the assistance!
  16. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    I have done a bit more digging. If I blacklist the be2iscsi module, my issues are resolved. So far this is working on all the Gen9s I have updated.
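The blacklist described above can be sketched as follows; the modprobe.d filename is arbitrary and chosen here for illustration.

```shell
# Prevent the be2iscsi driver from loading.
echo "blacklist be2iscsi" > /etc/modprobe.d/blacklist-be2iscsi.conf

# Rebuild the initramfs so the blacklist also applies during early boot.
update-initramfs -u
reboot
```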
  17. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    If 5.4.65 is your 2nd kernel, you would do the following: vi /etc/default/grub and change GRUB_DEFAULT=0 to GRUB_DEFAULT="1>1". Now run "update-grub", then "update-initramfs -u". Reboot and you should be good to go.
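The steps above can be sketched as a short script. "1>1" means: pick submenu index 1 (GRUB's "Advanced options" entry) and, inside it, entry index 1, i.e. the second listed kernel; verify the indices on your own menu before relying on them.

```shell
# Boot the second entry of the "Advanced options" submenu by default.
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT="1>1"/' /etc/default/grub

update-grub           # regenerate /boot/grub/grub.cfg
update-initramfs -u   # rebuild the current initramfs, as in the post
reboot
```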
  18. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    1. All the iLO and BIOS related updates are recent on my one test machine. 2. I did test with the 5.4.78 kernel from testing, and it was the same issue. 5.4.65 does work.
  19. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Updated 3 other HP DL 380 Gen9s that we have in house but aren't using HP MSA storage. Same exact issue. So far all my HP Gen10s aren't doing this. I have some vanilla Super Micro hardware that seems OK as well. Trying to find an HP Gen8 in house, but I don't think we have one anymore.
  20. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Well, I take that back. After masking "systemd-udev-settle.service" I then noticed that "ifupdown-pre.service" was also failing. I masked "ifupdown-pre.service" and now the server is booting with networking as expected. However, it is still throwing the kernel oops and it's booting very slowly...
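The masking workaround described above can be sketched as follows; masking points a unit at /dev/null so it can never be started, which is stronger than disabling.

```shell
# Mask both failing units so they cannot be started at boot.
systemctl mask systemd-udev-settle.service ifupdown-pre.service

# Confirm they now show as masked.
systemctl status systemd-udev-settle.service ifupdown-pre.service
```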