Search results

  1. Node randomly reboots

    Nothing in the output of ipmitool sel list? (See the ipmitool sketch after this list.)
  2. Proxmox Ceph cluster - mlag switches choice -

    Not specific to Ceph, but for one cluster I went the used Mellanox route, SN2410's. Dirt cheap on ebay, relatively, very low latency. Needed 2 sets of these, instead I bought 6. 2 hot spares for 2 sets is enough redundancy :) 48x25 & 8x100 gbit each.
  3. Which Shared Storage for 2 node cluster

    Never ever do consumer grade disks. You’ll be disappointed. Only question is when.
  4. [SOLVED] Recover VM disks from ZFS

    See https://forum.proxmox.com/threads/zfs-zvol-import-to-existing-vm.133824/ (and the zvol sketch after this list).
  5. Improve restore speed with Proxmox Backup Server

    Have you tried a higher MTU of like 9000? (See the MTU sketch after this list.)
  6. Upgrade path / 2 tier PBS

    Currently we're running a PBS server containing 10 Samsung PM983 7.68 TB NVME drives. But.. we're coming to the point where we need to upgrade. Performance is excellent, but limited storage space. I have no extra bays available for more drives. The only option would be to upgrade each drive to a...
  7. Remote sync slower than expected

    Sure. We found out one core in the ipsec endpoint (pfsense) on one side was running at 100% load and was limiting the transfer speed. After enabling MSS clamping (preventing fragmentation) and Asynchronous Cryptography (use multiple cores for multiple ipsec functions) transfer speed started to... (See the MSS clamping sketch after this list.)
  8. Remote sync slower than expected

    Little ashamed to say the issue was found and was not in PBS. The ipsec tunnel endpoints had some issues. Now that these are resolved we can completely fill the gbit connection.
  9. Remote sync slower than expected

    Sure, see attachment. Just under 1 TB.
  10. Remote sync slower than expected

    It's a pull. Latency is about 6 to 7 ms.
  11. Remote sync slower than expected

    We've been using PBS for over a year now, it meets all our needs, love it. However - somehow the offsite sync is way slower than expected. We can't really find any obvious bottlenecks. Benchmark for the source host:
    Time per request: 6515 microseconds
    TLS speed: 643.70 MB/s
    SHA256 speed...
    (See the benchmark sketch after this list.)
  12. Very flaky network with Intel X710

    It's been a while, but I think it was an option in the supermicro BIOS.
  13. Update fails complaining about grub-efi-amd64

    Thanks @chris. However, that'll trigger the removal of grub-pc, is that correct?
    root@pbs:~# apt install grub-efi-amd64
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following packages were automatically installed and are no longer...
    (See the grub sketch after this list.)
  14. Update fails complaining about grub-efi-amd64

    I just updated 2 PBS's to the latest versions, enterprise repo. Both machines fail with these lines as the last ones:
    update-initramfs: Generating /boot/initrd.img-6.8.4-2-pve
    W: No zstd in /usr/bin:/sbin:/bin, using gzip
    Running hook script 'zz-proxmox-boot'..
    Re-executing...
  15. Samsung PM983a Enterprise NVME: Failed to flush random seed file: Time out when using ZFS boot

    I don't recognize the error, but I use these drives in all my hosts without any issues. Maybe a firmware update would be worth giving a try?
  16. Recommendation for Datacenter grade switch for running Ceph

    Not sure if the switch will help you on the IOPS side, but if you want new.. Arista. For used I can highly recommend Mellanox. You can get SN2410's for around $1500 on ebay (if you're lucky) giving you 48x25 gbit plus 8x100 gbit, very low latency. We've been running these in production for...
  17. Reboot problem

    Looking at the logs I'd say this could have something to do with it:
    Jul 20 12:01:33 PVE kernel: pcieport 0000:00:1b.0: AER: Corrected error received: 0000:01:00.0
    Jul 20 12:01:33 PVE kernel: nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
    Jul 20...
    (See the AER sketch after this list.)
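The ipmitool sketch (result 1): the BMC's System Event Log is the first place to look after an unexplained reboot. These are standard ipmitool subcommands, run locally on the affected node:

    # dump the System Event Log; hardware faults around a reboot usually land here
    ipmitool sel list
    # extended listing with decoded sensor names and timestamps
    ipmitool sel elist
    # optional: clear the log afterwards for a clean baseline (destructive)
    ipmitool sel clear

An empty listing, as the reply hints, points away from a hardware event that the BMC would have recorded.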
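The zvol sketch (result 4): the linked thread covers attaching an existing ZFS zvol to a VM. A minimal outline, assuming a zvol named vm-100-disk-0 on a Proxmox storage called local-zfs (both names are examples):

    # list zvols to locate the recovered VM disk
    zfs list -t volume
    # attach the existing zvol to VM 100 as its first SCSI disk
    qm set 100 --scsi0 local-zfs:vm-100-disk-0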
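The MTU sketch (result 5): jumbo frames cut per-packet overhead on restore traffic, but only if every hop carries them. A minimal /etc/network/interfaces fragment, assuming a bridge vmbr0 on NIC eno1 (names and address are examples):

    iface eno1 inet manual
        mtu 9000

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000

Verify end to end before relying on it: ping -M do -s 8972 <peer> must succeed, since 8972 bytes of payload plus 28 bytes of ICMP/IP headers is exactly 9000.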
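The MSS clamping sketch (result 7): the fix described was made in the pfSense GUI; the generic Linux equivalent on a tunnel gateway is a single mangle rule that rewrites the MSS during the TCP handshake so packets inside the IPsec tunnel never need fragmenting:

    # clamp TCP MSS to the discovered path MTU on forwarded traffic
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu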
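The benchmark sketch (result 11): the quoted numbers (time per request, TLS speed, SHA256 speed) come from the built-in PBS client benchmark. The repository string below is an example:

    # run on the host doing the sync; measures TLS throughput to the datastore
    # plus local SHA256, compression and AES speeds
    proxmox-backup-client benchmark --repository root@pam@pbs.example.org:datastore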
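The grub sketch (results 13 and 14): on a UEFI machine that still carries the legacy grub-pc package, moving to grub-efi-amd64 is expected to remove grub-pc. A generic Debian sequence, assuming the ESP is mounted at /boot/efi; on hosts whose ESPs are managed by proxmox-boot-tool, refresh with that tool rather than calling grub-install directly:

    apt install grub-efi-amd64   # apt proposes removing grub-pc; normal on UEFI
    grub-install --target=x86_64-efi --efi-directory=/boot/efi
    update-grub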
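The AER sketch (result 17): corrected PCIe errors are recoverable, but a steady stream of them from an NVMe device is worth watching. The device address 0000:01:00.0 below is taken from the quoted log:

    # all AER messages from the current boot's kernel log
    journalctl -k -b | grep -i aer
    # link status and capabilities of the device reporting the errors
    lspci -vvv -s 01:00.0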