Recent content by dignus

  1. Migration of PBS to another server

    Root is not on ZFS. Target disks are smaller than source. But I can work with rsync around that. If I do a fresh install and physically move the datastore to the new machine, will all settings be restored magically?
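    Since the target disks are smaller, a block-level copy is out, but a file-level copy works. A hedged sketch of what that rsync could look like, assuming a default PBS layout with config in /etc/proxmox-backup and a datastore mounted at /mnt/datastore/backup (hostname and datastore path are placeholders for your setup):

    ```shell
    # Copy PBS configuration (datastores, users, jobs, ACLs) first; it is small.
    # -aAX preserves permissions, ACLs and extended attributes.
    rsync -aAXv /etc/proxmox-backup/ root@new-pbs:/etc/proxmox-backup/

    # Copy the datastore (chunk store + indexes). Re-runnable, so you can stop
    # the PBS services on the old host and do a final quick delta pass.
    rsync -aAXv --delete /mnt/datastore/backup/ root@new-pbs:/mnt/datastore/backup/
    ```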
  2. Migration of PBS to another server

    I currently have an 8-drive PBS server, but we're getting to the max of what it can take, so we need to move to a larger chassis. The data is on ZFS disks, so that part is taken care of. Two questions: - How do I migrate all other PBS settings/logs etc.? - How do I import the existing dataset...
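    For the ZFS part, the usual export/import dance applies. A hedged sketch, with the pool name "tank" standing in for the real one:

    ```shell
    # On the old server, after stopping the PBS services:
    zpool export tank

    # Move the disks to the new chassis, then on the new server:
    zpool import            # with no argument, lists pools available for import
    zpool import tank       # import by name (add -f if it wasn't exported cleanly)
    ```

    The non-ZFS state (datastore definitions, users, sync/prune jobs, ACLs) lives under /etc/proxmox-backup, and task logs under /var/log/proxmox-backup, so carrying those two directories over should cover the rest.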
  3. Recommendation for Datacenter grade switch for running Ceph

    Sure. But on the used market you will find that it might be more expensive than its bigger brother :) If I had the choice I'd go for SN2700's, a very attractive price point. Overkill? Yeah, for now, but you never know in the future. You can get breakout cables to 4 x 25.
  4. Recommendation for Datacenter grade switch for running Ceph

    Yeah, but not stacked as in Cisco or FS style. You configure both switches with an MLAG in between them. After that you create a port channel for each port on each switch. We have this running in production. If you go the used route, keep in mind you want to replace the disk.
  5. Classify Cluster Node Roles?

    You could use HA groups to do this.
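    For reference, HA groups can be managed from the CLI with ha-manager. A small sketch with made-up group and node names; a restricted group confines resources to its member nodes, which is what role separation usually needs:

    ```shell
    # Resources in this group may only run on node1/node2.
    ha-manager groupadd compute-nodes --nodes node1,node2 --restricted

    # Prefer node3 (higher priority), fall back to node4.
    ha-manager groupadd storage-nodes --nodes node3:2,node4:1

    # Pin an HA-managed VM to a group.
    ha-manager add vm:100 --group compute-nodes
    ```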
  6. Disabling root GUI access

    I don't know if you can disable it in the GUI, but whatever you do, keep the system root user alive. It's used for a lot of things in a cluster.
  7. NVMe over TCP support?

    iSCSI vs. NVMe/TCP is just a different protocol; the functionality is the same.
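    In practice the client side does look different: instead of an iSCSI initiator you attach the namespace with nvme-cli. A hedged example with placeholder address, port and NQN:

    ```shell
    modprobe nvme-tcp                                # make sure the transport module is loaded
    nvme discover -t tcp -a 192.168.1.50 -s 4420     # list subsystems exported by the target
    nvme connect -t tcp -a 192.168.1.50 -s 4420 \
                 -n nqn.2024-01.io.example:target1   # attach; shows up as /dev/nvmeXnY
    ```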
  8. Concern with Ceph IOPS despite having enterprise NVMe drives

    I see. Can't help you with that; I'm only saying we've had a great experience with Vitastor, its performance is phenomenal.
  9. Concern with Ceph IOPS despite having enterprise NVMe drives

    It's basically a rewrite of Ceph: smaller and way better when it comes to performance. A performance comparison can be found here: https://vitastor.io/en/docs/performance/comparison1.html. Somewhat older versions, but the difference is still as big. Been using it for a while now, very happy with it.
  10. Concern with Ceph IOPS despite having enterprise NVMe drives

    You could try Vitastor; it runs circles around Ceph when it comes to performance.
  11. Node randomly reboots

    Nothing in the output of ipmitool sel list?
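    A few related ipmitool invocations that help when chasing spontaneous reboots (they query the BMC, so they need ipmitool and IPMI access on the host):

    ```shell
    ipmitool sel list       # dump the System Event Log
    ipmitool sel elist      # extended listing with decoded sensor names
    ipmitool sel time get   # check the BMC clock so SEL timestamps line up with syslog
    ```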
  12. Proxmox Ceph cluster - mlag switches choice -

    Not specific to Ceph, but for one cluster I went the used Mellanox route, SN2410's. Dirt cheap on eBay, relatively speaking, and very low latency. I needed 2 sets of these; instead I bought 6, since 2 hot spares for 2 sets is enough redundancy :) 48x25 & 8x100 Gbit each.
  13. Which Shared Storage for 2 node cluster

    Never, ever use consumer-grade disks. You'll be disappointed. The only question is when.