Recent content by alexskysilk

  1. Concern with Ceph IOPS despite having enterprise NVMe drives

    I'm curious: is that using the same benchmark? IF you were to sustain 300k IOPS per drive, the only way you'd be able to realize that aggregate is by using 12 initiators, each talking directly to the drive in question. There is no free lunch. Marketing, meet reality. But I'm still confused. Didn't...
  2. Concern with Ceph IOPS despite having enterprise NVMe drives

    Sounds about right. So you're pretty close. Does this mean you're happy with the result? I'm a bit lost. Depending on how many PGs were engaged in the above test, you probably have room for improvement, which would be realized if the initiators were fully separate. Also sounds right :)...
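    To check how many PGs a pool actually spreads work across, something like the following should do (standard Ceph CLI; the pool name rbd is a placeholder for your own):

      ceph osd pool get rbd pg_num       # PG count configured for the pool
      ceph pg ls-by-pool rbd | head      # which PGs (and acting OSDs) the pool maps to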
  3. Best Storage Setup for Proxmox VE on a Dedicated Server? (Performance Question)

    There would probably be a negligible performance advantage for NVMe. Adding a drive and configuring RAID10 would result in a substantial performance increase while retaining the same capacity as a 3-drive RAID5 (four drives in RAID10 and three in RAID5 both yield two drives' worth of usable space), regardless of host bus technology. This is meaningless. Define the minimum criteria you...
  4. Concern with Ceph IOPS despite having enterprise NVMe drives

    You are running a test with a queue depth of one and no thread count (which means one). It doesn't matter how many drives you have; you can only realize a high fraction of a single disk's capability, since that is all you are testing. The question I would be asking is: do you have an idea what...
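    For illustration, the difference in fio terms looks roughly like this (a sketch; /dev/rbd0 is a placeholder for whatever device you are actually testing):

      # queue depth 1, single job: measures the latency of one in-flight I/O, not aggregate throughput
      fio --name=qd1 --filename=/dev/rbd0 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

      # deeper queues and more jobs: closer to what the drives can deliver in aggregate
      fio --name=parallel --filename=/dev/rbd0 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=8 --group_reporting --runtime=60 --time_based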
  5. Ceph - power outage and recovery

    Best I can tell, the PG WAS served by OSD 33 at some point, but isn't anymore. The remaining two shards don't agree, which prevents the subsystem from activating the PG. If you haven't done so already, try ceph pg repair 4.370 — it may take multiple tries, and it may not do anything anyway...
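    If the repair doesn't take, the PG's own view of its state is the next place to look (standard Ceph CLI; 4.370 is the PG from this thread):

      ceph health detail        # lists inconsistent/incomplete PGs and why
      ceph pg 4.370 query       # acting set, shard states, and what is blocking peering
      ceph pg repair 4.370      # ask the primary OSD to reconcile the shards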
  6. ISCSI configuration issue

    This is more than true IF you try to use a volume on multiple hosts SIMULTANEOUSLY, in which case it's not "good practice," it's downright forbidden. There are ways to accomplish this with central metadata management (e.g., Lustre), but not just attached to an unmanaged host. iSCSI LUNs are...
  7. Install Proxmox on Debian 12 Desktop

    I don't think having GDM conflicts with PVE; install it and report back :)
  8. Ceph - power outage and recovery

    Hate to break it to you, but you only have one pair of interfaces in a LAG; while I can't see what speed the underlying interfaces are connected at, the speed will not be different per VLAN. You are also commingling this same bond for all your disparate traffic types (Ceph private, Ceph public...
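    For contrast, a minimal /etc/network/interfaces sketch that gives Ceph its own links instead of stacking VLANs on one shared bond (interface names and addresses are placeholders, not your actual config):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          # VM and management traffic stays on the bond (via a bridge such as vmbr0)

      auto eno3
      iface eno3 inet static
          address 10.10.10.11/24
          # dedicated link for the Ceph public network

      auto eno4
      iface eno4 inet static
          address 10.10.20.11/24
          # dedicated link for the Ceph cluster (private) network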
  9. Ceph - power outage and recovery

    At this point it may be worthwhile to see how your network is set up. Do you want to post the content of /etc/network/interfaces for your nodes, and describe how they are physically interconnected?
  10. Ceph - power outage and recovery

    Looking at your layout... you are BRAVE. I wouldn't go to production with such a lopsided deployment, and without any room to self-heal. Brave is a... diplomatic word.
  11. Ceph librbd vs krbd

    Can you provide the actual test syntax? (Not sure if the result is in ms/q or q/s.)
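    For reference, the two paths are usually exercised along these lines (a sketch; the pool/image names are placeholders, and the first form needs fio built with rbd support):

      # librbd: fio talks to the cluster directly through the rbd ioengine
      fio --name=librbd --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg --direct=1 --rw=randread --bs=4k --iodepth=16 --runtime=60 --time_based

      # krbd: map the image through the kernel client, then test the block device
      rbd map rbd/testimg
      fio --name=krbd --filename=/dev/rbd0 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=16 --runtime=60 --time_based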
  12. PureStorage FlashArray + Proxmox VA + Multipath

    To add to that: I was initially very excited about the Veeam integration until I actually deployed it. It is very much alpha-level IMO, and I ended up rolling out a PBS instance in my environment despite having a Veeam store already in place (we're a Veeam partner). This may change with a 2.0...
  13. Ceph - power outage and recovery

    Fix that problem first. Why are you running out of memory?
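    A few standard places to look for the answer (generic Linux diagnostics, nothing Ceph-specific):

      free -h                                   # current memory and swap usage
      dmesg -T | grep -i oom                    # did the OOM killer fire, and on what?
      ps -eo pid,rss,comm --sort=-rss | head    # the biggest memory consumers right now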
  14. Unresponsive server due to root disk full (ZFS)

    look for processes in uninterruptible sleep state, eg ps -eo ppid,pid,user,stat,pcpu,comm,wchan:32 | grep " D" post the output if you need further troubleshooting assistance.