Search results

  1. ISCSI configuration issue

    This is more than true IF you try to use a volume on multiple hosts SIMULTANEOUSLY, in which case it's not "good practice", it's downright forbidden. There are ways to accomplish this with central metadata management (e.g., Lustre), but not when the volume is just attached to an unmanaged host. iSCSI LUNs are...
  2. Install Proxmox on Debian 12 Desktop

    I don't think having GDM conflicts with PVE; install it and report back :)
  3. Ceph - power outage and recovery

    Hate to break it to you, but you only have one pair of interfaces in a LAG; while I can't see what speed the underlying interfaces are connected at, they will not be different per VLAN. You are also commingling this same bond for all your disparate traffic types (Ceph private, Ceph public...
  4. Ceph - power outage and recovery

    At this point it may be worthwhile to see how your network is set up. Do you want to post the content of /etc/network/interfaces for your nodes, and describe how they are physically interconnected?
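
    A minimal sketch of the layout this is steering toward, assuming ifupdown2 syntax and two separate bonds; every interface name and address below is illustrative, not taken from the thread:

      # /etc/network/interfaces (illustrative)
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad

      # VM and management traffic on its own bridge/bond
      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          gateway 192.0.2.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0

      # Ceph public/private traffic kept on a separate bond
      auto bond1
      iface bond1 inet static
          address 10.10.10.10/24
          bond-slaves eno3 eno4
          bond-mode 802.3ad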
  5. Ceph - power outage and recovery

    Looking at your layout... you are BRAVE. I wouldn't go to production with such a lopsided deployment, and without any room to self-heal. Brave is a... diplomatic word.
  6. Ceph librbd vs krbd

    Can you provide the actual test syntax? (Not sure if the result is in ms/q or q/s.)
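
    For reference, fio can express both paths unambiguously; the pool, image, and job names below are placeholders, not the thread's actual test:

      # librbd (userspace) path, assuming an image "bench" in pool "rbd"
      fio --name=librbd-test --ioengine=rbd --pool=rbd --rbdname=bench \
          --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based

      # krbd (kernel) path, after mapping the image: rbd map rbd/bench
      fio --name=krbd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
          --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based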
  7. PureStorage FlashArray + Proxmox VA + Multipath

    To add to that: I was initially very excited about Veeam integration until I actually deployed it. It is very much alpha-level IMO, and I ended up spinning up a PBS instance in my environment despite having a Veeam store already in place (we're a Veeam partner). This may change with a 2.0...
  8. Ceph - power outage and recovery

    Fix that problem first. Why are you running out of memory?
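
    A quick way to answer that on any node (generic commands, nothing thread-specific):

      free -h                            # overall RAM and swap usage
      ps aux --sort=-%mem | head -n 15   # top memory consumers
      dmesg -T | grep -i oom             # did the OOM killer fire?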
  9. Unresponsive server due to root disk full (ZFS)

    Look for processes in uninterruptible sleep (D) state, e.g.:
    ps -eo ppid,pid,user,stat,pcpu,comm,wchan:32 | grep " D"
    Post the output if you need further troubleshooting assistance.
  10. Ceph - power outage and recovery

    1. Why is your mgr down?
    2. Post the content of ceph osd tree and ceph osd df (if you even can).
    3. It would be good to know the crush rules for all 14(!!) of your pools. (Why do you have 14 pools?!)
    Generally speaking, it looks like you don't have enough OSDs for all your PGs. What you provide will...
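
    The standard Ceph CLI for gathering all of the above:

      ceph -s                     # overall health, including mgr state
      ceph osd tree               # OSD topology by host
      ceph osd df                 # per-OSD utilization and PG counts
      ceph osd pool ls detail     # size/min_size and crush_rule per pool
      ceph osd crush rule dump    # the rules themselves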
  11. Installation bleibt stehen (Installation hangs)

    Troubleshooting USB boot is always the same:
    1. Use dd to directly copy the ISO to the USB device (e.g., Etcher on Windows).
    2. If it fails to fully boot, use a different USB device.
    3. If it fails to fully boot, use a different USB port on the host.
    4. If it fails to fully boot, it's PROBABLY the host...
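
    Step 1 on a Linux machine looks roughly like this; the ISO name and target device are placeholders, so verify the device with lsblk first:

      lsblk                                   # identify the USB stick, e.g. /dev/sdX
      dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync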
  12. Remove BTRFS Mount

    For more explicit assistance, post the output of:
    cat /etc/mtab
    lsblk
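
    For context, removing a stray mount usually comes down to something like this; the mountpoint is a placeholder:

      umount /mnt/mybtrfs              # unmount it (placeholder path)
      grep btrfs /etc/fstab            # find the line that re-mounts it at boot
      # then delete that line from /etc/fstab so it stays gone after reboot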
  13. Installation bleibt stehen (Installation hangs)

    Don't use Rufus to burn PVE ISOs. Use Etcher.
  14. PVE 8.3.5 Clustering and HA

    Generally speaking, an even number of nodes doesn't pose a problem; it's just that with 4 nodes your minimum for quorum becomes three nodes instead of two, since quorum requires floor(n/2)+1 votes.
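
    The arithmetic spelled out (shell integer division already floors):

      for n in 2 3 4 5; do echo "$n nodes -> quorum $((n/2 + 1))"; done
      # 2 nodes -> quorum 2
      # 3 nodes -> quorum 2
      # 4 nodes -> quorum 3
      # 5 nodes -> quorum 3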
  15. NVMe PCIe 5 benchmarking

    Thanks for the detailed benchmark! As for your question, it seems to me that there are only two sane ways to utilize the storage: a ZFS striped mirror (or two separate mirrors), or an mdadm striped mirror (or two separate mirrors). mdadm+lvm will be faster; that's a known fact. ZFS has more features (inline...
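
    Both layouts sketched as commands, assuming four NVMe drives with placeholder device names:

      # ZFS striped mirror (RAID10 equivalent)
      zpool create -o ashift=12 tank \
          mirror /dev/nvme0n1 /dev/nvme1n1 \
          mirror /dev/nvme2n1 /dev/nvme3n1

      # mdadm striped mirror (RAID10), with LVM layered on top
      mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
      pvcreate /dev/md0
      vgcreate fastvg /dev/md0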
  16. Ceph placement group remapping

    It's probably because the crush rule governing it cannot be satisfied.
  17. mst start returns 'Module mst_pci not found in directory /lib/modules/6.8.12-9-pve'

    Installing OFED on Debian is not done using apt directly. Once you decompress the tarball, there is an installation script that has to run. It will prompt you to install its prerequisite packages before it will complete installation. Also, in some versions it will conflict with the proxmox-ve...
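
    In rough strokes, assuming a current NVIDIA/Mellanox OFED bundle (the exact tarball name varies by version):

      tar xzf MLNX_OFED_LINUX-<version>-debian12.x-x86_64.tgz
      cd MLNX_OFED_LINUX-<version>-debian12.x-x86_64
      ./mlnxofedinstall             # the installer script; prompts for prerequisites
      /etc/init.d/openibd restart   # reload the stack, then retry mst start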
  18. Server no longer booting

    Sounds like the problem is your computer, not the storage.
  19. Ceph placement group remapping

    Your config is unworkable. While you didn't provide your actual crush rules, I can already see they can never be satisfied. Consider: you have 3 nodes.
    node pve2: 15.25TB HDD, 1.83TB SSD
    node pve3: 7.27TB HDD, 0.7TB SSD
    node pve4: 0.5TB HDD, 0.9TB SSD
    For your replication profile, even assuming you...
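
    To make the capacity problem concrete, assuming the usual size=3 pool with a host failure domain (the thread's exact rules aren't shown): each object needs one replica on every node, so usable capacity is capped by the smallest host:

      # size=3, failure domain = host, 3 hosts -> one replica per host
      # usable HDD capacity ~ min(15.25, 7.27, 0.5) = 0.5 TB
      # usable SSD capacity ~ min(1.83, 0.7, 0.9)  = 0.7 TB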