Recent content by Nemesiz

  1. Ceph freeze when a node reboots on Proxmox cluster

    During maintenance I set the noout, norebalance and norecover flags before shutting down an OSD/server. That stops Ceph from moving data around to the other OSDs. Some Ceph talks have mentioned that a single HDD can impact the whole cluster even when SMART shows no evidence of the HDD's coming death. So you must keep track of disk...
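The flag-setting routine described above can be sketched like this (it assumes a working admin keyring on the node; these are standard Ceph commands, but run them only when you understand the cluster state):

```shell
# Before shutting down the OSD/server:
ceph osd set noout        # don't mark stopped OSDs "out", so no data migration starts
ceph osd set norebalance  # don't rebalance placement groups
ceph osd set norecover    # don't start recovery I/O
# ... reboot or service the node ...
# After the node and its OSDs are back up, clear the flags:
ceph osd unset noout
ceph osd unset norebalance
ceph osd unset norecover
```

While the flags are set, `ceph -s` will show a HEALTH_WARN mentioning them, which is expected.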
  2. Legacy-Boot with ZFS-Root on Supermicro X8DT3 (Proxmox VE 9.0, no UEFI)

    The Proxmox installer is not very rich in options. Try installing Debian the way you want and then converting it to Proxmox.
  3. ZFS pool won’t import after switching from /dev/sdx to /dev/disk/by-id – mixed vdev paths

    Hi, have you tried running 'zpool import -d /dev/disk/by-id' to see what ZFS sees?
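The suggested check, sketched out (the pool name "tank" is an assumption for illustration):

```shell
# Scan only the /dev/disk/by-id links and list pools that could be imported,
# without actually importing anything:
zpool import -d /dev/disk/by-id
# If the pool shows up with all vdevs present, import it using those stable paths:
zpool import -d /dev/disk/by-id tank
```

Importing with `-d /dev/disk/by-id` also rewrites the cached vdev paths, which fixes the mixed /dev/sdX vs by-id situation going forward.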
  4. Ceph - feasible for Clustered MSSQL?

    You have to ask what Ceph will solve for you. Scalable, failure ... If you decide to use Ceph, then from my short experience and other people's suggestions: 1. Get as fast a network as possible; network latency plays a big role. 2. Get enterprise-grade SSD/NVMe to survive the load. 3. More CPU cores are not always...
  5. [SOLVED] Help! Ceph access totally broken

    What you need to do is create the missing config files. To get all keys, use this: # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring auth ls 1. Create /etc/pve/ceph.conf and put the needed info in it. 2. Link the file: # ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf 3. Restore keyrings...
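A rough sketch of these recovery steps (the monitor name "pvecloud01" comes from the thread; adjust it to your node):

```shell
# Dump every entity's key straight from the monitor's own keyring,
# authenticating as the mon itself:
ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring auth ls
# Recreate the cluster config under /etc/pve, then restore the usual Proxmox
# layout, where /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf:
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
# Finally, put the client.admin key printed by "auth ls" back into
# /etc/pve/priv/ceph.client.admin.keyring so the admin tools work again.
```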
  6. [SOLVED] Help! Ceph access totally broken

    Does this give a good result? # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring -s
  7. [SOLVED] Help! Ceph access totally broken

    Go to store.db. You will find .sst files. Copy one to your PC, open it with Notepad++ or another editor, and search for "key =". It should contain your admin key.
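Since .sst files are binary, the same search can be done on the command line with strings(1) and grep. Demonstrated here on a mock file (the real input would be the .sst files under the monitor's store.db directory; the key below is a fake placeholder):

```shell
# Build a mock .sst: printable text embedded in binary junk, as in a real
# RocksDB file. The key value here is made up for the demo.
printf 'binary\0junk\0[client.admin]\nkey = AQDfakekeyfakekey==\0more\0junk' > mock.sst
# strings extracts the printable runs; grep keeps only the key line:
strings mock.sst | grep 'key = '
```

On a real monitor store you would run `strings store.db/*.sst | grep 'key = '` instead of creating a mock file.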
  8. [SOLVED] Help! Ceph access totally broken

    Is your /var/lib/ceph/mon/* empty?
  9. [SOLVED] Help! Ceph access totally broken

    If your OSD is still running, or at least not unmounted, then you can get the Ceph cluster ID from /var/lib/ceph/osd/ceph-X/ceph_fsid
  10. Proxmox in fault-tolerant

    No. But you can create your own tool that checks whether a server is running and sends a startup command to Proxmox if it is not.
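A hypothetical sketch of such a tool (the host names "node1"/"node2" and VM id 100 are made up; `qm start` is the standard Proxmox CLI for starting a VM):

```shell
# If node1 stops answering pings, ask a surviving node to start the VM.
# This is a bare-bones illustration, not a substitute for Proxmox HA.
if ! ping -c 3 -W 2 node1 >/dev/null 2>&1; then
  ssh root@node2 qm start 100
fi
```

A real tool would need fencing logic so the VM cannot end up running on two nodes at once; that is exactly what the built-in HA stack handles for you.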
  11. Ceph OSD using wrong device identifier

    If you want to remove the temporary disk, you have to: 1. shut down the OSD 2. unmount all related mount points (like /var/lib/ceph/osd/ceph-X) 3. release whatever is holding sdc (encryption / LVM) 4. unplug the disk. This way the disk can get the same name as before, and an LVM scan can import it and...
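The four steps above, sketched as commands. The OSD id 12, the VG name "ceph-vg" and the dm-crypt mapping name "osd-crypt" are assumptions for illustration; sdc comes from the thread:

```shell
systemctl stop ceph-osd@12             # 1. shut down the OSD
umount /var/lib/ceph/osd/ceph-12       # 2. unmount its mount point
vgchange -an ceph-vg                   # 3a. deactivate the LVM volume group on sdc
cryptsetup close osd-crypt             # 3b. close the dm-crypt mapping, if one exists
echo 1 > /sys/block/sdc/device/delete  # 4. detach sdc from the kernel before unplugging
```

Check `lsblk` and `dmsetup ls` first to see exactly which layers are sitting on top of the disk.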
  12. Ceph OSD using wrong device identifier

    1. Do you use LUKS? 2. What does dmesg report in this situation? Does the drive get the same name sdX or a different sdY? 3. What LVM messages do you see in dmesg?
  13. ZFS data slowly getting corrupt

    Or you have to use 'zfs set copies=2 pool', or more.
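For context, setting the property looks like this (the pool name "pool" is taken from the quoted command):

```shell
# Store two copies of every data block. Note this only applies to data written
# after the property is set, and it roughly doubles space usage:
zfs set copies=2 pool
zfs get copies pool   # verify the property took effect
```

copies=2 protects against localized bad sectors, but it is no substitute for mirror/raidz redundancy if a whole disk dies.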
  14. ZFS data slowly getting corrupt

    Lexxar, how many disks is your ZFS pool made from?
  15. zfs for storage and 4k rand write/read values are very low

    If you want to use 8 disks in 2 groups using raidz2, it will look something like this:

    zfs_pool
      raidz2-0
        disk-1 disk-2 disk-3 disk-4
      raidz2-1
        disk-5 disk-6 disk-7 disk-8

    In each raidz2 group, 2 disks can die. In very...
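Creating that two-vdev layout would look roughly like this (the pool name "zfs_pool" and the disk-1 … disk-8 names come from the post; in practice you would use /dev/disk/by-id paths):

```shell
# Two raidz2 vdevs of four disks each; ZFS stripes writes across the vdevs:
zpool create zfs_pool \
  raidz2 disk-1 disk-2 disk-3 disk-4 \
  raidz2 disk-5 disk-6 disk-7 disk-8
zpool status zfs_pool   # shows raidz2-0 and raidz2-1 as in the tree above
```

For 4k random I/O, note that each raidz2 vdev delivers roughly the IOPS of a single disk, which is why mirrors usually benchmark much better for that workload.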