Recent content by Nemesiz

  1. ZFS pool won’t import after switching from /dev/sdx to /dev/disk/by-id – mixed vdev paths

     Hi, have you tried running 'zpool import -d /dev/disk/by-id' to see what ZFS sees?
  2. Ceph - feasible for Clustered MSSQL?

     You have to ask what Ceph will solve for you. Scalability, failure .... If you decide to use Ceph, from my short experience and other people's suggestions: 1. Get as fast a network as possible; network latency plays a big role. 2. Get enterprise-grade SSD/NVMe to survive the load. 3. More CPU cores is not always...
  3. [SOLVED] Help! Ceph access totally broken

     What you need to do is recreate the missing config files. To list all keys, use this: # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring auth ls 1. Create /etc/pve/ceph.conf and put the needed info in it 2. Link the file: # ln -s /etc/ceph/ceph.conf /etc/pve/ceph.conf 3. Restore keyrings...
  4. [SOLVED] Help! Ceph access totally broken

     Does it give a good result? # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring -s
  5. [SOLVED] Help! Ceph access totally broken

     Go to store.db. You will find .sst files. Copy one to your PC, open it with Notepad++ or another editor, and search for "key =". It should be your admin key.
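The manual recovery above can also be scripted instead of eyeballing the binary in an editor. A minimal sketch, assuming the key is stored as the usual printable "key = AQ..." base64 text inside the .sst file (the path in the comment is hypothetical):

```python
# Hypothetical sketch: scan a mon store.db .sst file for the printable
# "key = " marker. Ceph keys are base64 strings that start with "AQ",
# so a regex over the raw bytes is enough; no RocksDB tooling needed.
import re

def find_keys(path: str) -> list[str]:
    data = open(path, "rb").read()
    return [m.decode() for m in re.findall(rb"key = (AQ[A-Za-z0-9+/=]+)", data)]

# e.g. find_keys("/var/lib/ceph/mon/ceph-pvecloud01/store.db/000123.sst")
```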
  6. [SOLVED] Help! Ceph access totally broken

     Is your /var/lib/ceph/mon/* empty?
  7. [SOLVED] Help! Ceph access totally broken

     If your OSD is still running, or at least not unmounted, you can get the Ceph cluster ID from /var/lib/ceph/osd/ceph-X/ceph_fsid
  8. Proxmox in fault-tolerant

     No. But you can create your own tool that checks whether the server is running and sends a startup command to Proxmox if it is not.
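The idea above can be sketched as a small watchdog. Everything here is an assumption for illustration: the host, the port, and how the start command is sent (it is not a real Proxmox API call; you would plug in your own, e.g. an SSH 'qm start <vmid>'):

```python
# Hypothetical watchdog sketch: poll a server with a TCP connect and invoke
# a user-supplied start command when it stops answering.
import socket

def is_host_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watchdog_step(host: str, port: int, start_cmd) -> bool:
    """One poll cycle: call start_cmd() when the host looks down."""
    up = is_host_up(host, port)
    if not up:
        start_cmd()  # e.g. Proxmox API request or SSH command of your choosing
    return up
```

You would run watchdog_step in a loop (cron, systemd timer) against the service port of the guest you want restarted.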
  9. Ceph OSD using wrong device identifier

     If you want to remove the temporary disk you have to: 1. shut down the OSD 2. unmount all related mount points (like /var/lib/ceph/osd/osd-X) 3. release whatever is holding sdc (encryption / LVM) 4. unplug the disk This way the disk could get the same name as before and the LVM scan could import it and...
  10. Ceph OSD using wrong device identifier

     1. Do you use LUKS? 2. What does dmesg report in this situation? Does the drive get the same name sdX, or another sdY? 3. What does dmesg report for LVM?
  11. ZFS data slowly getting corrupt

     Or you have to use 'zfs set copies=2 pool' or more.
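The space cost of copies=N is easy to state: every block is written N times, so usable capacity divides by N. A sketch of the arithmetic (raidz/mirror overhead left aside):

```python
# Sketch of what 'zfs set copies=2 pool' costs: N copies of every block,
# so a dataset can hold at most pool_bytes / N of unique data.
def usable_with_copies(pool_bytes: int, copies: int) -> int:
    return pool_bytes // copies

print(usable_with_copies(4_000_000_000_000, 2))  # 4 TB pool -> 2000000000000
```

Note that extra copies help against bad sectors and bit rot on a surviving disk; they do not protect against losing a whole disk in a single-disk pool.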
  12. ZFS data slowly getting corrupt

     Lexxar, the ZFS pool is made from how many disks?
  13. zfs for storage and 4k rand write/read values are very low

     If you want to use 8 disks in 2 groups using raidz2, it will be something like this: zfs_pool raidz2-0 disk-1 disk-2 disk-3 disk-4 raidz2-1 disk-5 disk-6 disk-7 disk-8 In each raidz2 group 2 disks can die. In very...
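The layout above can be sketched as a quick sanity check; the group size and disk names match the example (this is illustration, not zpool syntax):

```python
# Sketch of the 2 x raidz2 layout above: 8 disks split into two 4-disk
# raidz2 vdevs. Each vdev survives 2 failed disks; with only 2 data disks
# per 4-disk raidz2 vdev, half the raw space goes to parity.
def raidz_layout(disks, group_size, parity):
    groups = [disks[i:i + group_size] for i in range(0, len(disks), group_size)]
    data_disks = sum(len(g) - parity for g in groups)
    return groups, data_disks

disks = [f"disk-{i}" for i in range(1, 9)]
groups, data_disks = raidz_layout(disks, 4, 2)
print(len(groups), data_disks)  # 2 vdevs, 4 data disks' worth of space
```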
  14. zfs for storage and 4k rand write/read values are very low

     If you want to use it like a regular RAID10, e.g. raidz2 + raidz2, you have to count IOPS as the first raidz2 group's IOPS + the second raidz2 group's IOPS.
  15. zfs for storage and 4k rand write/read values are very low

     Keep in mind your data in raidzX will be split and multiplied. For example, in raidz2 with 6 drives (2 parity) IOPS will count as 4 x the slowest disk's IOPS. But keep in mind ZFS is a COW system; it doesn't do random writes from the software's perspective (fio, SQL, ...).
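The counting rule in the last two posts can be sketched as arithmetic. Note it is the author's rule of thumb; the common sizing advice instead counts roughly one disk's IOPS per raidz vdev for small random reads, so treat these numbers as an optimistic estimate:

```python
# Sketch of the IOPS rule above: each raidz2 vdev contributes
# (disks - parity) x slowest-disk IOPS, and vdevs add up like RAID10 legs.
def vdev_iops(disks: int, parity: int, slowest_disk_iops: int) -> int:
    return (disks - parity) * slowest_disk_iops

def pool_iops(vdevs, slowest_disk_iops: int) -> int:
    return sum(vdev_iops(d, p, slowest_disk_iops) for d, p in vdevs)

# two 6-disk raidz2 vdevs (2 parity each) of 150-IOPS spinners
print(pool_iops([(6, 2), (6, 2)], 150))  # -> 1200
```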