Recent content by Nemesiz

  1. [SOLVED] Help! Ceph access totally broken

    What you need to do is recreate the missing config files. To list all keys, use this: # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring auth ls
    1. Create /etc/pve/ceph.conf and put the needed info in it.
    2. Link it into place: # ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
    3. Restore keyrings...
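
    A minimal sketch of what /etc/pve/ceph.conf could contain; the fsid, monitor address, and network below are placeholders, not values from this thread:
    [global]
         fsid = <cluster fsid, e.g. read from ceph_fsid on an OSD>
         mon_host = 192.168.1.10
         public_network = 192.168.1.0/24
         auth_cluster_required = cephx
         auth_service_required = cephx
         auth_client_required = cephx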
  2. [SOLVED] Help! Ceph access totally broken

    Does it give a good result? # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring -s
  3. [SOLVED] Help! Ceph access totally broken

    Go to store.db. You will find .sst files there. Copy one to your PC, open it with Notepad++ or another editor, and search for "key =". It should be your admin key.
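
    If you prefer to search on the node itself, something like this should work too (the path matches the monitor directory mentioned earlier in this thread; adjust it to your setup):
    # strings /var/lib/ceph/mon/ceph-pvecloud01/store.db/*.sst | grep "key ="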
  4. [SOLVED] Help! Ceph access totally broken

    Is your /var/lib/ceph/mon/* empty?
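
    A quick way to check, using the monitor name from this thread as an example:
    # ls -la /var/lib/ceph/mon/ceph-pvecloud01/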
  5. [SOLVED] Help! Ceph access totally broken

    If your OSD is still running, or at least not unmounted, you can get the Ceph cluster ID from /var/lib/ceph/osd/ceph-X/ceph_fsid
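
    For example, with a hypothetical OSD id 0:
    # cat /var/lib/ceph/osd/ceph-0/ceph_fsid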
  6. Proxmox in fault-tolerant

    No. But you can create your own tool that checks whether a server is running and sends a startup command to Proxmox if it is not.
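
    A rough sketch of such a check, assuming the script runs on the Proxmox node itself and using a placeholder guest IP and VM id (none of these values come from the thread):
    #!/bin/sh
    # Hypothetical watchdog: start VM 100 if its guest IP stops answering ping.
    GUEST_IP=192.168.1.100
    VMID=100
    if ! ping -c 3 -W 2 "$GUEST_IP" > /dev/null 2>&1; then
        qm start "$VMID"
    fi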
  7. Ceph OSD using wrong device identifier

    If you want to remove the temporary disk you have to:
    1. Shut down the OSD.
    2. Unmount all related mount points (like /var/lib/ceph/osd/ceph-X).
    3. Release whatever is holding sdc (encryption / LVM).
    4. Unplug the disk.
    This way the disk can get the same name as before, and an LVM scan can import it and...
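
    Sketched as commands, with a placeholder OSD id, VG name, and dm-crypt mapping name:
    # systemctl stop ceph-osd@3
    # umount /var/lib/ceph/osd/ceph-3
    # vgchange -an <vg-holding-sdc>
    # cryptsetup close <dm-crypt-mapping>    (only if the disk is encrypted)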
  8. Ceph OSD using wrong device identifier

    1. Do you use LUKS?
    2. What does dmesg report in this situation? Does the drive get the same name, sdX, or another one, sdY?
    3. What do you see from LVM in dmesg?
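
    For example, to gather that information (output will differ per system):
    # dmesg | grep -iE 'sd[a-z]|luks|device-mapper'
    # lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT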
  9. ZFS data slowly getting corrupt

    Or you have to use 'zfs set copies=2 pool' (or a higher copies value).
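
    For example, on a hypothetical dataset; note that copies only applies to data written after the property is set:
    # zfs set copies=2 tank/data
    # zfs get copies tank/data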
  10. ZFS data slowly getting corrupt

    Lexxar, how many disks is your ZFS pool made from?
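
    The vdev layout (and therefore the disk count) can be read from:
    # zpool status <poolname>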
  11. zfs for storage and 4k rand write/read values are very low

    If you want to use 8 disks in 2 groups using raidz2, it will be something like this:
      zfs_pool
        raidz2-0: disk-1 disk-2 disk-3 disk-4
        raidz2-1: disk-5 disk-6 disk-7 disk-8
    In each raidz2 group 2 disks can die. In very...
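
    A hedged example of how such a layout could be created; the device names are placeholders:
    # zpool create zfs_pool raidz2 disk-1 disk-2 disk-3 disk-4 raidz2 disk-5 disk-6 disk-7 disk-8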
  12. zfs for storage and 4k rand write/read values are very low

    If you want to use it like a regular RAID10, for example raidz2 + raidz2, you have to count IOPS as the first raidz2 group's IOPS + the second raidz2 group's IOPS.
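
    A worked example following that counting, with assumed numbers: if every disk does about 100 random IOPS, a 4-disk raidz2 group (2 data disks) counts as roughly 2 x 100 = 200 IOPS, so two such groups striped together count as roughly 200 + 200 = 400 IOPS.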
  13. zfs for storage and 4k rand write/read values are very low

    Keep in mind your data in raidzX will be split and multiplied. For example, in raidz2 with 6 drives (2 parity), IOPS will count as 4 x the slowest disk's IOPS. But keep in mind ZFS is a COW system: writes that are random from the software's perspective (fio, SQL, ...) are not written randomly on disk.
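
    By that counting, with hypothetical 150-IOPS drives, a 6-drive raidz2 would be treated as 4 x 150 = 600 IOPS.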
  14. [SOLVED] PVE host and PVE guest on ZFS, rpool fails to import

    Rename the pool on either the host or the VM, and adjust the boot settings to match.
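
    A sketch of the rename, assuming it is done from a live/rescue environment and using a hypothetical new pool name; the dataset path shown is the Proxmox default and may differ on your system:
    # zpool import -f -R /mnt rpool rpool-guest
    # zpool export rpool-guest
    Then point the bootloader entry at the new name, e.g. root=ZFS=rpool-guest/ROOT/pve-1.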
  15. zfs for storage and 4k rand write/read values are very low

    Hi. This is my 2 cents:
    1. Simple file systems don't require additional work.
    2. A SLOG helps only with sync writes and can reduce wear on your primary NVMe (without a SLOG and with sync=standard you get a double write to the same disk), but for performance I don't see any improvement.
    3...
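
    For example, to check the sync setting and to attach a separate log device (pool and device names are placeholders):
    # zfs get sync tank
    # zpool add tank log /dev/disk/by-id/nvme-slog-example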