Search results

  1. Ceph - feasible for Clustered MSSQL?

    You have to ask what Ceph will solve for you. Scalable, failure .... If you decide to use Ceph, then in my short experience and from other people's suggestions: 1. Get as fast a network as possible; network latency plays a big role. 2. Get enterprise-grade SSD/NVMe to survive the load. 3. More CPU cores is not always...
  2. [SOLVED] Help! Ceph access totally broken

    What you need to do is create the missing config files. To list all keys, use this: # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring auth ls 1. Create /etc/pve/ceph.conf and put the needed info in it 2. Link the file: # ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf 3. Restore the keyrings...
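
The quoted steps can be sketched end to end. This is a hedged illustration in a throwaway directory, not the real recovery: on a Proxmox node the config lives in /etc/pve (the cluster filesystem) with /etc/ceph/ceph.conf symlinked to it, and the fsid and monitor address below are placeholders.

```shell
# Safe-to-run sketch of the resulting file layout in a scratch directory;
# on a real node the paths would be /etc/pve and /etc/ceph themselves.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/ceph" "$ROOT/etc/pve"

# 1. Recreate a minimal ceph.conf under the cluster-filesystem path
#    (fsid and mon_host are placeholders, not real values)
cat > "$ROOT/etc/pve/ceph.conf" <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_host = 192.168.0.10
EOF

# 2. Point the standard Ceph path at it
ln -s "$ROOT/etc/pve/ceph.conf" "$ROOT/etc/ceph/ceph.conf"

# 3. Keyrings would then be restored under /etc/ceph and /var/lib/ceph
grep mon_host "$ROOT/etc/ceph/ceph.conf"
```

After that, the `ceph auth ls` call quoted above (authenticating as mon. with the monitor's own keyring) recovers the keys to restore.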
  3. [SOLVED] Help! Ceph access totally broken

    Does this give a good result? # ceph -n mon. --keyring /var/lib/ceph/mon/ceph-pvecloud01/keyring -s
  4. [SOLVED] Help! Ceph access totally broken

    Go to store.db; you will find the .sst files. Copy one to your PC, open it with Notepad++ or another editor, and search for "key =". That should be your admin key.
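
Because real .sst files are binary RocksDB tables, extracting printable runs on the command line beats opening them raw. A fabricated, safe-to-run demonstration (the file content and key below are made up, not a real Ceph key):

```shell
# Build a fake "binary" file with a key-like string buried between NUL
# bytes, then split on NULs and grep -- the same idea as the editor search.
SST=$(mktemp)
printf 'binary\0junk\0key = AQDexampleNOTAREALKEY==\0trailing' > "$SST"
tr '\0' '\n' < "$SST" | grep 'key ='
```

On a real monitor the input would be the copied store.db .sst file; strings(1) piped to grep works equally well.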
  5. [SOLVED] Help! Ceph access totally broken

    Is your /var/lib/ceph/mon/* empty?
  6. [SOLVED] Help! Ceph access totally broken

    If your OSD is still running, or not yet unmounted, you can get the Ceph cluster ID from /var/lib/ceph/osd/ceph-X/ceph_fsid
  7. Proxmox in fault-tolerant

    No. But you can create your own tool that checks whether the server is running and sends a startup command to Proxmox if it is not.
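
Such a tool can be as small as a probe-and-start helper run from cron. A hedged sketch; the ping target, VMID, and the qm invocation in the comment are placeholder assumptions (qm exists only on a Proxmox node):

```shell
# Generic watchdog helper: run a probe command, and if it fails, run a
# start command. Both are passed as strings and executed via sh -c.
ensure_running() {
    probe=$1   # command that succeeds while the guest is up
    start=$2   # command that starts the guest
    if ! sh -c "$probe" > /dev/null 2>&1; then
        sh -c "$start"
    fi
}

# On a Proxmox node this might be wired into cron, e.g. (placeholders):
#   ensure_running 'ping -c 3 -W 2 192.168.0.50' 'qm start 100'
```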
  8. Ceph OSD using wrong device identifier

    If you want to remove the temporary disk you have to: 1. shut down the OSD 2. unmount all related mount points (like /var/lib/ceph/osd/osd-X) 3. release whatever is holding sdc (encryption / LVM) 4. unplug the disk This way the disk can get the same name as before, and an LVM scan can import it and...
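
The four steps can be sketched as a dry run; the OSD id, device name, and the LUKS/LVM names are placeholders. RUN=echo only prints each command; clear it to actually execute them on a real node.

```shell
RUN=echo
OSD=12    # placeholder OSD id
DEV=sdc   # placeholder device

$RUN systemctl stop "ceph-osd@$OSD"                  # 1. shut down the OSD
$RUN umount "/var/lib/ceph/osd/ceph-$OSD"            # 2. drop its mount point
$RUN cryptsetup close "osd-$OSD-crypt"               # 3a. release LUKS (if used)
$RUN vgchange -an "ceph-vg-$OSD"                     # 3b. deactivate LVM (if used)
$RUN sh -c "echo 1 > /sys/block/$DEV/device/delete"  # 4. detach, then unplug
```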
  9. Ceph OSD using wrong device identifier

    1. Do you use LUKS? 2. What does dmesg report in this situation? Does the drive get the same name sdX, or another sdY? 3. What do you see for LVM in dmesg?
  10. ZFS data slowly getting corrupt

    Or you have to use 'zfs set copies=2 pool', or more copies.
  11. ZFS data slowly getting corrupt

    Lexxar, how many disks is the ZFS pool made from?
  12. zfs for storage and 4k rand write/read values are very low

    If you want to use 8 disks in 2 groups using raidz2, it will be something like this: zfs_pool raidz2-0 disk-1 disk-2 disk-3 disk-4 raidz2-1 disk-5 disk-6 disk-7 disk-8 In each raidz2 group, 2 disks can die. In very...
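
Creating that layout is a single zpool create with two raidz2 groups. A dry-run sketch (RUN=echo just prints the command, since zpool needs root and a loaded ZFS module); pool and disk names are placeholders:

```shell
RUN=echo   # dry run; on a real system with 8 spare disks, set RUN=
$RUN zpool create zfs_pool \
    raidz2 disk-1 disk-2 disk-3 disk-4 \
    raidz2 disk-5 disk-6 disk-7 disk-8
```

ZFS stripes across the two raidz2 vdevs automatically, which is what gives the RAID10-like behaviour discussed below.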
  13. zfs for storage and 4k rand write/read values are very low

    If you want to use it like a regular RAID10, for example raidz2 + raidz2, you have to count IOPS as the first raidz2 group's IOPS plus the second raidz2 group's.
  14. zfs for storage and 4k rand write/read values are very low

    Keep in mind that your data in raidzX will be split and multiplied. For example, in raidz2 with 6 drives (2 parity), IOPS will count as 4 x the slowest disk's IOPS. But keep in mind ZFS is a COW system; it doesn't have random writes from the software's perspective (fio, SQL ....).
  15. [SOLVED] PVE host and PVE guest on ZFS, rpool fails to import

    Rename the pool on either the host or the VM, then adjust the boot settings.
  16. zfs for storage and 4k rand write/read values are very low

    Hi. This is my 2 cents: 1. Simpler file systems don't require additional work. 2. A SLOG helps only for sync writes, and can reduce wear on your primary NVMe (without a SLOG and with sync=standard -> double write to the same disk), but for performance I don't see any improvement. 3...
  17. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    Look at what software is using those files. iocage - the FreeBSD jail manager. Use that tool to recreate it.
  18. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    Try # zpool clear DISK Those <0x0> entries may indicate a problem from the past: if you delete a corrupted file, zpool still keeps its information. Sometimes the whole catalog can be corrupted. .system/services and iocage - can you recreate them?
  19. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    If it is possible: 1. Run the ZFS pool read-only. 2. Copy what you can. 3. To copy corrupted files (if needed), try zfs_send_corrupt_data. Problem: mostly hardware. Can you print? # zpool status -x -v
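
The rescue sequence can be sketched as a dry run (RUN=echo prints instead of executing, since this needs a real damaged pool and root); the pool name and rescue destination are placeholders, and zfs_send_corrupt_data is an OpenZFS module tunable that lets zfs send pass over unreadable blocks instead of aborting.

```shell
RUN=echo
POOL=tank   # placeholder pool name

$RUN zpool import -o readonly=on "$POOL"   # 1. bring the pool up read-only
$RUN rsync -a "/$POOL/" /mnt/rescue/       # 2. copy what still reads
# 3. let zfs send skip unreadable blocks (OpenZFS module tunable):
$RUN sh -c 'echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data'
$RUN zpool status -x -v                    # then review what is damaged
```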
  20. Unexpected Ceph Outage - What did I do wrong?

    Hi LordDongus, I tested this scenario and I can give you some details. What happens after removing an HDD 'accidentally': * The OSD process will not notice it if there is no active IO * The LVM and LUKS layers will still stand as they are After removing the disk, an active OSD will log problem messages and crash...