Search results

  1. Ceph OSD using wrong device identifier

    If you want to remove the temporary disk, you have to: 1. shut down the OSD 2. unmount all related mount points (like /var/lib/ceph/osd/osd-X) 3. release whatever is holding sdc (encryption / LVM) 4. unplug the disk. This way the disk can get the same name as before, and an LVM scan can import it and...
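
    A minimal sketch of those four steps; the OSD id (3), the mount path, and the VG/mapper names are assumptions:

    # 1. stop the OSD daemon
    systemctl stop ceph-osd@3
    # 2. unmount its mount point
    umount /var/lib/ceph/osd/ceph-3
    # 3. release whatever holds the disk: deactivate the LVM VG, close the LUKS mapping
    vgchange -an ceph-block-vg
    cryptsetup close ceph-block-crypt
    # 4. now the disk can be unplugged safely
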
  2. Ceph OSD using wrong device identifier

    1. Do you use LUKS? 2. What does dmesg report in this situation? Does the drive get the same name sdX, or another one, sdY? 3. What do you see in the LVM reports in dmesg?
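
    A quick way to check points 2 and 3 yourself; the exact device names are assumptions:

    # follow kernel messages while the disk is re-plugged
    dmesg -w
    # see what name the drive got and who holds it
    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
    # check whether LVM sees the physical volume again
    pvs
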
  3. ZFS data slowly getting corrupt

    Or you have to use 'zfs set copies=2 pool' (or more).
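
    A hedged sketch of that command; the pool name is a placeholder, and note that copies only applies to data written after the change:

    # keep two copies of every block written from now on
    zfs set copies=2 pool
    # verify the setting
    zfs get copies pool
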
  4. ZFS data slowly getting corrupt

    Lexxar, how many disks is the zfs pool made from?
  5. zfs for storage and 4k rand write/read values are very low

    If you want to use 8 disks in 2 groups using raidz2, it will be something like this:
    zfs_pool
      raidz2-0 disk-1 disk-2 disk-3 disk-4
      raidz2-1 disk-5 disk-6 disk-7 disk-8
    In each raidz2 group, 2 disks can die. In very...
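
    As a sketch, that layout would be created like this (pool and disk names are the example's placeholders):

    # one pool striped across two raidz2 vdevs of 4 disks each
    zpool create zfs_pool \
      raidz2 disk-1 disk-2 disk-3 disk-4 \
      raidz2 disk-5 disk-6 disk-7 disk-8
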
  6. zfs for storage and 4k rand write/read values are very low

    If you want to use it like regular RAID10, for example raidz2 + raidz2, you have to count the IOPS as the first raidz2 group's IOPS + the second raidz2 group's.
  7. zfs for storage and 4k rand write/read values are very low

    Keep in mind your data in raidzX will be split and multiplied. For example, in raidz2 with 6 drives (2 parity), the IOPS count as 4 x the slowest disk's IOPS. But keep in mind ZFS is a COW system. It doesn't have random writes from the software's perspective (fio, SQL, ...).
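
    A worked example of that counting rule, assuming each (slowest) disk does 200 IOPS:

    one 6-disk raidz2 vdev:  4 data disks x 200 = 800 IOPS
    two such vdevs striped:  800 + 800 = 1600 IOPS
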
  8. [SOLVED] PVE host and PVE guest on ZFS, rpool fails to import

    Rename the pool on either the host or the VM, then adjust the boot settings.
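
    A hedged sketch of the rename, done from a rescue environment; the new pool name and the systemd-boot setup are assumptions:

    # importing under a new name renames the pool
    zpool import -f rpool rpool-guest
    # point the bootloader at it, e.g. root=ZFS=rpool-guest/ROOT/pve-1 in /etc/kernel/cmdline,
    # then refresh the boot entries
    proxmox-boot-tool refresh
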
  9. zfs for storage and 4k rand write/read values are very low

    Hi. This is my 2 cents: 1. Simple file systems don't require additional work. 2. A SLOG can help only with sync writes and can reduce the wear on your primary nvme (without a SLOG, with sync=standard -> a double write to the same disk), but for performance I don't see any improvement. 3...
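
    For point 2, a sketch of checking the sync policy and adding a separate log device; the pool and device names are assumptions:

    # see whether the dataset does sync writes (standard / always / disabled)
    zfs get sync tank
    # offload the ZIL to a separate device so sync writes stop hitting the primary nvme twice
    zpool add tank log /dev/nvme1n1
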
  10. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    Look at what software is using those files. iocage is a FreeBSD jail manager. Use that tool to create them again.
  11. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    Try
    # zpool clear DISK
    Those <0x0> may indicate a problem from the past: if you delete a corrupted file, zpool still keeps its record. Sometimes a whole catalog can be corrupted. .system/services and iocage - can you recreate them?
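
    A sketch of the clear-and-recheck cycle; the pool name DISK is the example's placeholder:

    # drop the recorded errors
    zpool clear DISK
    # scrub and re-check whether the <0x0> entries come back
    zpool scrub DISK
    zpool status -v DISK
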
  12. Persistent ZFS Pool Errors and Data Corruption Issues – Assistance Needed

    If it is possible:
    1. Run the ZFS pool read-only.
    2. Copy what you can.
    3. To copy corrupted files (if needed), try zfs_send_corrupt_data.
    The problem is mostly hardware. Can you print?
    # zpool status -x -v
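
    A sketch of steps 1-3; the pool name, paths, and snapshot are assumptions:

    # 1. import the pool read-only
    zpool import -o readonly=on tank
    # 2. copy what you can
    rsync -a /tank/data/ /mnt/rescue/
    # 3. let zfs send pass damaged blocks instead of aborting (OpenZFS module parameter)
    echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data
    zfs send tank/data@rescue | zfs receive backup/data
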
  13. Unexpected Ceph Outage - What did I do wrong?

    Hi LordDongus. I tested this scenario and I can give you some details. What happens after removing the hdd 'accidentally':
    * The OSD process will not notice it if there is no active IO
    * The LVM and LUKS layers will still stand as they are
    After removing the disk, an active OSD will log problem messages and crash...
  14. Ceph 19.2 Squid Available as Technology Preview and Ceph 17.2 Quincy soon to be EOL

    Squid has problems with orchestrator and dashboard functionality, as in this open issue: https://tracker.ceph.com/issues/68657 In my test lab I had the same thing. I'm just saying, just in case.
  15. [bug] [WebUI] Ceph OSD control and multiple roots

    I log in using root. This is part of the crush map:
    # devices
    device 0 osd.0 class hdd
    device 1 osd.1 class hdd
    device 2 osd.2 class hdd
    device 3 osd.3 class hdd
    device 4 osd.4 class hdd
    device 5 osd.5 class hdd
    device 6 osd.6 class ssd
    device 7 osd.7 class ssd-web
    device 8 osd.8 class ssd-web
    device 9...
  16. [bug] [WebUI] Ceph OSD control and multiple roots

    Hello, in the Ceph -> OSD section I can't control the OSDs. All of the buttons (Details, Start, ..., Out, In, More) stay inactive after I select an OSD. What could cause the problem? I use a few 'roots' in my Ceph test system, and the same OSDs exist in several buckets/branches. Any other problem? Expanding everything I'm...
  17. HA storage for web cluster on PVE/Ceph

    Old topic, but still an active situation today. @hansm, have you solved this?
  18. How to replace HDD in ZFS raidz2

    Have you tried looking at dmesg / smartctl? Try zpool clear <pool_name> - this will make the zfs pool resilver, and that may be enough of a fix. Afterwards, investigate the status of the disk.
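
    A sketch of that check-and-clear sequence; the device name is an assumption, the pool name a placeholder:

    # look for link resets or read errors
    dmesg | grep -i 'sdb\|ata'
    smartctl -a /dev/sdb
    # clear the error counters and let the pool resilver
    zpool clear <pool_name>
    # watch the resilver progress
    zpool status -v <pool_name>
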
  19. Proxmox clone or transfer to new disk from 4k to 512 Sectors

    You can't transform a ZFS pool from ashift 12 to 9. Just create the new ZFS pool and send the data to the new place using a snapshot.
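
    A sketch of that migration; pool, device, and snapshot names are assumptions:

    # new pool for the 512-byte-sector disk
    zpool create -o ashift=9 newpool /dev/sdb
    # snapshot everything recursively and replicate it
    zfs snapshot -r oldpool@move
    zfs send -R oldpool@move | zfs receive -F newpool
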
  20. EC pool creation, Help Needed

    I believe you need 6 hosts.
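
    That matches an EC profile with k + m = 6 and the default failure domain of host; the k=4, m=2 split below is an assumption:

    # profile that spreads 4 data + 2 coding chunks over 6 hosts
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 32 32 erasure ec42
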
