If you want to remove the disk temporarily, you have to:
1. Shut down the OSD
2. Unmount all related mount points (like /var/lib/ceph/osd/osd-X)
3. Release whatever is holding sdc (encryption / LVM)
4. Unplug the disk
This way the disk could get the same name as before, and an LVM scan could import it and...
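A rough sketch of those steps, assuming a non-containerized OSD X backed by /dev/sdc with LVM and LUKS (unit names, paths and VG names here are examples, adjust to your deployment):

# systemctl stop ceph-osd@X                 (1. shut down the OSD)
# umount /var/lib/ceph/osd/osd-X            (2. unmount the OSD mount point)
# cryptsetup close <luks-mapping>           (3a. close the LUKS mapping, if used)
# vgchange -an <ceph-vg-on-sdc>             (3b. deactivate the LVM volume group on sdc)
# echo 1 > /sys/block/sdc/device/delete     (4. tell the kernel to drop the device before unplugging)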
1. Do you use LUKS?
2. What does dmesg report in this situation? Does the drive get the same name sdX, or a different one, sdY?
3. What do you see for LVM reports in dmesg?
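Something like this should answer 2 and 3 (sdc is just the example name from above):

# dmesg | tail -n 50
# lsblk -f /dev/sdc
# pvs && vgs && lvs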
If you want to use 8 disks in 2 groups using raidz2, it will look something like this:
zfs_pool
  raidz2-0
    disk-1
    disk-2
    disk-3
    disk-4
  raidz2-1
    disk-5
    disk-6
    disk-7
    disk-8
In each raidz2 group, 2 disks can die. In very...
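As a sketch, creating that layout would look something like this (disk-1 ... disk-8 stand for real device paths, e.g. /dev/disk/by-id/...):

# zpool create zfs_pool raidz2 disk-1 disk-2 disk-3 disk-4 raidz2 disk-5 disk-6 disk-7 disk-8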
Keep in mind that in raidzX your data will be split across the disks and parity added. For example, in raidz2 with 6 drives (2 parity), IOPS will count as roughly 4 × the slowest disk's IOPS.
But keep in mind ZFS is a COW (copy-on-write) system. Random writes from the software's perspective (fio, SQL, ...) are not random writes on disk.
Hi. Here are my 2 cents:
1. Simpler file systems don't require additional work.
2. A SLOG helps only with sync writes. It can reduce wear on your primary NVMe (without a SLOG and with sync=standard, sync writes hit the same disk twice), but I don't see any performance improvement from it.
3...
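If you want to check how your datasets handle sync writes, or to test a SLOG anyway, something like this (the pool/dataset and NVMe device names are just examples):

# zfs get sync <pool>/<dataset>
# zpool add <pool> log /dev/nvme1n1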
Try # zpool clear DISK
Those <0x0> entries may indicate a problem from the past. For example, if you delete a corrupted file, the zpool still keeps a record of it.
Sometimes a whole directory can be corrupted.
.system/services and iocage - can you recreate them?
If it is possible:
1. Import the ZFS pool read-only.
2. Copy what you can.
3. To copy corrupted files (if needed), try zfs_send_corrupt_data.
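A rough example of that recovery path (pool, dataset, snapshot names and the /mnt mount point are placeholders; zfs_send_corrupt_data is a ZFS-on-Linux module parameter):

# zpool import -o readonly=on <pool_name>
# cp -a /mnt/<pool_name>/<dataset> /path/to/backup/          (copy what you can)
# echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data  (let zfs send pass corrupted blocks)
# zfs send <pool_name>/<dataset>@<existing_snapshot> | zfs recv <backup_pool>/<dataset>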
Problem: most likely hardware.
Can you post the output of # zpool status -x -v ?
Hi LordDongus
I tested this scenario and I can give you some details.
What happens after removing an HDD 'accidentally':
* The OSD process will not notice it if there is no active IO
* The LVM and LUKS layers will still stand as they are
After removing the disk, an active OSD will log error messages and crash...
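If you want to see it yourself, something like this (the OSD id is a placeholder, and the unit name applies to package-based deployments):

# ceph osd tree
# journalctl -u ceph-osd@X -n 100
# dmesg | tail -n 50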
Squid has problems with orchestrator and dashboard functionality, for example this issue: https://tracker.ceph.com/issues/68657
In my test lab I had the same thing. I'm just mentioning it, just in case.
I log in as root.
This is part of the CRUSH map:
# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class ssd
device 7 osd.7 class ssd-web
device 8 osd.8 class ssd-web
device 9...
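The listing above looks like part of a decompiled CRUSH map; if you want to dump yours the same way, the standard commands are:

# ceph osd getcrushmap -o crush.bin
# crushtool -d crush.bin -o crush.txt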
Hello,
In the Ceph -> OSD section I can't control OSDs. All buttons (Details, Start, ..., Out, In, More) are inactive after I select an OSD.
What could cause the problem?
I use a few 'roots' in my Ceph test system, and the same OSD exists in a few buckets/branches.
Could there be any other problem?
Expanding everything I'm...
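To check whether the same OSD really appears under several roots/buckets, these should show it:

# ceph osd tree
# ceph osd crush tree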
Have you tried looking at dmesg / smartctl?
Try zpool clear <pool_name>
This will make the ZFS pool resilver, and that may be enough to fix it. Afterwards, investigate the status of the disk.
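Roughly like this (sdX and the pool name are placeholders):

# smartctl -a /dev/sdX
# zpool clear <pool_name>
# zpool status -v <pool_name>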