Hello,
I would like to re-activate an OSD.
Solved with:
Removed the bluestore_cache_* options from ceph.conf.
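For context, these are the kinds of BlueStore cache options involved; the section and values below are illustrative, not copied from the affected cluster:

```ini
# Illustrative ceph.conf fragment (example values, not from this cluster).
# Manually pinned cache settings like these can leave the OSD's PriorityCache
# with no memory headroom; removing them lets BlueStore size its caches
# automatically. In this thread, deleting them resolved the mkfs crash.
[osd]
bluestore_cache_size       = 1073741824   ; 1 GiB, example value
bluestore_cache_meta_ratio = 0.4
bluestore_cache_kv_ratio   = 0.4
```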
Versions:
ceph 16.2.6
Proxmox Virtual Environment 7.0-11
blkid /dev/nvme7n1
/dev/nvme7n1: UUID="4wv0Jx-J7R1-qMEq-tBCD-8OD9-eebw-aBj0Ce" TYPE="LVM2_member"
The NVMe drive is an Intel DC4500 4 TB, model SSDPE2KX040T7.
When doing so, the following error is printed and I am stuck:
create OSD on /dev/nvme7n1 (bluestore)
wipe disk/partition: /dev/nvme7n1
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.327148 s, 641 MB/s
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 80b896c3-37da-4a80-8909-e22ea7864774
Running command: /sbin/vgcreate --force --yes ceph-51a7977f-6fbd-4f40-8b40-bc1749a05b43 /dev/nvme7n1
stdout: Physical volume "/dev/nvme7n1" successfully created.
stdout: Volume group "ceph-51a7977f-6fbd-4f40-8b40-bc1749a05b43" successfully created
Running command: /sbin/lvcreate --yes -l 953861 -n osd-block-80b896c3-37da-4a80-8909-e22ea7864774 ceph-51a7977f-6fbd-4f40-8b40-bc1749a05b43
stdout: Logical volume "osd-block-80b896c3-37da-4a80-8909-e22ea7864774" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-23
--> Executable selinuxenabled not in PATH: /sbin:/bin:/usr/sbin:/usr/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-51a7977f-6fbd-4f40-8b40-bc1749a05b43/osd-block-80b896c3-37da-4a80-8909-e22ea7864774
Running command: /bin/chown -R ceph:ceph /dev/dm-10
Running command: /bin/ln -s /dev/ceph-51a7977f-6fbd-4f40-8b40-bc1749a05b43/osd-block-80b896c3-37da-4a80-8909-e22ea7864774 /var/lib/ceph/osd/ceph-23/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-23/activate.monmap
stderr: got monmap epoch 10
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-23/keyring --create-keyring --name osd.23 --add-key xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
stdout: creating /var/lib/ceph/osd/ceph-23/keyring
added entity osd.23 auth(key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-23/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-23/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 23 --monmap /var/lib/ceph/osd/ceph-23/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-23/ --osd-uuid 80b896c3-37da-4a80-8909-e22ea7864774 --setuser ceph --setgroup ceph
stderr: 2021-09-28T14:34:37.540+0200 7fbb09be9f00 -1 bluestore(/var/lib/ceph/osd/ceph-23/) _read_fsid unparsable uuid
stderr: ./src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7fbadcb24700 time 2021-09-28T14:34:38.403468+0200
stderr: ./src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x5646d9dbc92e]
stderr: 2: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 3: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 4: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 6: clone()
stderr: 2021-09-28T14:34:38.404+0200 7fbadcb24700 -1 ./src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7fbadcb24700 time 2021-09-28T14:34:38.403468+0200
stderr: ./src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x5646d9dbc92e]
stderr: 2: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 3: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 4: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 6: clone()
stderr: *** Caught signal (Aborted) **
stderr: in thread 7fbadcb24700 thread_name:bstore_mempool
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fbb0a241140]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x5646d9dbc978]
stderr: 5: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 6: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 7: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 9: clone()
stderr: 2021-09-28T14:34:38.404+0200 7fbadcb24700 -1 *** Caught signal (Aborted) **
stderr: in thread 7fbadcb24700 thread_name:bstore_mempool
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fbb0a241140]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x5646d9dbc978]
stderr: 5: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 6: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 7: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 9: clone()
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: -2777> 2021-09-28T14:34:37.540+0200 7fbb09be9f00 -1 bluestore(/var/lib/ceph/osd/ceph-23/) _read_fsid unparsable uuid
stderr: -1> 2021-09-28T14:34:38.404+0200 7fbadcb24700 -1 ./src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7fbadcb24700 time 2021-09-28T14:34:38.403468+0200
stderr: ./src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x5646d9dbc92e]
stderr: 2: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 3: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 4: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 6: clone()
stderr: 0> 2021-09-28T14:34:38.404+0200 7fbadcb24700 -1 *** Caught signal (Aborted) **
stderr: in thread 7fbadcb24700 thread_name:bstore_mempool
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fbb0a241140]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x5646d9dbc978]
stderr: 5: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 6: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 7: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 9: clone()
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: *** Caught signal (Segmentation fault) **
stderr: in thread 7fbadcb24700 thread_name:bstore_mempool
stderr: ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fbb0a241140]
stderr: 2: pthread_getname_np()
stderr: 3: (ceph::logging::Log::dump_recent()+0x480) [0x5646da6fbd00]
stderr: 4: /usr/bin/ceph-osd(+0x11ed0e0) [0x5646da4ef0e0]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7fbb0a241140]
stderr: 6: gsignal()
stderr: 7: abort()
stderr: 8: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x5646d9dbc978]
stderr: 9: /usr/bin/ceph-osd(+0xabaab9) [0x5646d9dbcab9]
stderr: 10: (PriorityCache::Manager::balance()+0x478) [0x5646da9595f8]
stderr: 11: (BlueStore::MempoolThread::entry()+0xa29) [0x5646da3e9cd9]
stderr: 12: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7fbb0a235ea7]
stderr: 13: clone()
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.23 --yes-i-really-mean-it
stderr: purged osd.23
--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 23 --monmap /var/lib/ceph/osd/ceph-23/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-23/ --osd-uuid 80b896c3-37da-4a80-8909-e22ea7864774 --setuser ceph --setgroup ceph
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 75f1cff9-0580-45a4-934b-40ae9ff832b7 --data /dev/nvme7n1' failed: exit code 1
How do I get around the error "_read_fsid unparsable uuid"?
Thanks in advance for your help.
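For anyone hitting the same assert: the fix that worked here was removing every bluestore_cache_* option from ceph.conf and retrying the OSD creation. A minimal sketch of that cleanup, run against a scratch copy (on a real node the file is /etc/ceph/ceph.conf, and the option names/values shown are assumptions, not taken from the affected cluster):

```shell
# Work on a scratch copy so the demo is safe to run anywhere; on the node
# itself you would back up and edit /etc/ceph/ceph.conf, then re-run the
# failed OSD creation (e.g. from the GUI or with pveceph). Values are
# illustrative only.
conf=/tmp/ceph.conf.demo
cat > "$conf" <<'EOF'
[osd]
bluestore_cache_size = 1073741824
bluestore_cache_kv_ratio = 0.2
osd_memory_target = 4294967296
EOF

cp "$conf" "$conf.bak"                             # keep a backup first
sed -i '/^[[:space:]]*bluestore_cache_/d' "$conf"  # drop all bluestore_cache_* lines

cat "$conf"   # only the [osd] header and osd_memory_target remain
```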