Hi, I have been using a Grafana multicluster setup for a long time, and after the upgrade I get empty answers to the queries. Any idea how to debug this?
root@int101:~# ceph config dump
WHO MASK LEVEL OPTION VALUE RO
mon advanced...
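My plan to narrow it down, assuming the dashboards are fed by the ceph-mgr prometheus module (not 100% sure that is the right assumption for this setup), is to check whether the module is still enabled and exporting metrics after the upgrade:
root@int101:~# ceph mgr module ls | grep prometheus
root@int101:~# curl -s http://localhost:9283/metrics | head   # 9283 is the default mgr prometheus port; use the active mgr host
If the metrics endpoint still returns data, the problem is more likely on the Grafana/datasource side; if it is empty, it is on the exporter side.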
Thanks Aaron, I'm recreating them, but each one takes about 2 or 3 hours, and I can't degrade service performance too much.
It also seems nobody else is in the same situation, so digging deeper may not be worth it.
I'll let you know if the issue persists once the OSDs have been updated.
Regards
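In case it's useful to someone else: to keep the rebuild impact low while the recreated OSDs refill, throttling backfill/recovery for the duration is an option (a sketch; defaults differ between releases, and the values should be reverted afterwards with ceph config rm):
root@int101:~# ceph config set osd osd_max_backfills 1          # limit concurrent backfills per OSD
root@int101:~# ceph config set osd osd_recovery_max_active 1    # limit active recovery ops per OSD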
Yes, it does. The ones with the new layout are owned by root:disk, and the old-layout ones as well, though they have mixed permissions:
root@int101:~# ls -l /dev/sde*
brw-rw---- 1 root disk 8, 64 Aug 4 11:17 /dev/sde
brw-rw---- 1 root disk 8, 65 Aug 4 11:17 /dev/sde1
brw-rw---- 1 ceph ceph 8, 66 Aug 4...
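As a temporary workaround (assuming the root cause is udev not applying the ceph:ceph ownership), fixing the owner by hand lets the affected OSD start again; it won't survive a reboot, though:
root@int101:~# chown ceph:ceph /dev/sde2          # osd.22 / sde2 taken from the listings above
root@int101:~# systemctl restart ceph-osd@22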
@aaron I have other OSDs, maybe the recreated ones, with only one partition:
root@int101:~# ls -l /dev/sda*
brw-rw---- 1 root disk 8, 0 Aug 1 07:54 /dev/sda
Regards
@aaron this is the output from OSD 22:
root@int101:~# ls -l /dev/sde*
brw-rw---- 1 root disk 8, 64 Aug 1 07:54 /dev/sde
brw-rw---- 1 root disk 8, 65 Aug 1 07:54 /dev/sde1
brw-rw---- 1 ceph ceph 8, 66 Aug 2 23:51 /dev/sde2
Regards
Hello, I have some problems after the update; OSDs sometimes come back down and out:
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 bluestore(/var/lib/ceph/osd/ceph-22/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-22/block: (13) Permission denied
2021-08-01T07:55:38.718+0200...
More info on this: sometimes I'm able to re-add failing OSDs a bit later without changing anything at all:
2021-08-01T07:55:38.718+0200 7f35a7603f00 -1 bluestore(/var/lib/ceph/osd/ceph-22/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-22/block: (13) Permission denied...
Also, one or more OSDs are down after node reboots:
2021-08-01T07:55:38.476+0200 7f22b9b69f00 -1 bluestore(/var/lib/ceph/osd/ceph-23/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-23/block: (13) Permission denied
2021-08-01T07:55:38.476+0200 7f22b9b69f00 1 bdev(0x55f022ef8400...
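Since the errors point at device ownership, one way to debug this further (a sketch, assuming the old partition-based OSDs are handled by the ceph udev rules) is to simulate udev rule processing for an affected partition and see which rules set the owner/group:
root@int101:~# ls /lib/udev/rules.d/ | grep -i ceph
root@int101:~# udevadm test /sys/class/block/sde2 2>&1 | grep -iE 'owner|group|ceph'   # sde2 from the listings above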
Hello, we updated only one of our clusters to the new version last weekend. Yesterday the cluster rebooted two times, and three times today. Every time it comes back, one or more OSDs are down, different ones each time; if an OSD is destroyed and recreated, there seems to be no problem with it so far...
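For reference, the destroy/recreate cycle looks roughly like this (a sketch; osd.22 and /dev/sde are example values from the earlier posts, and the exact steps may differ per setup):
root@int101:~# ceph osd out 22
root@int101:~# systemctl stop ceph-osd@22
root@int101:~# pveceph osd destroy 22 --cleanup   # --cleanup wipes the device
root@int101:~# pveceph osd create /dev/sde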
Hello, we have seen these errors on every Ceph node since the version 7 update:
72619:Jul 20 08:27:14 int101 systemd-udevd[4773]: sdb2: Failed to update device symlinks: Too many levels of symbolic links
72620:Jul 20 08:27:14 int101 systemd-udevd[4822]: sdd2: Failed to update device symlinks: Too many...
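To see which symlinks udev wants to create for those partitions (and whether any of them loop), something like this may help (sdb2 here only because it shows up in the log above):
root@int101:~# udevadm info --query=symlink /dev/sdb2
root@int101:~# ls -l /dev/disk/by-partuuid/ | grep sdb   # check for dangling or looping links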
Thanks Stoiko, I thought vzdump.conf only affected vzdump, not PBS. I'd prefer not to limit the backup restore process, as we would end up changing it manually for every restore, but I guess I can give it a try just to test it.
The origin of the problem, to my understanding, is the whole VM disk...
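If I give it a try, my understanding of the suggestion is that the limit goes into /etc/vzdump.conf (in KiB/s), and that it can also be passed per restore so it doesn't have to stay permanent (a sketch; the archive, VMID, and the 50 MiB/s value are placeholders):
root@int101:~# echo 'bwlimit: 51200' >> /etc/vzdump.conf      # 51200 KiB/s = 50 MiB/s, example value
root@int101:~# qmrestore <archive> <vmid> --bwlimit 51200     # <archive> and <vmid> are placeholders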