Here is the output of ceph balancer status (it works again once the health check is green).
root@ld3955:~# ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "upmap"
}
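For reference, a minimal sketch of how the upmap balancer is typically enabled (this assumes all clients are at least Luminous, which upmap requires; adjust to your cluster):

# allow upmap (requires Luminous or newer clients)
ceph osd set-require-min-compat-client luminous
# select the upmap mode and switch the balancer on
ceph balancer mode upmap
ceph balancer on
# verify
ceph balancer status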
Hi,
please focus only on the "slow requests are blocked" issue in this ticket.
The other issues are a) under control (+35GB free disk space on monitor nodes) or b) addressed in ceph-user-list.
Unfortunately ceph balancer status is not responding... this could be related to the "slow requests are blocked" issue.
Indeed, I activated the balancer mode "upmap" when the Ceph health status was green, meaning there was no relevant activity.
Can you please advise which logs should be checked?
Based on the output of ceph health detail I can see which OSDs are affected, and to the best of my knowledge the OSDs are always...
Hi Alwin,
after some time the number of "slow requests are blocked" decreased, but only very slowly.
In my opinion there's a correlation between the number of inactive PGs and the number of blocked slow requests.
I must understand what is causing the "slow requests are blocked" on the pools that...
Hi,
I have noticed in the Ceph log (ceph -w) an increase of "slow requests are blocked" when I create a CephFS, e.g.
2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0
2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
Hm...
the manpage of pveceph shows
--cleanup <boolean> (default = 0)
If set, we remove partition table entries.
My understanding is that this does not mean the LVM structures, i.e. the volume group and logical volume, will be removed.
Can you confirm that this option will remove the volume group and logical...
Hi,
to remove an OSD I run this command:
root@ld5506:/var/lib/vz# pveceph osd destroy 2
destroy OSD osd.2
Remove osd.2 from the CRUSH map
Remove the osd.2 authentication key.
Remove OSD osd.2
--> Zapping...
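If the volume group and logical volume are indeed left behind after the destroy, they can be removed manually; a minimal sketch (assuming /dev/sdX stands for the now-unused OSD device, adjust to your layout):

# show leftover ceph-volume LVM metadata
ceph-volume lvm list
# wipe the device, including its volume group and logical volume
ceph-volume lvm zap --destroy /dev/sdX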
Hi,
my VMs use an EFI disk (in addition to the standard disk).
Now I want to move all disks to another storage.
There's no issue with the standard disk.
However, the option to move the EFI disk is not available in the WebUI.
This means I need to drop the EFI disk and re-create it on the new storage.
But...
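As a workaround sketch (VM ID 100 and the storage name new-storage are placeholders), moving the EFI disk on the CLI is usually possible even when the WebUI does not offer it, depending on the PVE version:

# move efidisk0 of VM 100 to the target storage and delete the old copy
qm move_disk 100 efidisk0 new-storage --delete 1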
Hi,
thanks for your reply.
I have been pointed many times now to the backfill_toofull status as the root cause of the issue with the slow MDS.
However, I'm not sure this explanation still applies if you consider this:
All OSDs that are affected by backfill_toofull reside on dedicated drives, and...
Update:
I think that this issue is related to other issues reported here and here.
Furthermore I found out that I cannot copy data from the affected pool to local disk.
I started copying an LXC dump file and the copy hangs after transferring
Source
root@ld3955:~# ls -l /mnt/pve/pve_cephfs/dump/...
Hi,
here I describe one of the two major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD).
The issue is that I cannot start any virtual machine (KVM) or container (LXC); the boot process just hangs after a few seconds.
All these KVMs and LXCs have in common that their virtual...
Hi,
I was getting this error in syslog:
nf_conntrack: nf_conntrack: table full, dropping packet
To solve this issue I found this:
CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (ARCH / 32)
Having a Mellanox NIC installed on my server, I followed the recommendation to improve performance.
This...
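As a worked sketch of that formula (assuming 64 GiB of RAM on a 64-bit machine, purely illustrative): 68719476736 / 16384 / (64/32) = 2097152. Something like the following computes and applies the value:

# CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (ARCH / 32), here for a 64-bit system
RAM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
CONNTRACK_MAX=$((RAM_BYTES / 16384 / 2))
echo "nf_conntrack_max = $CONNTRACK_MAX"
# apply at runtime; persist via /etc/sysctl.d/ if it works for you
sysctl -w net.netfilter.nf_conntrack_max=$CONNTRACK_MAX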
Hello!
On every node belonging to the PVE cluster I can see much higher local storage utilization than what is actually stored on disk.
The screenshot attached here is from my MGR node; it shows a utilization of 62.91%, but /var/lib/vz is empty!
root@ld3955:~# ls -lR /var/lib/vz/
/var/lib/vz/:
insgesamt...
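Note that the local storage gauge usually reflects the whole filesystem that /var/lib/vz lives on (often the root filesystem), not just that directory; a quick sketch to compare the two views:

# filesystem-level usage, which is what the gauge shows
df -h /var/lib/vz
# space actually consumed below the directory
du -sh /var/lib/vz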
Hi,
after rebooting one node serving the MDS, I get this error message in that node's syslog:
root@ld3955:~# tail /var/log/syslog
Sep 17 12:21:18 ld3955 kernel: [ 3141.167834] ceph: probably no mds server is up
Sep 17 12:21:18 ld3955 pvestatd[2482]: mount error: exit code 2
Sep 17 12:21:28 ld3955...
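To narrow down whether an MDS is actually up, checks like the following are a reasonable sketch (the instance name mds.ld4465 is taken from an earlier post and may differ on this node):

# filesystem and MDS state from the cluster's point of view
ceph fs status
ceph mds stat
# local MDS service on the rebooted node (instance name is an assumption)
systemctl status ceph-mds@ld4465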
OK.
I created a new OSD (from scratch), but there's no relevant entry in the CRUSH map except for "device 8 osd.8 class hdd" in the "devices" section.
root@ld5505:~# pveceph osd create /dev/sdbm --db_dev /dev/sdbk --db_size 10
create OSD on /dev/sdbm (bluestore)
creating block.db on '/dev/sdbk'
Physical...
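To verify where the new OSD ended up in the CRUSH hierarchy, a quick sketch:

# show the CRUSH tree with hosts and weights; osd.8 should appear under its host
ceph osd tree
# report the CRUSH location of the new OSD
ceph osd find 8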
The file /var/lib/ceph/mds/ceph-<ID>/keyring already exists.
Therefore I simply modified the config in /etc/ceph/ceph.conf, and now the MDS starts without errors.
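For reference, a minimal sketch of the kind of MDS section that goes into /etc/ceph/ceph.conf (the instance name and host ld4465 are assumptions based on earlier posts; the keyring path is the default layout):

[mds.ld4465]
host = ld4465
keyring = /var/lib/ceph/mds/ceph-ld4465/keyring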
Hi,
after adding an OSD to Ceph it is advisable to create a corresponding entry in the CRUSH map, using a weight that depends on the disk size.
Example:
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
Question:
How is the weight defined depending on disk size?
Which algorithm can be...
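As a sketch of the usual convention: the CRUSH weight is simply the device capacity expressed in TiB, so a 4 TB disk gets a weight of roughly 4e12 / 2^40 ≈ 3.64. A hypothetical helper to compute it (/dev/sdbm is just an example device taken from an earlier post):

# conventional CRUSH weight = capacity in TiB
DEV=/dev/sdbm
BYTES=$(blockdev --getsize64 "$DEV")
awk -v b="$BYTES" 'BEGIN { printf "%.2f\n", b / (1024^4) }'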