Ceph error causes VM down during VM clone

SCM (Member) · Aug 9, 2019
At the start, the Ceph status was HEALTH_OK. During a VM clone operation, one Ceph OSD automatically went down and came back up. The last OSD error was: bad crc in data 1000574999 != exp 0, and the VM stopped on its own.

The disks' SMART checks all pass. What causes the Ceph error and the OSD going down?
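
A sketch of the first checks that should narrow this down (standard Ceph/Linux tooling; the OSD ID is taken from the logs below, and the exact unit name may differ on your setup):
Code:
ceph health detail                   # any active warnings beyond HEALTH_OK
ceph osd tree                        # which OSD flapped, and on which host it lives
journalctl -u ceph-osd@0             # full log of the OSD that went down (osd.0 per the logs)
dmesg -T | grep -iE 'error|reset'    # kernel-side disk/NIC symptoms on that host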


PVE VM Clone Job Log:
Code:
create full clone of drive scsi0 (cca-sto1:vm-100-disk-0)
drive mirror is starting for drive-scsi0
drive-scsi0: transferred: 0 bytes remaining: 34359738368 bytes total: 34359738368 bytes progression: 0.00 % busy: 1 ready: 0
drive-scsi0: transferred: 58720256 bytes remaining: 34301018112 bytes total: 34359738368 bytes progression: 0.17 % busy: 1 ready: 0
drive-scsi0: transferred: 126877696 bytes remaining: 34232860672 bytes total: 34359738368 bytes progression: 0.37 % busy: 1 ready: 0

.......

drive-scsi0: transferred: 19395510272 bytes remaining: 14973534208 bytes total: 34369044480 bytes progression: 56.43 % busy: 1 ready: 0
drive-scsi0: transferred: 19439550464 bytes remaining: 14929494016 bytes total: 34369044480 bytes progression: 56.56 % busy: 1 ready: 0
drive-scsi0: transferred: 19498270720 bytes remaining: 14870773760 bytes total: 34369044480 bytes progression: 56.73 % busy: 1 ready: 0
drive-scsi0: Cancelling block job
drive-scsi0: Cancelling block job
2019-08-09 09:45:49.672360 7f45a77fe700 -1 librbd::image::RemoveRequest: 0x56209139f1d0 handle_exclusive_lock: cannot obtain exclusive lock - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
error with cfs lock 'storage-cca-sto1': rbd rm 'vm-101-disk-0' error: rbd: error: image still has watchers
TASK ERROR: clone failed: mirroring error: VM 100 not running
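
The trailing "image still has watchers" error is a follow-on effect: the clone target could not be removed because a client (most likely the crashed QEMU process) still held a watch on it. A sketch of how to inspect and clean this up; <pool> and <client-address> are placeholders, substitute the pool behind cca-sto1 and the address reported by `rbd status`:
Code:
rbd status <pool>/vm-101-disk-0          # list clients still watching the leftover image
# A stale watch from a dead client expires after ~30 seconds on its own;
# after that the half-created clone target can be removed:
rbd rm <pool>/vm-101-disk-0
# If a watcher never expires, it can be blacklisted by address:
ceph osd blacklist add <client-address>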

OSD Log:
Code:
2019-08-09 09:29:28.325156 7f879c2df700  0 log_channel(cluster) log [DBG] : 1.6b deep-scrub starts
2019-08-09 09:29:29.041061 7f87982d7700  0 log_channel(cluster) log [DBG] : 1.6b deep-scrub ok
2019-08-09 09:41:22.783095 7f8318482e00  0 set uid:gid to 64045:64045 (ceph:ceph)
2019-08-09 09:41:22.783108 7f8318482e00  0 ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable), process ceph-osd, pid 3165815
2019-08-09 09:41:22.787377 7f8318482e00  0 pidfile_write: ignore empty --pid-file
2019-08-09 09:41:22.792788 7f8318482e00  0 load: jerasure load: lrc load: isa
2019-08-09 09:41:22.792850 7f8318482e00  1 bdev create path /var/lib/ceph/osd/ceph-0/block type kernel
2019-08-09 09:41:22.792857 7f8318482e00  1 bdev(0x5598b10c66c0 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
2019-08-09 09:41:22.793022 7f8318482e00  1 bdev(0x5598b10c66c0 /var/lib/ceph/osd/ceph-0/block) open size 479998054400 (0x6fc21d1000, 447GiB) block_size 4096 (4KiB) non-rotational
2019-08-09 09:41:22.793278 7f8318482e00  1 bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
2019-08-09 09:41:22.793292 7f8318482e00  1 bdev(0x5598b10c66c0 /var/lib/ceph/osd/ceph-0/block) close
2019-08-09 09:41:23.086007 7f8318482e00  1 bluestore(/var/lib/ceph/osd/ceph-0) _mount path /var/lib/ceph/osd/ceph-0
2019-08-09 09:41:23.086265 7f8318482e00  1 bdev create path /var/lib/ceph/osd/ceph-0/block type kernel
2019-08-09 09:41:23.086272 7f8318482e00  1 bdev(0x5598b10c6480 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
2019-08-09 09:41:23.086445 7f8318482e00  1 bdev(0x5598b10c6480 /var/lib/ceph/osd/ceph-0/block) open size 479998054400 (0x6fc21d1000, 447GiB) block_size 4096 (4KiB) non-rotational
2019-08-09 09:41:23.086655 7f8318482e00  1 bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 3221225472 meta 0.4 kv 0.4 data 0.2
2019-08-09 09:41:23.086710 7f8318482e00  1 bdev create path /var/lib/ceph/osd/ceph-0/block type kernel
2019-08-09 09:41:23.086714 7f8318482e00  1 bdev(0x5598b10c7200 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
2019-08-09 09:41:23.086832 7f8318482e00  1 bdev(0x5598b10c7200 /var/lib/ceph/osd/ceph-0/block) open size 479998054400 (0x6fc21d1000, 447GiB) block_size 4096 (4KiB) non-rotational
2019-08-09 09:41:23.086841 7f8318482e00  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 447GiB
2019-08-09 09:41:23.086860 7f8318482e00  1 bluefs mount

.............

2019-08-09 09:41:31.218410 7f82fde89700  1 osd.0 pg_epoch: 51 pg[1.25( v 48'852 (0'0,48'852] local-lis/les=44/45 n=33 ec=14/14 lis/c 44/44 les/c/f 45/45/0 51/51/51) [0,2,1] r=0 lpr=51 pi=[44,51)/1 crt=48'852 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
2019-08-09 09:42:40.723611 7f82f5678700  0 log_channel(cluster) log [DBG] : 1.71 scrub starts
2019-08-09 09:42:41.106344 7f82f5678700  0 log_channel(cluster) log [DBG] : 1.71 scrub ok
2019-08-09 09:42:42.636274 7f82f5678700  0 log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
2019-08-09 09:42:42.722739 7f82f5678700  0 log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
2019-08-09 09:42:47.475363 7f82f667a700  0 log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
2019-08-09 09:42:47.601239 7f82f2672700  0 log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
2019-08-09 09:44:19.267335 7f83126dd700  0 bad crc in data 403249118 != exp 4155013848
2019-08-09 09:44:19.267371 7f83126dd700  0 -- 172.16.1.66:6802/3165815 >> 172.16.1.67:6801/255051 conn(0x5598c22d8800 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=308 cs=1 l=0).fault initiating reconnect
2019-08-09 09:44:19.268549 7f83126dd700  0 -- 172.16.1.66:6802/3165815 >> 172.16.1.67:6801/255051 conn(0x5598edf1c000 :6802 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 2 vs existing csq=2 existing_state=STATE_CONNECTING_WAIT_CONNECT_REPLY
2019-08-09 09:45:29.755638 7f83126dd700  0 bad crc in data 1000574999 != exp 0
2019-08-09 10:11:37.969356 7f8304e97700  4 rocksdb: [/mnt/pve/store/tlamprecht/sources/ceph/ceph-12.2.12/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #58. Immutable memtables: 0.
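
The "bad crc in data" lines above are messenger-level checksum failures: data arrived corrupted over the cluster network between 172.16.1.66 and 172.16.1.67. That would also explain why SMART looks clean, since the disks were never involved. A sketch of network checks worth running on both OSD hosts (the interface name is an assumption; use your cluster-network NIC):
Code:
# Path and MTU test between the two OSD hosts from the log
ping -c 5 172.16.1.67
ping -M do -s 8972 -c 5 172.16.1.67      # only if using MTU 9000; 8972 = 9000 - 28 header bytes
# NIC error counters; rising CRC/drop counts point at cabling, SFPs, or the switch
ethtool -S <cluster-nic> | grep -iE 'err|drop|crc'
ip -s link show <cluster-nic>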


ceph mon Log:
Code:
2019-08-09 09:40:48.642025 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882953 : cluster [DBG] pgmap v882913: 152 pgs: 152 active+clean; 18.5GiB data, 58.1GiB used, 1.25TiB / 1.31TiB avail; 48.7MiB/s rd, 46.2MiB/s wr, 201op/s
2019-08-09 09:40:50.661913 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882954 : cluster [DBG] pgmap v882914: 152 pgs: 152 active+clean; 18.6GiB data, 58.5GiB used, 1.25TiB / 1.31TiB avail; 50.0MiB/s rd, 46.6MiB/s wr, 169op/s
2019-08-09 09:40:52.682008 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882955 : cluster [DBG] pgmap v882915: 152 pgs: 152 active+clean; 18.6GiB data, 58.8GiB used, 1.25TiB / 1.31TiB avail; 43.8MiB/s rd, 43.9MiB/s wr, 161op/s
2019-08-09 09:41:02.683630 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302395 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.1 172.16.1.67:6800/255051
2019-08-09 09:41:02.683678 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302396 : cluster [INF] osd.0 failed (root=default,host=cca-pve1) (connection refused reported by osd.1)
2019-08-09 09:41:02.683824 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302397 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.1 172.16.1.67:6800/255051
2019-08-09 09:41:02.683928 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302398 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.2 172.16.1.68:6801/2327
2019-08-09 09:41:02.684005 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302399 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.1 172.16.1.67:6800/255051
2019-08-09 09:41:02.684070 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302400 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.2 172.16.1.68:6801/2327
2019-08-09 09:41:02.684194 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302401 : cluster [DBG] osd.0 172.16.1.66:6801/2179 reported immediately failed by osd.2 172.16.1.68:6801/2327
2019-08-09 09:40:54.701985 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882956 : cluster [DBG] pgmap v882916: 152 pgs: 152 active+clean; 18.8GiB data, 59.0GiB used, 1.25TiB / 1.31TiB avail; 53.2MiB/s rd, 49.5MiB/s wr, 206op/s
2019-08-09 09:40:56.722010 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882957 : cluster [DBG] pgmap v882917: 152 pgs: 152 active+clean; 18.8GiB data, 59.1GiB used, 1.25TiB / 1.31TiB avail; 43.2MiB/s rd, 42.3MiB/s wr, 150op/s
2019-08-09 09:40:58.741941 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882958 : cluster [DBG] pgmap v882918: 152 pgs: 152 active+clean; 18.9GiB data, 59.3GiB used, 1.25TiB / 1.31TiB avail; 43.1MiB/s rd, 42.3MiB/s wr, 186op/s
2019-08-09 09:41:00.742434 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882959 : cluster [DBG] pgmap v882919: 152 pgs: 152 active+clean; 19.0GiB data, 59.6GiB used, 1.25TiB / 1.31TiB avail; 43.7MiB/s rd, 43.7MiB/s wr, 153op/s
2019-08-09 09:41:02.734281 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302402 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-08-09 09:41:02.734332 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302403 : cluster [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-08-09 09:41:02.740109 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302404 : cluster [DBG] osdmap e49: 3 total, 2 up, 3 in
2019-08-09 09:41:02.761931 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882960 : cluster [DBG] pgmap v882921: 152 pgs: 55 stale+active+clean, 97 active+clean; 19.1GiB data, 59.8GiB used, 1.25TiB / 1.31TiB avail; 46.7MiB/s rd, 46.9MiB/s wr, 173op/s
2019-08-09 09:41:03.776116 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302405 : cluster [DBG] osdmap e50: 3 total, 2 up, 3 in
2019-08-09 09:41:04.798270 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302406 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 10 pgs peering (PG_AVAILABILITY)
2019-08-09 09:41:06.828235 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302407 : cluster [WRN] Health check failed: Degraded data redundancy: 2820/14880 objects degraded (18.952%), 85 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:08.928483 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302408 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 10 pgs peering)
2019-08-09 09:41:04.781926 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882961 : cluster [DBG] pgmap v882923: 152 pgs: 38 stale+active+clean, 66 peering, 48 active+clean; 19.1GiB data, 60.1GiB used, 1.25TiB / 1.31TiB avail; 4.40MiB/s rd, 36.8MiB/s wr, 124op/s
2019-08-09 09:41:06.802181 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882962 : cluster [DBG] pgmap v882924: 152 pgs: 1 active+undersized, 85 active+undersized+degraded, 66 peering; 19.2GiB data, 60.3GiB used, 1.25TiB / 1.31TiB avail; 42.8MiB/s wr, 51op/s; 2820/14880 objects degraded (18.952%)
2019-08-09 09:41:08.822053 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882963 : cluster [DBG] pgmap v882925: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.4GiB data, 60.4GiB used, 1.25TiB / 1.31TiB avail; 0B/s rd, 48.7MiB/s wr, 127op/s; 4996/14988 objects degraded (33.333%)
2019-08-09 09:41:10.841885 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882964 : cluster [DBG] pgmap v882926: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.4GiB data, 60.4GiB used, 1.25TiB / 1.31TiB avail; 0B/s rd, 38.7MiB/s wr, 110op/s; 4996/14988 objects degraded (33.333%)
2019-08-09 09:41:13.826019 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302409 : cluster [WRN] Health check update: Degraded data redundancy: 5030/15090 objects degraded (33.333%), 150 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:19.021732 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302410 : cluster [WRN] Health check update: Degraded data redundancy: 5138/15414 objects degraded (33.333%), 150 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:12.862232 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882965 : cluster [DBG] pgmap v882927: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.5GiB data, 60.4GiB used, 1.25TiB / 1.31TiB avail; 0B/s rd, 44.5MiB/s wr, 112op/s; 5030/15090 objects degraded (33.333%)
2019-08-09 09:41:14.882109 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882966 : cluster [DBG] pgmap v882928: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.6GiB data, 60.8GiB used, 1.25TiB / 1.31TiB avail; 20.0MiB/s rd, 43.3MiB/s wr, 143op/s; 5050/15150 objects degraded (33.333%)
2019-08-09 09:41:16.902060 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882967 : cluster [DBG] pgmap v882929: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.8GiB data, 61.3GiB used, 1.25TiB / 1.31TiB avail; 32.1MiB/s rd, 55.9MiB/s wr, 161op/s; 5099/15297 objects degraded (33.333%)
2019-08-09 09:41:18.922036 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882968 : cluster [DBG] pgmap v882930: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.9GiB data, 61.6GiB used, 1.25TiB / 1.31TiB avail; 59.2MiB/s rd, 58.8MiB/s wr, 241op/s; 5138/15414 objects degraded (33.333%)
2019-08-09 09:41:20.942014 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882969 : cluster [DBG] pgmap v882931: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 19.9GiB data, 61.6GiB used, 1.25TiB / 1.31TiB avail; 46.9MiB/s rd, 46.9MiB/s wr, 174op/s; 5138/15414 objects degraded (33.333%)
2019-08-09 09:41:24.850668 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302411 : cluster [WRN] Health check update: Degraded data redundancy: 5188/15564 objects degraded (33.333%), 150 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:29.850916 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302413 : cluster [WRN] Health check update: Degraded data redundancy: 5295/15885 objects degraded (33.333%), 150 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:31.150548 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302416 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-08-09 09:41:31.150587 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302417 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2019-08-09 09:41:31.199142 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302418 : cluster [INF] osd.0 172.16.1.66:6801/3165815 boot
2019-08-09 09:41:31.199209 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302419 : cluster [DBG] osdmap e51: 3 total, 3 up, 3 in
2019-08-09 09:41:22.962167 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882970 : cluster [DBG] pgmap v882932: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.1GiB data, 62.0GiB used, 1.25TiB / 1.31TiB avail; 62.5MiB/s rd, 63.4MiB/s wr, 206op/s; 5188/15564 objects degraded (33.333%)
2019-08-09 09:41:24.982039 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882971 : cluster [DBG] pgmap v882933: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.3GiB data, 62.3GiB used, 1.25TiB / 1.31TiB avail; 63.9MiB/s rd, 62.8MiB/s wr, 252op/s; 5220/15660 objects degraded (33.333%)
2019-08-09 09:41:27.002175 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882972 : cluster [DBG] pgmap v882934: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.4GiB data, 62.4GiB used, 1.25TiB / 1.31TiB avail; 65.2MiB/s rd, 68.7MiB/s wr, 229op/s; 5258/15774 objects degraded (33.333%)
2019-08-09 09:41:29.022039 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882973 : cluster [DBG] pgmap v882935: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.6GiB data, 62.5GiB used, 1.25TiB / 1.31TiB avail; 63.4MiB/s rd, 64.7MiB/s wr, 268op/s; 5295/15885 objects degraded (33.333%)
2019-08-09 09:41:31.022657 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882974 : cluster [DBG] pgmap v882936: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.6GiB data, 62.7GiB used, 1.25TiB / 1.31TiB avail; 47.9MiB/s rd, 51.9MiB/s wr, 190op/s; 5295/15885 objects degraded (33.333%)
2019-08-09 09:41:32.199902 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302421 : cluster [DBG] osdmap e52: 3 total, 3 up, 3 in
2019-08-09 09:41:34.851132 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302422 : cluster [WRN] Health check update: Degraded data redundancy: 5332/15996 objects degraded (33.333%), 150 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:39.851374 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302423 : cluster [WRN] Health check update: Degraded data redundancy: 376/16191 objects degraded (2.322%), 116 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:33.041967 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882975 : cluster [DBG] pgmap v882939: 152 pgs: 2 active+undersized, 150 active+undersized+degraded; 20.7GiB data, 63.1GiB used, 1.25TiB / 1.31TiB avail; 53.1MiB/s rd, 55.5MiB/s wr, 175op/s; 5332/15996 objects degraded (33.333%)
2019-08-09 09:41:35.062051 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882976 : cluster [DBG] pgmap v882940: 152 pgs: 9 active+clean, 40 active+recovery_wait+degraded, 2 active+undersized, 101 active+undersized+degraded; 20.8GiB data, 63.6GiB used, 1.25TiB / 1.31TiB avail; 26.6MiB/s rd, 44.1MiB/s wr, 178op/s; 3713/16041 objects degraded (23.147%); 258B/s, 0objects/s recovering
2019-08-09 09:41:37.082175 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882977 : cluster [DBG] pgmap v882941: 152 pgs: 33 active+clean, 119 active+recovery_wait+degraded; 20.9GiB data, 63.6GiB used, 1.25TiB / 1.31TiB avail; 0B/s rd, 40.8MiB/s wr, 71op/s; 388/16131 objects degraded (2.405%); 10.4MiB/s, 2objects/s recovering
2019-08-09 09:41:39.102098 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882978 : cluster [DBG] pgmap v882942: 152 pgs: 36 active+clean, 116 active+recovery_wait+degraded; 21.0GiB data, 63.9GiB used, 1.25TiB / 1.31TiB avail; 2.79MiB/s rd, 50.7MiB/s wr, 151op/s; 376/16191 objects degraded (2.322%); 16.3MiB/s, 4objects/s recovering
2019-08-09 09:41:41.126091 mgr.cca-pve1 client.349377 172.16.1.66:0/1513459821 882979 : cluster [DBG] pgmap v882943: 152 pgs: 40 active+clean, 1 active+recovering+degraded, 111 active+recovery_wait+degraded; 21.1GiB data, 64.2GiB used, 1.25TiB / 1.31TiB avail; 12.2MiB/s rd, 52.2MiB/s wr, 144op/s; 363/16272 objects degraded (2.231%); 18.4MiB/s, 4objects/s recovering
2019-08-09 09:41:44.851674 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302424 : cluster [WRN] Health check update: Degraded data redundancy: 346/16332 objects degraded (2.119%), 108 pgs degraded (PG_DEGRADED)
2019-08-09 09:41:49.851889 mon.cca-pve1 mon.0 172.16.1.66:6789/0 302426 : cluster [WRN] Health check update: Degraded data redundancy: 299/16578 objects degraded (1.804%), 92 pgs degraded (PG_DEGRADED)
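
The mon log shows the expected failure/recovery sequence: osd.0 is reported failed by its peers at 09:41:02, boots again at 09:41:31, and the degraded objects recover over the following minutes. To follow such an event live, something like:
Code:
ceph -w             # stream cluster events as they happen
ceph pg stat        # summary of degraded/recovering PGs
ceph health detail  # which health checks are currently failing, and why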
 
What does your hardware setup look like? And which Ceph version are you running?
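
For reference, the usual way to collect that information (standard PVE/Ceph commands):
Code:
pveversion -v       # Proxmox VE package versions, including the Ceph client
ceph versions       # Ceph daemon versions across the cluster
ceph osd tree       # OSD layout per host
ceph-disk list      # devices behind each OSD (Luminous; newer releases use `ceph-volume lvm list`)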
 
