Ceph HEALTH_WARN - mon.pve241 has 21% avail

fpausp

It looks like I'm running out of space...

[Screenshot: upload_2018-11-21_13-25-32.png]

Code:
2018-11-21 07:00:00.000166 mon.pve241 mon.0 10.10.10.241:6789/0 94986 : cluster [WRN] overall HEALTH_WARN mon pve241 is low on available space
2018-11-21 07:45:55.803736 mon.pve241 mon.0 10.10.10.241:6789/0 95535 : cluster [WRN] reached concerning levels of available space on local monitor storage (27% free)
2018-11-21 07:46:55.804154 mon.pve241 mon.0 10.10.10.241:6789/0 95546 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 08:00:00.000155 mon.pve241 mon.0 10.10.10.241:6789/0 95697 : cluster [WRN] overall HEALTH_WARN mon pve241 is low on available space
2018-11-21 08:03:55.809793 mon.pve241 mon.0 10.10.10.241:6789/0 95749 : cluster [WRN] reached concerning levels of available space on local monitor storage (27% free)
2018-11-21 08:04:55.810185 mon.pve241 mon.0 10.10.10.241:6789/0 95762 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 09:00:00.000231 mon.pve241 mon.0 10.10.10.241:6789/0 96415 : cluster [WRN] overall HEALTH_WARN mon pve241 is low on available space
2018-11-21 09:13:55.834528 mon.pve241 mon.0 10.10.10.241:6789/0 96566 : cluster [WRN] reached concerning levels of available space on local monitor storage (27% free)
2018-11-21 09:14:55.834901 mon.pve241 mon.0 10.10.10.241:6789/0 96577 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 09:42:55.844388 mon.pve241 mon.0 10.10.10.241:6789/0 96914 : cluster [WRN] reached concerning levels of available space on local monitor storage (27% free)
2018-11-21 09:43:55.844759 mon.pve241 mon.0 10.10.10.241:6789/0 96927 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 09:57:55.849326 mon.pve241 mon.0 10.10.10.241:6789/0 97079 : cluster [WRN] reached concerning levels of available space on local monitor storage (27% free)
2018-11-21 09:58:55.849718 mon.pve241 mon.0 10.10.10.241:6789/0 97091 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 10:00:00.000136 mon.pve241 mon.0 10.10.10.241:6789/0 97102 : cluster [WRN] overall HEALTH_WARN mon pve241 is low on available space
2018-11-21 10:28:31.506744 mon.pve241 mon.0 10.10.10.241:6789/0 97397 : cluster [INF] osd.4 marked itself down
2018-11-21 10:28:31.506829 mon.pve241 mon.0 10.10.10.241:6789/0 97398 : cluster [INF] osd.5 marked itself down
2018-11-21 10:28:31.557928 mon.pve241 mon.0 10.10.10.241:6789/0 97399 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)
2018-11-21 10:28:31.557974 mon.pve241 mon.0 10.10.10.241:6789/0 97400 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-11-21 10:40:29.084942 mon.pve241 mon.0 10.10.10.241:6789/0 22 : cluster [INF] osd.0 10.10.10.241:6800/2028 boot
2018-11-21 10:40:30.780553 mon.pve242 mon.1 10.10.10.242:6789/0 12 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 10:40:30.953322 mon.pve243 mon.2 10.10.10.243:6789/0 1 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 10:40:31.056160 mon.pve241 mon.0 10.10.10.241:6789/0 34 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 10:40:34.934117 mon.pve242 mon.1 10.10.10.242:6789/0 13 : cluster [WRN] message from mon.0 was stamped 0.269715s in the future, clocks not synchronized
2018-11-21 10:40:35.109841 mon.pve243 mon.2 10.10.10.243:6789/0 2 : cluster [WRN] message from mon.0 was stamped 0.094069s in the future, clocks not synchronized
2018-11-21 10:40:35.186459 mon.pve241 mon.0 10.10.10.241:6789/0 35 : cluster [INF] mon.pve241 is new leader, mons pve241,pve242,pve243 in quorum (ranks 0,1,2)
2018-11-21 10:40:35.194794 mon.pve241 mon.0 10.10.10.241:6789/0 36 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.2685s > max 0.05s
2018-11-21 10:40:35.206384 mon.pve241 mon.0 10.10.10.241:6789/0 41 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve241,pve242)
2018-11-21 10:40:35.211665 mon.pve241 mon.0 10.10.10.241:6789/0 42 : cluster [WRN] mon.2 10.10.10.243:6789/0 clock skew 0.0830282s > max 0.05s
2018-11-21 10:40:35.252098 mon.pve241 mon.0 10.10.10.241:6789/0 45 : cluster [WRN] overall HEALTH_WARN 3 osds down; 1 host (2 osds) down; clock skew detected on mon.pve242; mon pve241 is low on available space
2018-11-21 10:40:35.275035 mon.pve241 mon.0 10.10.10.241:6789/0 46 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-11-21 10:40:35.289401 mon.pve241 mon.0 10.10.10.241:6789/0 48 : cluster [INF] osd.3 10.10.10.242:6800/1930 boot
2018-11-21 10:40:36.241472 mon.pve241 mon.0 10.10.10.241:6789/0 51 : cluster [WRN] Health check update: clock skew detected on mon.pve242, mon.pve243 (MON_CLOCK_SKEW)
2018-11-21 10:40:36.322278 mon.pve241 mon.0 10.10.10.241:6789/0 52 : cluster [WRN] Health check failed: Reduced data availability: 95 pgs inactive, 188 pgs peering (PG_AVAILABILITY)
2018-11-21 10:40:37.338576 mon.pve241 mon.0 10.10.10.241:6789/0 54 : cluster [INF] osd.2 10.10.10.242:6804/2096 boot
2018-11-21 10:40:40.935485 mon.pve243 mon.2 10.10.10.243:6789/0 7 : cluster [WRN] message from mon.0 was stamped 0.190638s in the future, clocks not synchronized
2018-11-21 10:40:41.111081 mon.pve241 mon.0 10.10.10.241:6789/0 58 : cluster [WRN] Health check failed: Degraded data redundancy: 1100/7329 objects degraded (15.009%), 56 pgs degraded (PG_DEGRADED)
2018-11-21 10:40:43.179529 mon.pve241 mon.0 10.10.10.241:6789/0 63 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-11-21 10:40:43.179569 mon.pve241 mon.0 10.10.10.241:6789/0 64 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-11-21 10:40:43.194391 mon.pve241 mon.0 10.10.10.241:6789/0 65 : cluster [INF] osd.5 10.10.10.243:6800/1913 boot
2018-11-21 10:40:45.206530 mon.pve241 mon.0 10.10.10.241:6789/0 69 : cluster [WRN] message from mon.1 was stamped 0.093323s in the future, clocks not synchronized
2018-11-21 10:40:46.209890 mon.pve241 mon.0 10.10.10.241:6789/0 72 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-11-21 10:40:46.231773 mon.pve241 mon.0 10.10.10.241:6789/0 73 : cluster [INF] osd.4 10.10.10.243:6804/2066 boot
2018-11-21 10:40:46.244388 mon.pve241 mon.0 10.10.10.241:6789/0 76 : cluster [WRN] Health check update: Reduced data availability: 128 pgs peering (PG_AVAILABILITY)
2018-11-21 10:40:46.244465 mon.pve241 mon.0 10.10.10.241:6789/0 77 : cluster [WRN] Health check update: Degraded data redundancy: 796/7329 objects degraded (10.861%), 41 pgs degraded (PG_DEGRADED)
2018-11-21 10:40:51.244973 mon.pve241 mon.0 10.10.10.241:6789/0 79 : cluster [WRN] Health check update: Degraded data redundancy: 246/7329 objects degraded (3.357%), 25 pgs degraded (PG_DEGRADED)
2018-11-21 10:40:56.245465 mon.pve241 mon.0 10.10.10.241:6789/0 83 : cluster [WRN] Health check update: Degraded data redundancy: 10/7329 objects degraded (0.136%), 9 pgs degraded (PG_DEGRADED)
2018-11-21 10:40:59.122613 mon.pve241 mon.0 10.10.10.241:6789/0 84 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/7329 objects degraded (0.055%), 4 pgs degraded)
2018-11-21 10:41:01.124500 mon.pve241 mon.0 10.10.10.241:6789/0 85 : cluster [WRN] Health check failed: 1 slow requests are blocked > 32 sec. Implicated osds 1 (REQUEST_SLOW)
2018-11-21 10:41:05.216166 mon.pve241 mon.0 10.10.10.241:6789/0 89 : cluster [WRN] mon.2 10.10.10.243:6789/0 clock skew 0.0754391s > max 0.05s
2018-11-21 10:41:06.073402 mon.pve243 mon.2 10.10.10.243:6789/0 17 : cluster [WRN] message from mon.0 was stamped 0.065261s in the future, clocks not synchronized
2018-11-21 10:41:06.246205 mon.pve241 mon.0 10.10.10.241:6789/0 90 : cluster [WRN] Health check update: 11 slow requests are blocked > 32 sec. Implicated osds 1 (REQUEST_SLOW)
2018-11-21 10:41:06.246498 mon.pve241 mon.0 10.10.10.241:6789/0 91 : cluster [WRN] Health check update: clock skew detected on mon.pve243 (MON_CLOCK_SKEW)
2018-11-21 10:41:11.218016 mon.pve241 mon.0 10.10.10.241:6789/0 92 : cluster [WRN] reached concerning levels of available space on local monitor storage (26% free)
2018-11-21 10:41:29.142943 mon.pve241 mon.0 10.10.10.241:6789/0 100 : cluster [WRN] Health check update: Reduced data availability: 40 pgs inactive, 128 pgs peering (PG_AVAILABILITY)
2018-11-21 10:41:37.158541 mon.pve241 mon.0 10.10.10.241:6789/0 103 : cluster [WRN] Health check update: Reduced data availability: 63 pgs inactive, 128 pgs peering (PG_AVAILABILITY)
2018-11-21 10:41:43.164426 mon.pve241 mon.0 10.10.10.241:6789/0 105 : cluster [WRN] Health check update: Reduced data availability: 96 pgs inactive, 128 pgs peering (PG_AVAILABILITY)
2018-11-21 10:41:51.251223 mon.pve241 mon.0 10.10.10.241:6789/0 106 : cluster [WRN] Health check update: Reduced data availability: 128 pgs inactive, 128 pgs peering (PG_AVAILABILITY)
2018-11-21 10:42:06.252540 mon.pve241 mon.0 10.10.10.241:6789/0 110 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pve243)
2018-11-21 10:42:35.189323 mon.pve241 mon.0 10.10.10.241:6789/0 120 : cluster [WRN] Health check update: 12 slow requests are blocked > 32 sec. Implicated osds 1 (REQUEST_SLOW)
2018-11-21 11:00:00.000218 mon.pve241 mon.0 10.10.10.241:6789/0 366 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 128 pgs inactive, 128 pgs peering; 12 slow requests are blocked > 32 sec. Implicated osds 1; mon pve241 is low on available space
2018-11-21 11:10:36.411496 mon.pve241 mon.0 10.10.10.241:6789/0 538 : cluster [INF] osd.1 marked down after no beacon for 900.076937 seconds
2018-11-21 11:10:36.412382 mon.pve241 mon.0 10.10.10.241:6789/0 539 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2018-11-21 11:10:38.425771 mon.pve241 mon.0 10.10.10.241:6789/0 542 : cluster [WRN] Health check update: Reduced data availability: 109 pgs inactive, 96 pgs peering (PG_AVAILABILITY)
2018-11-21 11:10:38.425807 mon.pve241 mon.0 10.10.10.241:6789/0 543 : cluster [WRN] Health check failed: Degraded data redundancy: 349/7329 objects degraded (4.762%), 10 pgs degraded (PG_DEGRADED)
2018-11-21 11:10:38.425832 mon.pve241 mon.0 10.10.10.241:6789/0 544 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 12 slow requests are blocked > 32 sec. Implicated osds 1)
2018-11-21 11:10:40.459929 mon.pve241 mon.0 10.10.10.241:6789/0 545 : cluster [WRN] Health check failed: 12 slow requests are blocked > 32 sec. Implicated osds 1 (REQUEST_SLOW)
2018-11-21 11:10:44.613457 mon.pve241 mon.0 10.10.10.241:6789/0 548 : cluster [WRN] Health check update: Degraded data redundancy: 1359/7329 objects degraded (18.543%), 72 pgs degraded (PG_DEGRADED)
2018-11-21 11:10:44.613512 mon.pve241 mon.0 10.10.10.241:6789/0 549 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 13 pgs inactive)
2018-11-21 11:10:51.415162 mon.pve241 mon.0 10.10.10.241:6789/0 550 : cluster [WRN] Health check update: Degraded data redundancy: 1354/7329 objects degraded (18.475%), 72 pgs degraded (PG_DEGRADED)
2018-11-21 11:11:08.593148 mon.pve241 mon.0 10.10.10.241:6789/0 560 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 12 slow requests are blocked > 32 sec. Implicated osds 1)
2018-11-21 11:11:10.603147 mon.pve241 mon.0 10.10.10.241:6789/0 561 : cluster [WRN] Health check failed: 12 slow requests are blocked > 32 sec. Implicated osds 1 (REQUEST_SLOW)
2018-11-21 11:11:38.494856 mon.pve241 mon.0 10.10.10.241:6789/0 568 : cluster [WRN] Health check update: Degraded data redundancy: 1354/7329 objects degraded (18.475%), 72 pgs degraded, 128 pgs undersized (PG_DEGRADED)
2018-11-21 11:13:11.228719 mon.pve241 mon.0 10.10.10.241:6789/0 588 : cluster [WRN] reached concerning levels of available space on local monitor storage (25% free)
2018-11-21 11:14:11.229110 mon.pve241 mon.0 10.10.10.241:6789/0 598 : cluster [WRN] reached concerning levels of available space on local monitor storage (21% free)
2018-11-21 11:15:11.827837 mon.pve241 mon.0 10.10.10.241:6789/0 613 : cluster [INF] osd.0 marked itself down
2018-11-21 11:15:12.667166 mon.pve241 mon.0 10.10.10.241:6789/0 614 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-11-21 11:15:12.667205 mon.pve241 mon.0 10.10.10.241:6789/0 615 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-11-21 11:15:24.664517 mon.pve242 mon.1 10.10.10.242:6789/0 427 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 11:15:24.706179 mon.pve243 mon.2 10.10.10.243:6789/0 385 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:15:29.668765 mon.pve242 mon.1 10.10.10.242:6789/0 428 : cluster [INF] mon.pve242 is new leader, mons pve242,pve243 in quorum (ranks 1,2)
2018-11-21 11:15:29.676424 mon.pve242 mon.1 10.10.10.242:6789/0 433 : cluster [WRN] Health check failed: 1/3 mons down, quorum pve242,pve243 (MON_DOWN)
2018-11-21 11:15:29.676470 mon.pve242 mon.1 10.10.10.242:6789/0 434 : cluster [INF] Health check cleared: MON_DISK_LOW (was: mon pve241 is low on available space)
2018-11-21 11:15:29.684371 mon.pve242 mon.1 10.10.10.242:6789/0 436 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 1354/7329 objects degraded (18.475%), 72 pgs degraded, 128 pgs undersized; 12 slow requests are blocked > 32 sec. Implicated osds 1; 1/3 mons down, quorum pve242,pve243
2018-11-21 11:15:52.279739 mon.pve242 mon.1 10.10.10.242:6789/0 447 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 11:15:52.321092 mon.pve243 mon.2 10.10.10.243:6789/0 391 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:15:52.322070 mon.pve242 mon.1 10.10.10.242:6789/0 448 : cluster [WRN] message from mon.0 was stamped 0.680893s in the future, clocks not synchronized
2018-11-21 11:15:52.362662 mon.pve243 mon.2 10.10.10.243:6789/0 392 : cluster [WRN] message from mon.0 was stamped 0.640419s in the future, clocks not synchronized
2018-11-21 11:15:52.925478 mon.pve241 mon.0 10.10.10.241:6789/0 1 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:15:52.966399 mon.pve241 mon.0 10.10.10.241:6789/0 2 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:15:52.975798 mon.pve241 mon.0 10.10.10.241:6789/0 3 : cluster [INF] mon.pve241 is new leader, mons pve241,pve242,pve243 in quorum (ranks 0,1,2)
2018-11-21 11:15:52.992420 mon.pve241 mon.0 10.10.10.241:6789/0 4 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.681076s > max 0.05s
2018-11-21 11:15:52.992657 mon.pve241 mon.0 10.10.10.241:6789/0 5 : cluster [WRN] mon.2 10.10.10.243:6789/0 clock skew 0.639708s > max 0.05s
2018-11-21 11:15:53.010933 mon.pve241 mon.0 10.10.10.241:6789/0 10 : cluster [WRN] Health check failed: clock skew detected on mon.pve242, mon.pve243 (MON_CLOCK_SKEW)
2018-11-21 11:15:53.011039 mon.pve241 mon.0 10.10.10.241:6789/0 11 : cluster [WRN] Health check failed: mon pve241 is low on available space (MON_DISK_LOW)
2018-11-21 11:15:53.011283 mon.pve241 mon.0 10.10.10.241:6789/0 12 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve242,pve243)
2018-11-21 11:15:53.015085 mon.pve241 mon.0 10.10.10.241:6789/0 13 : cluster [INF] Active manager daemon pve241 restarted
2018-11-21 11:15:53.015212 mon.pve241 mon.0 10.10.10.241:6789/0 14 : cluster [INF] Activating manager daemon pve241
2018-11-21 11:15:53.027367 mon.pve241 mon.0 10.10.10.241:6789/0 15 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 1354/7329 objects degraded (18.475%), 72 pgs degraded, 128 pgs undersized; 12 slow requests are blocked > 32 sec. Implicated osds 1; clock skew detected on mon.pve242, mon.pve243; mon pve241 is low on available space
2018-11-21 11:15:54.677312 mon.pve241 mon.0 10.10.10.241:6789/0 18 : cluster [INF] Manager daemon pve241 is now available
2018-11-21 11:15:56.123928 mon.pve241 mon.0 10.10.10.241:6789/0 20 : cluster [WRN] Health check update: Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded, 128 pgs undersized (PG_DEGRADED)
2018-11-21 11:15:56.123967 mon.pve241 mon.0 10.10.10.241:6789/0 21 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 12 slow requests are blocked > 32 sec. Implicated osds 1)
2018-11-21 11:15:57.921107 mon.pve241 mon.0 10.10.10.241:6789/0 22 : cluster [INF] Manager daemon pve241 is unresponsive.  No standby daemons available.
2018-11-21 11:15:57.921198 mon.pve241 mon.0 10.10.10.241:6789/0 23 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)
2018-11-21 11:15:59.434449 mon.pve241 mon.0 10.10.10.241:6789/0 25 : cluster [INF] Activating manager daemon pve243
2018-11-21 11:15:59.486501 mon.pve241 mon.0 10.10.10.241:6789/0 26 : cluster [INF] Health check cleared: MGR_DOWN (was: no active mgr)
2018-11-21 11:15:59.647215 mon.pve241 mon.0 10.10.10.241:6789/0 28 : cluster [INF] Manager daemon pve243 is now available
2018-11-21 11:16:01.726555 mon.pve241 mon.0 10.10.10.241:6789/0 33 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-11-21 11:16:01.726604 mon.pve241 mon.0 10.10.10.241:6789/0 34 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-11-21 11:16:01.814382 mon.pve241 mon.0 10.10.10.241:6789/0 35 : cluster [INF] osd.0 10.10.10.241:6801/1984 boot
2018-11-21 11:16:02.814073 mon.pve241 mon.0 10.10.10.241:6789/0 41 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-11-21 11:16:02.853923 mon.pve241 mon.0 10.10.10.241:6789/0 42 : cluster [INF] osd.1 10.10.10.241:6805/2143 boot
2018-11-21 11:16:07.484744 mon.pve241 mon.0 10.10.10.241:6789/0 47 : cluster [WRN] Health check update: Degraded data redundancy: 495/7329 objects degraded (6.754%), 63 pgs degraded, 40 pgs undersized (PG_DEGRADED)
2018-11-21 11:16:12.922177 mon.pve241 mon.0 10.10.10.241:6789/0 50 : cluster [WRN] Health check update: Degraded data redundancy: 113/7329 objects degraded (1.542%), 49 pgs degraded (PG_DEGRADED)
2018-11-21 11:16:17.922606 mon.pve241 mon.0 10.10.10.241:6789/0 52 : cluster [WRN] Health check update: Degraded data redundancy: 87/7329 objects degraded (1.187%), 37 pgs degraded (PG_DEGRADED)
2018-11-21 11:16:22.923194 mon.pve241 mon.0 10.10.10.241:6789/0 55 : cluster [WRN] Health check update: Degraded data redundancy: 54/7329 objects degraded (0.737%), 23 pgs degraded (PG_DEGRADED)
2018-11-21 11:16:27.923609 mon.pve241 mon.0 10.10.10.241:6789/0 58 : cluster [WRN] Health check update: Degraded data redundancy: 34/7329 objects degraded (0.464%), 14 pgs degraded (PG_DEGRADED)
2018-11-21 11:16:27.923927 mon.pve241 mon.0 10.10.10.241:6789/0 59 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pve242, mon.pve243)
2018-11-21 11:16:32.925684 mon.pve241 mon.0 10.10.10.241:6789/0 61 : cluster [WRN] Health check update: Degraded data redundancy: 11/7329 objects degraded (0.150%), 4 pgs degraded (PG_DEGRADED)
2018-11-21 11:16:37.093427 mon.pve241 mon.0 10.10.10.241:6789/0 63 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11/7329 objects degraded (0.150%), 4 pgs degraded)
2018-11-21 11:16:52.899859 mon.pve241 mon.0 10.10.10.241:6789/0 67 : cluster [WRN] reached concerning levels of available space on local monitor storage (21% free)
2018-11-21 11:19:58.949369 mon.pve241 mon.0 10.10.10.241:6789/0 103 : cluster [WRN] message from mon.2 was stamped 0.050202s in the future, clocks not synchronized
2018-11-21 11:20:04.686406 mon.pve241 mon.0 10.10.10.241:6789/0 105 : cluster [WRN] message from mon.2 was stamped 0.050403s in the future, clocks not synchronized
2018-11-21 11:25:10.430307 mon.pve241 mon.0 10.10.10.241:6789/0 162 : cluster [INF] osd.3 marked itself down
2018-11-21 11:25:10.430546 mon.pve241 mon.0 10.10.10.241:6789/0 163 : cluster [INF] osd.2 marked itself down
2018-11-21 11:25:10.482542 mon.pve241 mon.0 10.10.10.241:6789/0 164 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)
2018-11-21 11:25:10.482580 mon.pve241 mon.0 10.10.10.241:6789/0 165 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-11-21 11:25:11.648484 mon.pve241 mon.0 10.10.10.241:6789/0 169 : cluster [WRN] Health check failed: Reduced data availability: 12 pgs inactive, 59 pgs peering (PG_AVAILABILITY)
2018-11-21 11:25:22.703740 mon.pve241 mon.0 10.10.10.241:6789/0 172 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:25:22.744516 mon.pve243 mon.2 10.10.10.243:6789/0 509 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:25:27.717729 mon.pve241 mon.0 10.10.10.241:6789/0 173 : cluster [INF] mon.pve241 is new leader, mons pve241,pve243 in quorum (ranks 0,2)
2018-11-21 11:25:27.732035 mon.pve241 mon.0 10.10.10.241:6789/0 178 : cluster [WRN] Health check failed: 1/3 mons down, quorum pve241,pve243 (MON_DOWN)
2018-11-21 11:25:27.742095 mon.pve241 mon.0 10.10.10.241:6789/0 179 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Reduced data availability: 12 pgs inactive, 59 pgs peering; Degraded data redundancy: 671/7329 objects degraded (9.155%), 35 pgs degraded; mon pve241 is low on available space; 1/3 mons down, quorum pve241,pve243
2018-11-21 11:25:28.733047 mon.pve241 mon.0 10.10.10.241:6789/0 181 : cluster [WRN] Health check update: Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded (PG_DEGRADED)
2018-11-21 11:25:28.733097 mon.pve241 mon.0 10.10.10.241:6789/0 182 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 12 pgs inactive, 59 pgs peering)
2018-11-21 11:25:47.276909 mon.pve241 mon.0 10.10.10.241:6789/0 190 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:25:47.304152 mon.pve241 mon.0 10.10.10.241:6789/0 191 : cluster [INF] mon.pve241 is new leader, mons pve241,pve242,pve243 in quorum (ranks 0,1,2)
2018-11-21 11:25:47.320215 mon.pve241 mon.0 10.10.10.241:6789/0 196 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve241,pve243)
2018-11-21 11:25:47.322227 mon.pve241 mon.0 10.10.10.241:6789/0 197 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 1.12593s > max 0.05s
2018-11-21 11:25:47.323349 mon.pve241 mon.0 10.10.10.241:6789/0 198 : cluster [WRN] message from mon.1 was stamped 1.146361s in the future, clocks not synchronized
2018-11-21 11:25:47.323405 mon.pve243 mon.2 10.10.10.243:6789/0 516 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:25:47.333755 mon.pve241 mon.0 10.10.10.241:6789/0 199 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded; mon pve241 is low on available space
2018-11-21 11:25:47.973007 mon.pve241 mon.0 10.10.10.241:6789/0 200 : cluster [WRN] Health check failed: clock skew detected on mon.pve242 (MON_CLOCK_SKEW)
2018-11-21 11:25:48.418753 mon.pve242 mon.1 10.10.10.242:6789/0 1 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 11:25:48.780695 mon.pve241 mon.0 10.10.10.241:6789/0 201 : cluster [INF] osd.5 marked itself down
2018-11-21 11:25:48.784734 mon.pve241 mon.0 10.10.10.241:6789/0 202 : cluster [INF] osd.4 marked itself down
2018-11-21 11:25:49.056407 mon.pve241 mon.0 10.10.10.241:6789/0 203 : cluster [WRN] Health check update: 4 osds down (OSD_DOWN)
2018-11-21 11:25:49.056446 mon.pve241 mon.0 10.10.10.241:6789/0 204 : cluster [WRN] Health check update: 2 hosts (4 osds) down (OSD_HOST_DOWN)
2018-11-21 11:26:29.671249 mon.pve241 mon.0 10.10.10.241:6789/0 233 : cluster [INF] Active manager daemon pve243 restarted
2018-11-21 11:26:29.671334 mon.pve241 mon.0 10.10.10.241:6789/0 234 : cluster [INF] Activating manager daemon pve243
2018-11-21 11:26:30.132758 mon.pve241 mon.0 10.10.10.241:6789/0 236 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:26:30.140227 mon.pve241 mon.0 10.10.10.241:6789/0 237 : cluster [INF] mon.pve241 is new leader, mons pve241,pve242,pve243 in quorum (ranks 0,1,2)
2018-11-21 11:26:30.148586 mon.pve242 mon.1 10.10.10.242:6789/0 11 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 11:26:30.153863 mon.pve241 mon.0 10.10.10.241:6789/0 242 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pve242)
2018-11-21 11:26:30.153921 mon.pve241 mon.0 10.10.10.241:6789/0 243 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve241,pve242)
2018-11-21 11:26:30.156311 mon.pve241 mon.0 10.10.10.241:6789/0 244 : cluster [WRN] mon.2 10.10.10.243:6789/0 clock skew 0.837373s > max 0.05s
2018-11-21 11:26:30.163692 mon.pve241 mon.0 10.10.10.241:6789/0 245 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded; mon pve241 is low on available space
2018-11-21 11:26:30.984903 mon.pve243 mon.2 10.10.10.243:6789/0 1 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:26:32.053158 mon.pve241 mon.0 10.10.10.241:6789/0 249 : cluster [INF] Manager daemon pve243 is now available
2018-11-21 11:26:32.986266 mon.pve241 mon.0 10.10.10.241:6789/0 250 : cluster [WRN] Health check failed: clock skew detected on mon.pve243 (MON_CLOCK_SKEW)
2018-11-21 11:26:39.958960 mon.pve241 mon.0 10.10.10.241:6789/0 255 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-11-21 11:26:39.959001 mon.pve241 mon.0 10.10.10.241:6789/0 256 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-11-21 11:26:39.966404 mon.pve241 mon.0 10.10.10.241:6789/0 257 : cluster [INF] osd.5 10.10.10.243:6801/1925 boot
2018-11-21 11:26:42.042172 mon.pve241 mon.0 10.10.10.241:6789/0 263 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-11-21 11:26:42.053604 mon.pve241 mon.0 10.10.10.241:6789/0 264 : cluster [INF] osd.4 10.10.10.243:6805/2084 boot
2018-11-21 11:26:44.059419 mon.pve241 mon.0 10.10.10.241:6789/0 267 : cluster [WRN] Health check update: Degraded data redundancy: 912/7329 objects degraded (12.444%), 48 pgs degraded (PG_DEGRADED)
2018-11-21 11:26:47.945184 mon.pve241 mon.0 10.10.10.241:6789/0 269 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 507/7329 objects degraded (6.918%), 26 pgs degraded)
2018-11-21 11:26:52.907256 mon.pve241 mon.0 10.10.10.241:6789/0 271 : cluster [WRN] reached concerning levels of available space on local monitor storage (21% free)
2018-11-21 11:27:02.990380 mon.pve241 mon.0 10.10.10.241:6789/0 274 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pve243)
2018-11-21 11:34:22.826795 mon.pve241 mon.0 10.10.10.241:6789/0 380 : cluster [INF] osd.4 marked itself down
2018-11-21 11:34:22.830751 mon.pve241 mon.0 10.10.10.241:6789/0 381 : cluster [INF] osd.5 marked itself down
2018-11-21 11:34:22.878192 mon.pve241 mon.0 10.10.10.241:6789/0 382 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)
2018-11-21 11:34:22.878241 mon.pve241 mon.0 10.10.10.241:6789/0 383 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-11-21 11:39:32.720127 mon.pve242 mon.1 10.10.10.242:6789/0 18 : cluster [INF] mon.pve242 calling monitor election
2018-11-21 11:39:32.733422 mon.pve241 mon.0 10.10.10.241:6789/0 58 : cluster [INF] mon.pve241 calling monitor election
2018-11-21 11:39:32.750802 mon.pve243 mon.2 10.10.10.243:6789/0 1 : cluster [INF] mon.pve243 calling monitor election
2018-11-21 11:39:35.995324 mon.pve241 mon.0 10.10.10.241:6789/0 59 : cluster [INF] mon.pve241 is new leader, mons pve241,pve242,pve243 in quorum (ranks 0,1,2)
2018-11-21 11:39:36.003491 mon.pve241 mon.0 10.10.10.241:6789/0 64 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve241,pve242)
2018-11-21 11:39:36.037494 mon.pve241 mon.0 10.10.10.241:6789/0 67 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded; mon pve241 is low on available space
2018-11-21 11:39:37.114141 mon.pve241 mon.0 10.10.10.241:6789/0 70 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-11-21 11:39:37.114203 mon.pve241 mon.0 10.10.10.241:6789/0 71 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-11-21 11:39:37.161817 mon.pve241 mon.0 10.10.10.241:6789/0 72 : cluster [INF] osd.4 10.10.10.243:6804/2068 boot
2018-11-21 11:39:38.140934 mon.pve241 mon.0 10.10.10.241:6789/0 76 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-11-21 11:39:38.142279 mon.pve241 mon.0 10.10.10.241:6789/0 77 : cluster [WRN] Health check update: Degraded data redundancy: 2443/7329 objects degraded (33.333%), 128 pgs degraded, 256 pgs undersized (PG_DEGRADED)
2018-11-21 11:39:38.147556 mon.pve241 mon.0 10.10.10.241:6789/0 78 : cluster [INF] osd.5 10.10.10.243:6800/1907 boot
2018-11-21 11:39:40.075837 mon.pve241 mon.0 10.10.10.241:6789/0 81 : cluster [WRN] reached concerning levels of available space on local monitor storage (21% free)
2018-11-21 11:39:44.256138 mon.pve241 mon.0 10.10.10.241:6789/0 88 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 912/7329 objects degraded (12.444%), 48 pgs degraded, 91 pgs undersized)
2018-11-21 12:00:00.000176 mon.pve241 mon.0 10.10.10.241:6789/0 505 : cluster [WRN] overall HEALTH_WARN mon pve241 is low on available space
2018-11-21 12:58:34.931859 mon.pve242 mon.1 10.10.10.242:6789/0 1030 : cluster [WRN] message from mon.0 was stamped 0.050089s in the future, clocks not synchronized
2018-11-21 12:59:36.003885 mon.pve241 mon.0 10.10.10.241:6789/0 1361 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0520737s > max 0.05s
2018-11-21 12:59:40.777796 mon.pve241 mon.0 10.10.10.241:6789/0 1362 : cluster [WRN] Health check failed: clock skew detected on mon.pve242 (MON_CLOCK_SKEW)
2018-11-21 13:00:00.000191 mon.pve241 mon.0 10.10.10.241:6789/0 1371 : cluster [WRN] overall HEALTH_WARN clock skew detected on mon.pve242; mon pve241 is low on available space
2018-11-21 13:00:06.005106 mon.pve241 mon.0 10.10.10.241:6789/0 1374 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.052572s > max 0.05s
2018-11-21 13:00:40.203750 mon.pve242 mon.1 10.10.10.242:6789/0 1053 : cluster [WRN] message from mon.0 was stamped 0.051901s in the future, clocks not synchronized
2018-11-21 13:01:06.006274 mon.pve241 mon.0 10.10.10.241:6789/0 1385 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0535985s > max 0.05s
2018-11-21 13:02:36.007438 mon.pve241 mon.0 10.10.10.241:6789/0 1411 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0552215s > max 0.05s
2018-11-21 13:04:36.008662 mon.pve241 mon.0 10.10.10.241:6789/0 1440 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.057396s > max 0.05s
2018-11-21 13:07:06.009907 mon.pve241 mon.0 10.10.10.241:6789/0 1479 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0602925s > max 0.05s
2018-11-21 13:10:06.010965 mon.pve241 mon.0 10.10.10.241:6789/0 1521 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0639492s > max 0.05s
2018-11-21 13:11:05.508035 mon.pve242 mon.1 10.10.10.242:6789/0 1166 : cluster [WRN] message from mon.0 was stamped 0.063312s in the future, clocks not synchronized
2018-11-21 13:13:36.012478 mon.pve241 mon.0 10.10.10.241:6789/0 1576 : cluster [WRN] mon.1 10.10.10.242:6789/0 clock skew 0.0678975s > max 0.05s

How should I approach this?
 
Hi,

I had the same thing once on a test cluster.
In that case it wasn't the OSDs that were full, but a partition.

What does ceph osd df say?
Or df -lh?

I then threw out old kernels and everything was fine again. Have a look at that first.
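If it turns out to be the root filesystem that is filling up, here is a quick, generic sketch for narrowing down where the space goes (nothing specific to your cluster; the monitor warning refers to the filesystem that holds its data directory):
Code:
# Which filesystem holds the monitor's data directory (the one the warning is about)
df -h /var/lib/ceph/mon

# Biggest top-level directories on the root filesystem
# (-x stays on one filesystem, so the mounted OSD disks are not counted)
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15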
Regards, Thomas
 
ceph osd df
Code:
root@pve241:~# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR  PGS
 0   hdd 0.97609  1.00000 1000GiB 4.50GiB  995GiB 0.45 0.91 128
 1   hdd 0.97609  1.00000 1000GiB 5.40GiB  994GiB 0.54 1.09 128
 2   hdd 0.97609  1.00000 1000GiB 5.03GiB  994GiB 0.50 1.02 134
 3   hdd 0.97609  1.00000 1000GiB 4.87GiB  995GiB 0.49 0.98 122
 4   hdd 0.97609  1.00000 1000GiB 5.44GiB  994GiB 0.54 1.10 133
 5   hdd 0.97609  1.00000 1000GiB 4.46GiB  995GiB 0.45 0.90 123
                    TOTAL 5.86TiB 29.7GiB 5.83TiB 0.50
MIN/MAX VAR: 0.90/1.10  STDDEV: 0.04

Code:
root@pve241:~# df -lh
Filesystem            Size  Used Avail Use% Mounted on
udev                  5.8G     0  5.8G   0% /dev
tmpfs                 1.2G  8.8M  1.2G   1% /run
/dev/mapper/pve-root  7.6G  5.6G  1.7G  77% /
tmpfs                 5.8G   63M  5.7G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/fuse              30M   28K   30M   1% /etc/pve
/dev/sdc1            1000G  4.5G  996G   1% /var/lib/ceph/osd/ceph-0
/dev/sdd1            1000G  5.5G  995G   1% /var/lib/ceph/osd/ceph-1
tmpfs                 1.2G     0  1.2G   0% /run/user/0
 
That looks to me more like old kernels lying around.
Have it show you which ones are still installed:
dpkg -l | grep linux-image

Also show yourself the currently running one. That one must stay!
uname -r

And you can probably uninstall the old ones with:
apt remove <kernel>

That frees up space.
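As a rough sketch of the whole cleanup (the package name is only an example of the pve-kernel naming scheme; never remove the image that uname -r reports as running):
Code:
# The currently running kernel (keep this one installed)
uname -r

# All installed Proxmox kernel images
dpkg -l | grep pve-kernel-

# Example only: remove one specific old image (adjust the version to your list)
apt remove pve-kernel-4.15.17-1-pve

# Clean up orphaned packages and the apt download cache as well
apt autoremove
apt clean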
 
For now I've solved it by deleting ISO files on the pve241 node... As you can see, the root filesystem isn't very big, so the problem will probably come back. How is Ceph actually supposed to be maintained?

[Screenshot: upload_2018-11-21_15-21-37.png]
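For reference: the MON_DISK_LOW warning fires when the filesystem holding the monitor's data directory (/var/lib/ceph/mon) drops below the mon_data_avail_warn threshold, 30% free by default. Purely as a sketch, assuming a Luminous-era cluster where injectargs still works, the threshold can be inspected and changed at runtime like this; freeing up the root filesystem is of course the proper fix:
Code:
# Current warning threshold (percent free), via the admin socket (run on pve241)
ceph daemon mon.pve241 config get mon_data_avail_warn

# Runtime-only change on all monitors; to persist it, set the option in the
# [mon] section of ceph.conf (on Proxmox usually /etc/pve/ceph.conf)
ceph tell mon.* injectargs '--mon_data_avail_warn 20'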
 
dpkg -l | grep linux-image
Code:
root@pve241:/# dpkg -l | grep pve-kernel-
ii  pve-kernel-4.15                      5.2-12                         all          Latest Proxmox VE Kernel Image
ii  pve-kernel-4.15.17-1-pve             4.15.17-9                      amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-2-pve             4.15.18-21                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-4-pve             4.15.18-23                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-5-pve             4.15.18-24                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-7-pve             4.15.18-27                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-8-pve             4.15.18-28                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-4.15.18-9-pve             4.15.18-30                     amd64        The Proxmox PVE Kernel Image

apt remove <kernel>
How many kernels should one leave installed?
 
I've left the last three kernels installed...

That looks a lot better already:
Code:
root@pve241:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  5.8G     0  5.8G   0% /dev
tmpfs                 1.2G  8.8M  1.2G   1% /run
/dev/mapper/pve-root  7.6G  2.6G  4.6G  37% /
tmpfs                 5.8G   36M  5.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/fuse              30M   28K   30M   1% /etc/pve
/dev/sdd1            1000G  5.4G  995G   1% /var/lib/ceph/osd/ceph-1
tmpfs                 1.2G     0  1.2G   0% /run/user/0
/dev/sdc1            1000G  4.5G  996G   1% /var/lib/ceph/osd/ceph-0

Thanks for the info!
 
