pveperf

a1d3s

I am seeing very different values across my 5 nodes
Code:
root@pve1:~# pveperf
CPU BOGOMIPS:      76777.28
REGEX/SECOND:      1064016
HD SIZE:           94.37 GB (/dev/dm-0)
BUFFERED READS:    100.31 MB/sec
AVERAGE SEEK TIME: 11.78 ms
FSYNCS/SECOND:     53.53
DNS EXT:           27.29 ms
DNS INT:           37.39 ms (cen.de)
Code:
root@pve2:~# pveperf
CPU BOGOMIPS:      76777.28
REGEX/SECOND:      1064016
HD SIZE:           94.37 GB (/dev/dm-0)
BUFFERED READS:    100.31 MB/sec
AVERAGE SEEK TIME: 11.78 ms
FSYNCS/SECOND:     53.53
DNS EXT:           27.29 ms
DNS INT:           37.39 ms (cen.de)
Code:
root@pve3:~# pveperf
CPU BOGOMIPS:      11970.98
REGEX/SECOND:      1143133
HD SIZE:           94.37 GB (/dev/dm-0)
BUFFERED READS:    102.82 MB/sec
AVERAGE SEEK TIME: 9.59 ms
FSYNCS/SECOND:     2285.12
DNS EXT:           26.98 ms
DNS INT:           38.67 ms (cen.de)
Code:
root@pve4:~# pveperf
CPU BOGOMIPS:      32000.28
REGEX/SECOND:      770594
HD SIZE:           94.37 GB (/dev/dm-0)
BUFFERED READS:    110.38 MB/sec
AVERAGE SEEK TIME: 9.54 ms
FSYNCS/SECOND:     38.84
DNS EXT:           35.01 ms
DNS INT:           51.21 ms (cen.de)
Code:
root@pve5:~# pveperf
CPU BOGOMIPS:      115202.64
REGEX/SECOND:      996014
HD SIZE:           94.37 GB (/dev/dm-0)
BUFFERED READS:    132.89 MB/sec
AVERAGE SEEK TIME: 9.72 ms
FSYNCS/SECOND:     884.36
DNS EXT:           17.92 ms
DNS INT:           2.53 ms (cen.de)

What can be done to get better fsync values everywhere?
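A quick way to compare the underlying storage directly: pveperf accepts an optional path argument, so it can be pointed at the filesystem that actually backs the VM disks or OSD journals instead of the root filesystem (the paths below are only examples):
Code:
# benchmark the filesystem containing the given path
pveperf /var/lib/vz
# e.g. an OSD data/journal mount point, if one exists on the node
pveperf /var/lib/ceph/osd/ceph-0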
Code:
pveversion --verbose
proxmox-ve: 4.4-96 (running kernel: 4.4.83-1-pve)
pve-manager: 4.4-18 (running version: 4.4-18/ef2610e8)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.83-1-pve: 4.4.83-96
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-53
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.0-5~pve4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 10.2.10-1~bpo80+1
 
When I restore a machine from a backup with qmrestore, the whole cluster locks up.

Code:
2017-11-08 12:00:44.751445 osd.13 10.10.10.3:6808/5393 595 : cluster [WRN] slow request 30.110021 seconds old, received at 2017-11-08 12:00:14.641358: osd_op(client.35438275.1:113120 2.4eab663d rbd_data.2f20cc238e1f29.00000000000006e7 [set-alloc-hint object_size 4194304 write_size 4194304,write 3997696~196608] snapc 0=[] ondisk+write e2538) currently waiting for subops from 10
2017-11-08 12:00:54.311504 mon.0 10.10.10.1:6789/0 3187 : cluster [INF] pgmap v15377024: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42047 GB / 44563 GB avail; 27224 B/s rd, 16415 kB/s wr, 42 op/s
2017-11-08 12:00:50.004004 osd.17 10.10.10.3:6840/10219 913 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.008030 secs
2017-11-08 12:00:50.004010 osd.17 10.10.10.3:6840/10219 914 : cluster [WRN] slow request 30.008030 seconds old, received at 2017-11-08 12:00:19.995921: osd_op(client.32244331.1:7423161 2.bf1ef948 rbd_data.46b8ec238e1f29.0000000000009694 [set-alloc-hint object_size 4194304 write_size 4194304,write 2097152~2097152] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:00:56.431620 mon.0 10.10.10.1:6789/0 3188 : cluster [INF] pgmap v15377025: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42047 GB / 44563 GB avail; 22648 B/s rd, 20584 kB/s wr, 54 op/s
2017-11-08 12:00:58.311020 mon.0 10.10.10.1:6789/0 3189 : cluster [INF] pgmap v15377026: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42047 GB / 44563 GB avail; 25332 B/s rd, 23950 kB/s wr, 61 op/s
2017-11-08 12:01:00.417593 mon.0 10.10.10.1:6789/0 3190 : cluster [INF] pgmap v15377027: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42047 GB / 44563 GB avail; 33840 B/s rd, 18345 kB/s wr, 108 op/s
2017-11-08 12:01:01.962233 mon.0 10.10.10.1:6789/0 3191 : cluster [INF] pgmap v15377028: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42047 GB / 44563 GB avail; 39829 B/s rd, 20266 kB/s wr, 127 op/s
2017-11-08 12:01:03.987623 mon.0 10.10.10.1:6789/0 3192 : cluster [INF] pgmap v15377029: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 1127 kB/s wr, 3 op/s
2017-11-08 12:00:57.004633 osd.17 10.10.10.3:6840/10219 915 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.738898 secs
2017-11-08 12:00:57.004647 osd.17 10.10.10.3:6840/10219 916 : cluster [WRN] slow request 30.738898 seconds old, received at 2017-11-08 12:00:26.265695: osd_op(client.35446285.1:13111 2.f5678681 rbd_data.1ce001238e1f29.0000000000000980 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~2101248] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:00:57.004651 osd.17 10.10.10.3:6840/10219 917 : cluster [WRN] slow request 30.731186 seconds old, received at 2017-11-08 12:00:26.273406: osd_op(client.35446285.1:13114 2.f5678681 rbd_data.1ce001238e1f29.0000000000000980 [set-alloc-hint object_size 4194304 write_size 4194304,write 2101248~2093056] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:01:05.792227 mon.0 10.10.10.1:6789/0 3193 : cluster [INF] pgmap v15377030: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 21472 B/s rd, 11062 kB/s wr, 32 op/s
2017-11-08 12:01:08.387704 mon.0 10.10.10.1:6789/0 3194 : cluster [INF] pgmap v15377031: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 15182 B/s rd, 7036 kB/s wr, 22 op/s
2017-11-08 12:01:10.224343 mon.0 10.10.10.1:6789/0 3195 : cluster [INF] pgmap v15377032: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 9451 B/s rd, 21948 kB/s wr, 49 op/s
2017-11-08 12:01:09.837610 osd.5 10.10.10.5:6804/6441 522 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.175451 secs
2017-11-08 12:01:09.837629 osd.5 10.10.10.5:6804/6441 523 : cluster [WRN] slow request 30.175451 seconds old, received at 2017-11-08 12:00:39.662094: osd_op(client.35446285.1:13277 2.29e937b6 rbd_data.1ce001238e1f29.00000000000009c4 [set-alloc-hint object_size 4194304 write_size 4194304,write 2109440~2084864] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:01:10.583132 osd.0 10.10.10.3:6820/7524 504 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.280926 secs
2017-11-08 12:01:10.583142 osd.0 10.10.10.3:6820/7524 505 : cluster [WRN] slow request 30.280926 seconds old, received at 2017-11-08 12:00:40.302174: osd_op(client.35446285.1:13296 2.59cc2f51 rbd_data.1ce001238e1f29.00000000000009cd [set-alloc-hint object_size 4194304 write_size 4194304,write 2097152~2097152] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:11.847463 mon.0 10.10.10.1:6789/0 3196 : cluster [INF] pgmap v15377033: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 13072 B/s rd, 33932 kB/s wr, 74 op/s
2017-11-08 12:01:13.080491 mon.0 10.10.10.1:6789/0 3197 : cluster [INF] pgmap v15377034: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 4506 kB/s wr, 10 op/s
2017-11-08 12:01:08.018595 osd.19 10.10.10.3:6828/8607 560 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.321048 secs
2017-11-08 12:01:08.018604 osd.19 10.10.10.3:6828/8607 561 : cluster [WRN] slow request 30.321048 seconds old, received at 2017-11-08 12:00:37.697490: osd_op(client.35446285.1:13211 2.54d6cd3f rbd_data.1ce001238e1f29.00000000000009a6 [set-alloc-hint object_size 4194304 write_size 4194304,write 2121728~2072576] snapc 0=[] ondisk+write e2538) currently waiting for subops from 10
2017-11-08 12:01:14.593851 osd.1 10.10.10.3:6816/6838 371 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.486134 secs
2017-11-08 12:01:14.593866 osd.1 10.10.10.3:6816/6838 372 : cluster [WRN] slow request 30.486134 seconds old, received at 2017-11-08 12:00:44.107663: osd_op(client.35446285.1:13376 2.9edac7f0 rbd_data.1ce001238e1f29.00000000000009ef [set-alloc-hint object_size 4194304 write_size 4194304,write 2555904~1638400] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:14.867755 mon.0 10.10.10.1:6789/0 3198 : cluster [INF] pgmap v15377035: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 28618 kB/s wr, 54 op/s
2017-11-08 12:01:16.123759 mon.0 10.10.10.1:6789/0 3199 : cluster [INF] pgmap v15377036: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 29886 kB/s wr, 55 op/s
2017-11-08 12:01:08.005623 osd.17 10.10.10.3:6840/10219 918 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 41.732161 secs
2017-11-08 12:01:08.005634 osd.17 10.10.10.3:6840/10219 919 : cluster [WRN] slow request 30.172267 seconds old, received at 2017-11-08 12:00:37.833300: osd_op(client.35438275.1:113348 2.76478833 rbd_data.7f3448238e1f29.0000000000001106 [set-alloc-hint object_size 4194304 write_size 4194304,write 1556480~16384] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:01:08.005639 osd.17 10.10.10.3:6840/10219 920 : cluster [WRN] slow request 30.171654 seconds old, received at 2017-11-08 12:00:37.833913: osd_op(client.35438275.1:113349 2.76478833 rbd_data.7f3448238e1f29.0000000000001106 [set-alloc-hint object_size 4194304 write_size 4194304,write 1884160~16384] snapc 0=[] ondisk+write e2538) currently waiting for subops from 6
2017-11-08 12:01:08.601858 osd.16 10.10.10.3:6836/9744 614 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.244355 secs
2017-11-08 12:01:08.601872 osd.16 10.10.10.3:6836/9744 615 : cluster [WRN] slow request 30.244355 seconds old, received at 2017-11-08 12:00:38.357434: osd_op(client.35446285.1:13235 2.3924b82 rbd_data.1ce001238e1f29.00000000000009b1 [set-alloc-hint object_size 4194304 write_size 4194304,write 2129920~2064384] snapc 0=[] ondisk+write e2538) currently waiting for subops from 10
2017-11-08 12:01:18.257618 mon.0 10.10.10.1:6789/0 3200 : cluster [INF] pgmap v15377037: 512 pgs: 512 active+clean; 1262 GB data, 2516 GB used, 42046 GB / 44563 GB avail; 2551 kB/s wr, 6 op/s
2017-11-08 12:01:20.610168 mon.0 10.10.10.1:6789/0 3201 : cluster [INF] pgmap v15377038: 512 pgs: 512 active+clean; 1262 GB data, 2517 GB used, 42046 GB / 44563 GB avail; 2385 B/s rd, 12232 kB/s wr, 34 op/s
2017-11-08 12:01:12.583361 osd.0 10.10.10.3:6820/7524 506 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 32.281126 secs
2017-11-08 12:01:12.583374 osd.0 10.10.10.3:6820/7524 507 : cluster [WRN] slow request 30.203524 seconds old, received at 2017-11-08 12:00:42.379776: osd_op(client.35446285.1:13338 2.d7825760 rbd_data.1ce001238e1f29.00000000000009e2 [set-alloc-hint object_size 4194304 write_size 4194304,write 0~2109440] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:12.583381 osd.0 10.10.10.3:6820/7524 508 : cluster [WRN] slow request 30.190817 seconds old, received at 2017-11-08 12:00:42.392483: osd_op(client.35446285.1:13341 2.d7825760 rbd_data.1ce001238e1f29.00000000000009e2 [set-alloc-hint object_size 4194304 write_size 4194304,write 2109440~2084864] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:15.583673 osd.0 10.10.10.3:6820/7524 509 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 33.203855 secs
2017-11-08 12:01:15.583678 osd.0 10.10.10.3:6820/7524 510 : cluster [WRN] slow request 30.606638 seconds old, received at 2017-11-08 12:00:44.976993: osd_op(client.35438275.1:113497 2.22c6b37d rbd_data.5bbf58238e1f29.0000000000000a3f [set-alloc-hint object_size 4194304 write_size 4194304,write 2101248~4096] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:16.583793 osd.0 10.10.10.3:6820/7524 511 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 34.203975 secs
2017-11-08 12:01:16.583804 osd.0 10.10.10.3:6820/7524 512 : cluster [WRN] slow request 30.931480 seconds old, received at 2017-11-08 12:00:45.652270: osd_op(client.32244331.1:7423474 2.de482b51 rbd_data.30fb8c2ae8944a.0000000000000a7e [set-alloc-hint object_size 4194304 write_size 4194304,write 2580480~4096] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:20.584194 osd.0 10.10.10.3:6820/7524 513 : cluster [WRN] 8 slow requests, 4 included below; oldest blocked for > 38.204361 secs
2017-11-08 12:01:20.584201 osd.0 10.10.10.3:6820/7524 514 : cluster [WRN] slow request 30.455761 seconds old, received at 2017-11-08 12:00:50.128376: osd_op(client.32244331.1:7423514 2.efa72d7d rbd_data.46b8ec238e1f29.000000000000968c [set-alloc-hint object_size 4194304 write_size 4194304,write 0~2097152] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:20.584205 osd.0 10.10.10.3:6820/7524 515 : cluster [WRN] slow request 30.414605 seconds old, received at 2017-11-08 12:00:50.169531: osd_op(client.32244331.1:7423517 2.efa72d7d rbd_data.46b8ec238e1f29.000000000000968c [set-alloc-hint object_size 4194304 write_size 4194304,write 2097152~2097152] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:20.584209 osd.0 10.10.10.3:6820/7524 516 : cluster [WRN] slow request 30.243897 seconds old, received at 2017-11-08 12:00:50.340240: osd_op(client.35446285.1:13405 2.23c6b151 rbd_data.1ce001238e1f29.00000000000009fd [set-alloc-hint object_size 4194304 write_size 4194304,write 0~3145728] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:20.584216 osd.0 10.10.10.3:6820/7524 517 : cluster [WRN] slow request 30.224683 seconds old, received at 2017-11-08 12:00:50.359454: osd_op(client.35446285.1:13408 2.23c6b151 rbd_data.1ce001238e1f29.00000000000009fd [set-alloc-hint object_size 4194304 write_size 4194304,write 3145728~1048576] snapc 0=[] ondisk+write e2538) currently waiting for subops from 4
2017-11-08 12:01:22.274933 mon.0 10.10.10.1:6789/0 3202 : cluster [INF] pgmap v15377039: 512 pgs: 512 active+clean; 1262 GB data, 2517 GB used, 42046 GB / 44563 GB avail; 2047 B/s rd, 13311 kB/s wr, 30 op/s
2017-11-08 12:01:23.741813 mon.0 10.10.10.1:6789/0 3203 : cluster [INF] pgmap v15377040: 512 pgs: 512 active+clean; 1262 GB data, 2517 GB used, 42046 GB / 44563 GB avail; 4893 kB/s wr, 5 op/s

Attached is the message that I then get on the VMs.
 

Attachments

  • prox.PNG (7.4 KB, screenshot of the error message shown in the VMs)
Code:
ceph -s
    cluster 393e3182-c345-4d2b-a746-42999849e3e3
     health HEALTH_OK
     monmap e5: 5 mons at {0=10.10.10.1:6789/0,1=10.10.10.2:6789/0,2=10.10.10.3:6789/0,3=10.10.10.4:6789/0,pve5=10.10.10.5:6789/0}
            election epoch 710, quorum 0,1,2,3,4 0,1,2,3,pve5
     osdmap e2538: 24 osds: 24 up, 24 in
            flags sortbitwise,require_jewel_osds
      pgmap v15379059: 512 pgs, 1 pools, 1274 GB data, 324 kobjects
            2539 GB used, 42024 GB / 44563 GB avail
                 512 active+clean
  client io 173 kB/s rd, 1081 kB/s wr, 5 op/s rd, 132 op/s wr
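To see where the slow requests pile up while the restore is running, the cluster log and health detail can be followed live; these are standard Ceph (Jewel) commands, and the OSD id below is only an example:
Code:
# follow the cluster log as the slow request warnings appear
ceph -w
# show which OSDs currently have blocked/slow requests
ceph health detail
# on the node hosting the OSD: dump its operations in flight (osd.17 as an example)
ceph daemon osd.17 dump_ops_in_flight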
 
Your root filesystem is under heavy load.
I assume it is the monitors that are generating the load?
 
atop, for example, or iotop.
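A short sketch of how the load can be pinned down with those tools (the sample interval is arbitrary):
Code:
# show only processes that are actually doing I/O, accumulated since start
iotop -o -P -a
# atop with a 2 second sample interval; press 'd' inside atop to focus on disk activity
atop 2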
 
You are doing a backup.
KVM or LXC?
What kind of storage is the source and what is the target?
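For reference, the storage definitions and their current usage can be read on any node with standard PVE commands:
Code:
# defined storages (type, pool/path, content types)
cat /etc/pve/storage.cfg
# current status and usage of all storages
pvesm status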
 
What one can already see is that your rootfs is at 10% utilization.
That is not bad in itself, but it does indicate that your disk cannot keep up.
What does your setup look like exactly: HW, network, ...?
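To quantify that, the per-device utilization and latency can be watched during a restore, for example with iostat from the sysstat package (interval and count are only examples):
Code:
# extended per-device statistics, every 2 seconds, 10 samples
# watch the %util and await columns of the OSD disks and the root device
iostat -x 2 10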
 
Here are the pvereport outputs including lspci.
pve2 = pve1; apart from 16 GB of RAM they are identical.
report.txt is from my node 5 (pve5).
 

Attachments

  • report.txt (32.4 KB)
  • reportpve2.txt (45 KB)
  • reportpve3.txt (29.9 KB)
  • reportpve4.txt (26.4 KB)
You are using a RAID controller, which explains this behavior.
The OSD must have complete control over the disk.
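A quick check of how the disks are presented to the OSDs (both commands are available on a Jewel-era node; OSD disks should show up as individual drives, not as one big logical RAID volume):
Code:
# how the block devices are exposed by the controller
lsblk -o NAME,MODEL,SIZE,TYPE,MOUNTPOINT
# which devices the OSDs actually sit on
ceph-disk list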
 
The problem was caused by krbd in Ceph and the fact that the HP servers had controller cache installed without a battery.
I had tested it with the no-battery write cache setting enabled and got fsync values around 3000.
Now I have installed cache with a battery and it stays stable.
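For anyone checking the same thing on HP Smart Array controllers: the cache and battery/capacitor state can be queried with the Smart Storage CLI. Depending on the controller generation the tool is called ssacli, hpssacli or hpacucli, so the exact binary name here is an assumption:
Code:
# overall controller status including cache and battery/capacitor state
ssacli ctrl all show status
# detailed configuration (cache ratio, write cache settings, attached drives)
ssacli ctrl all show config detail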
 
