Ceph VM backup and restore on PVE 4.1 very slow

jandoe88

New Member
Mar 16, 2016
Hello. I hope someone can help me. Backup and restore in PVE 4.1 are performing very badly on my Ceph cluster (Hammer). Other nodes can write around 80 MB/s to Ceph, but a restore from local storage or NFS to Ceph reaches at most 18 MB/s, and a backup to local storage at most 40 MB/s. Local storage I/O is around 100 MB/s.
VMs on Ceph storage can write 40 MB/s with writeback cache enabled, otherwise 18 MB/s.
Does anybody have an idea why it is performing so badly?

Thanks in advance!
Jan

pveversion -v
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-39
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-9
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-24
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie
 

jandoe88

I ran a few tests and found new details:
If I mount the Ceph Hammer cluster on PVE 4.1 with the built-in Ceph client, I get an acceptable throughput of 90 MB/s.
If I mount the Ceph Infernalis cluster on PVE 4.1 with the built-in Ceph client, I get a connection error.
If I mount the Ceph Hammer cluster on PVE 4.1 with the Ceph Hammer client (pveceph install -version hammer), I get an unacceptable throughput of 20 MB/s.
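One thing worth ruling out is a mismatch between the client version the tools report and the librbd that qemu actually links against. A minimal check (the paths are the usual Debian/PVE defaults and may differ on your nodes; both commands degrade gracefully if a binary is missing):

```shell
# Report the userspace Ceph client version on this node
ceph --version 2>/dev/null || echo "ceph CLI not installed"
# Show which librbd/librados the qemu binary links against
# (/usr/bin/kvm is the usual PVE path; adjust if different)
ldd /usr/bin/kvm 2>/dev/null | grep -i 'rbd\|rados' || echo "kvm binary not found or no librbd linked"
```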
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,
If I mount the Ceph Hammer cluster on PVE 4.1 with the built-in Ceph client, I get an acceptable throughput of 90 MB/s.
If I mount the Ceph Infernalis cluster on PVE 4.1 with the built-in Ceph client, I get a connection error.
If I mount the Ceph Hammer cluster on PVE 4.1 with the Ceph Hammer client (pveceph install -version hammer), I get an unacceptable throughput of 20 MB/s.
Did you compare the CRUSH maps of these two installations?
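For reference, the maps can be compared by decompiling them with crushtool and diffing the text output; client-visible differences usually show up in the tunable lines. The snippet below uses made-up stand-in files to show the workflow, since the real ones would come from `ceph osd getcrushmap` on each cluster:

```shell
# On each real cluster you would first run:
#   ceph osd getcrushmap -o map.bin && crushtool -d map.bin -o map.txt
# The two files below are illustrative stand-ins for the decompiled maps.
cat > crush-builtin.txt <<'EOF'
tunable choose_local_tries 0
tunable chooseleaf_vary_r 0
EOF
cat > crush-hammer.txt <<'EOF'
tunable choose_local_tries 0
tunable chooseleaf_vary_r 1
EOF
# diff exits non-zero when the maps differ, hence '|| true'
diff crush-builtin.txt crush-hammer.txt || true
```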
 

jandoe88

Hi Wolfgang. I think you speak German? Thanks for your reply. Unfortunately, comparing the crush maps doesn't get me anywhere. So let me explain it again in German, maybe it's easier to understand that way:

I have two Ceph test clusters:
The first one was installed manually, as an Infernalis cluster with 3 nodes on Debian 8.
The other one runs on PVE 4.1 with Ceph Hammer (pveceph install -version hammer), also with 3 nodes.

I also have a Proxmox PVE 4.1 node running various VMs. When I attach the Ceph Hammer storage there via RBD, I get a throughput of up to 100 MB/s, which is good. When I try to attach the Ceph Infernalis storage, I get a connection error. That is presumably because the Ceph version on the Proxmox node (0.80.7-2+deb8u1) is too old.
If I now apply the Ceph update to Hammer on the Proxmox node (pveceph install -version hammer), I can attach both Ceph clusters via RBD. But then I only get a throughput of at most 20 MB/s on both. What could be the reason?

Regards, Jan
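For context, attaching an external cluster via RBD goes through an entry in /etc/pve/storage.cfg roughly like the following (all values here are illustrative placeholders, not taken from the clusters above):

```text
rbd: ceph-hammer
        monhost 192.168.0.1;192.168.0.2;192.168.0.3
        pool rbd
        username admin
        content images
```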
 

wolfgang

This is strange, because these packages come directly from Ceph;
we don't change anything in them.
I will run some tests tomorrow and tell you if I find something.

Did you try to install Infernalis on PVE?
 
Apr 30, 2012
Munich
Hi,

I have the same problem here.

4 Node PVE 4.1 cluster with 3 Ceph nodes and one node for the VMs.
All nodes have ceph hammer (0.94.6-1~bpo80+1) installed.

The restore of a 720 GB VM took 8 hours.
After downgrading Ceph on the VM node to Firefly (0.80.7-2+deb8u1), it took just under 2 hours.

I can't see a notable performance difference between the two versions within the VM.



Kind regards
Frank
 
While I haven't tested other versions, I am in a similar situation speed-wise running 4.1.

Ceph rides on a bonded 10G network with 2x10G on each host node and 4x10G on each Ceph node.

The NFS share is a RAID 6 on the same network hardware, but using a different IP range.

VM system performance seems fair.

Backup of VM running on Ceph FS to Local Drive on host node
INFO: creating archive '/home/backups//dump/vzdump-qemu-100-2016_04_06-21_00_01.vma.lzo'
INFO: started backup task '9533c73c-bbcd-43f9-87bb-866223afe0c0'
INFO: status: 0% (127205376/34359738368), sparse 0% (6217728), duration 3, 42/40 MB/s
INFO: status: 1% (353501184/34359738368), sparse 0% (143503360), duration 9, 37/14 MB/s
INFO: status: 2% (722337792/34359738368), sparse 1% (353927168), duration 17, 46/19 MB/s
INFO: status: 3% (1034551296/34359738368), sparse 1% (504332288), duration 24, 44/23 MB/s
INFO: status: 4% (1402142720/34359738368), sparse 1% (568889344), duration 33, 40/33 MB/s
INFO: status: 5% (1748500480/34359738368), sparse 1% (595742720), duration 42, 38/35 MB/s
INFO: status: 6% (2068971520/34359738368), sparse 1% (600281088), duration 53, 29/28 MB/s
INFO: status: 7% (2426535936/34359738368), sparse 1% (601096192), duration 62, 39/39 MB/s
INFO: status: 8% (2773483520/34359738368), sparse 1% (601735168), duration 71, 38/38 MB/s
INFO: status: 9% (3092840448/34359738368), sparse 1% (605396992), duration 79, 39/39 MB/s

Backup of VM running on NFS Mount to Local Drive on host node
INFO: creating archive '/home/backups//dump/vzdump-qemu-105-2016_04_06-21_14_26.vma.lzo'
INFO: started backup task 'a479246d-0679-4ded-a72d-8ad5593bbe82'
INFO: status: 0% (916848640/506806140928), sparse 0% (151097344), duration 3, 305/255 MB/s
INFO: status: 1% (5180882944/506806140928), sparse 0% (502284288), duration 32, 147/134 MB/s
INFO: status: 2% (10228727808/506806140928), sparse 0% (778022912), duration 79, 107/101 MB/s
INFO: status: 3% (15314583552/506806140928), sparse 0% (1188638720), duration 112, 154/141 MB/s
INFO: status: 4% (20350369792/506806140928), sparse 0% (1465102336), duration 146, 148/139 MB/s
INFO: status: 5% (25384779776/506806140928), sparse 0% (1734115328), duration 202, 89/85 MB/s
INFO: status: 6% (30518935552/506806140928), sparse 0% (2144948224), duration 260, 88/81 MB/s
INFO: status: 7% (35528769536/506806140928), sparse 0% (2421731328), duration 324, 78/73 MB/s
INFO: status: 8% (40550006784/506806140928), sparse 0% (2734796800), duration 398, 67/63 MB/s
INFO: status: 9% (45688553472/506806140928), sparse 0% (3035164672), duration 435, 138/130 MB/s

Backup of VM running on local drive to Local Drive on host node
Running as unit 101.scope.
INFO: started backup task '694f787a-85c9-4fc7-975d-19fb698ebc7f'
INFO: status: 3% (1090912256/34359738368), sparse 1% (524144640), duration 3, 363/188 MB/s
INFO: status: 4% (1639579648/34359738368), sparse 1% (633503744), duration 6, 182/146 MB/s
INFO: status: 6% (2155216896/34359738368), sparse 1% (638238720), duration 9, 171/170 MB/s
INFO: status: 7% (2683568128/34359738368), sparse 1% (639234048), duration 12, 176/175 MB/s
INFO: status: 9% (3134717952/34359738368), sparse 2% (759263232), duration 15, 150/110 MB/s
INFO: status: 10% (3751673856/34359738368), sparse 2% (817344512), duration 18, 205/186 MB/s
 
Dec 19, 2012
Hi.
Same here -- I just created a cluster of two nodes ( https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster )
Afterwards I wanted to restore a VM on one of the nodes. It took hours ...

Here is the log-file:
Code:
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-108-2016_04_18-11_51_47.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp10785.fifo - /var/tmp/vzdumptmp10785
CFG: size: 519 name: qemu-server.conf
DEV: dev_id=1 size: 160055754752 devname: drive-ide0
CTIME: Mon Apr 18 11:51:50 2016
Formatting '/var/lib/vz/images/100/vm-100-disk-1.vmdk', fmt=vmdk size=160055754752 compat6=off
libust[10790/10790]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
new volume ID is 'local:100/vm-100-disk-1.vmdk'
map 'drive-ide0' to '/var/lib/vz/images/100/vm-100-disk-1.vmdk' (write zeros = 0)
libust[10788/10788]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
After that it's VERY slow ... ~4000 s for 1% ... that's not normal. What to do?
I finally terminated it:
Code:

progress 1% (read 1600585728 bytes, duration 1851 sec)
progress 2% (read 3201171456 bytes, duration 4001 sec)
temporary volume 'local:100/vm-100-disk-1.vmdk' sucessfuly removed
TASK ERROR: command 'lzop -d -c /var/lib/vz/dump/vzdump-qemu-108-2016_04_18-11_51_47.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp4925.fifo - /var/tmp/vzdumptmp4925' failed: interrupted by signal
Further information:
Code:
pveversion -v
proxmox-ve: 4.1-45 (running kernel: 4.4.6-1-pve)
pve-manager: 4.1-30 (running version: 4.1-30/9e199213)
pve-kernel-4.4.6-1-pve: 4.4.6-45
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-69
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-13
pve-container: 1.0-59
pve-firewall: 2.0-24
pve-ha-manager: 1.0-27
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
fence-agents-pve: 4.0.20-1
One difference: "fence-agents-pve: 4.0.20-1" shows up only on the second node, but not on the first node where I created the cluster. Is that correct?

Status is ok!
Code:
Quorum information
------------------
Date:  Mon Apr 18 15:57:05 2016
Quorum provider:  corosync_votequorum
Nodes:  2
Node ID:  0x00000002
Ring ID:  52
Quorate:  Yes

Votequorum information
----------------------
Expected votes:  2
Highest expected: 2
Total votes:  2
Quorum:  2
Flags:  Quorate
[later ...] Sorry -- probably the wrong thread. My problem has nothing to do with Ceph ... I just installed
a two-node cluster on 4.1. So: start a new thread, or leave it here?
 

gosha

Active Member
Oct 20, 2014
Russia
Hi!

The same here...

Very low backup speed:
5 nodes (24 OSDs), PVE 4.1-22, Ceph version 0.94.6 (over a 10 Gbit/s network).

Code:
ceph status
    cluster 820f952d-a3ef-44aa-b2d4-95ac9747173e
     health HEALTH_OK
     monmap e9: 5 mons at {0=192.168.110.1:6789/0,1=192.168.110.2:6789/0,2=192.168.110.3:6789/0,3=192.168.110.4:6789/0,4=192.168.110.5:6789/0}
            election epoch 3326, quorum 0,1,2,3,4 0,1,2,3,4
     osdmap e27430: 24 osds: 24 up, 24 in
      pgmap v22735852: 1024 pgs, 1 pools, 3039 GB data, 763 kobjects
            9127 GB used, 13097 GB / 22224 GB avail
                1024 active+clean
  client io 40792 kB/s rd, 78827 kB/s wr, 423 op/s

VM running on Ceph -> backup via NFS (over the 10 Gbit/s network) to another VM running on Ceph on another node:

Code:
INFO: starting new backup job: vzdump 107 --compress lzo --remove 0 --storage NFS_STOR --mode stop --node cn2
INFO: Starting Backup of VM 107 (qemu)
INFO: status = running
INFO: update VM 107: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: stopping vm
INFO: creating archive '/mnt/pve/NFS_STOR/dump/vzdump-qemu-107-2016_04_18-19_06_12.vma.lzo'
INFO: starting kvm to execute backup task
Running as unit 107.scope.
INFO: started backup task 'dc2d6c9c-bc1f-4191-a443-e14609c3f6e2'
INFO: resume VM
INFO: status: 0% (92274688/1288490188800), sparse 0% (8982528), duration 3, 30/27 MB/s
INFO: status: 1% (12885491712/1288490188800), sparse 0% (6628925440), duration 263, 49/23 MB/s
INFO: status: 2% (25794183168/1288490188800), sparse 0% (12262682624), duration 541, 46/26 MB/s
INFO: status: 3% (38687473664/1288490188800), sparse 1% (13098127360), duration 1007, 27/25 MB/s
INFO: status: 4% (51553894400/1288490188800), sparse 1% (13945896960), duration 1458, 28/26 MB/s
INFO: status: 5% (64438796288/1288490188800), sparse 1% (14791360512), duration 1895, 29/27 MB/s
INFO: status: 6% (77315309568/1288490188800), sparse 1% (15800565760), duration 2359, 27/25 MB/s
INFO: status: 7% (90207420416/1288490188800), sparse 1% (16602714112), duration 2814, 28/26 MB/s
INFO: status: 8% (103088652288/1288490188800), sparse 1% (17519132672), duration 3256, 29/27 MB/s
INFO: status: 9% (115990331392/1288490188800), sparse 1% (18466279424), duration 3716, 28/25 MB/s
INFO: status: 10% (128853082112/1288490188800), sparse 1% (19289677824), duration 4181, 27/25 MB/s
With the previous Ceph version, the same backup ran at about 90 MB/s.
:(

--
Best regards!
Gosha
 

fabian

Proxmox Staff Member
Jan 7, 2016

the patches were included in pve-qemu-kvm >= 2.6.1-1, which is available in both pve-no-subscription and pve-enterprise (the latter since yesterday).
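To check whether a given node already has the fix, compare the installed version against 2.6.1-1. On a PVE node `dpkg --compare-versions` is the canonical tool; the sketch below uses `sort -V` so it runs anywhere, with the installed version hard-coded to the example value from one of the pveversion listings earlier in the thread:

```shell
# On a real node: installed=$(dpkg-query -W -f='${Version}' pve-qemu-kvm)
installed="2.5-13"   # example value, taken from a pveversion -v output above
required="2.6.1-1"   # first release containing the backup-speed patches
# sort -V orders version strings; if 'required' sorts first, installed >= required
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
    echo "pve-qemu-kvm $installed already contains the fix"
else
    echo "pve-qemu-kvm $installed predates the fix, upgrade to >= $required"
fi
```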
 

iva-a-an

New Member
Apr 11, 2017
Hi there!

Could anybody point me to a way to identify the bottleneck behind a performance issue during a rollback from a snapshot, please?
I'm using Ceph as RBD storage and I take snapshots via the Proxmox interface.

pveversion
pve-manager/4.4-13/7ea56165 (running kernel: 4.4.59-1-pve)

ceph version
ceph version 10.2.7

Here are some details from the cluster:

################ CEPH ##################
ceph -s
cluster ae366377-d1fe-4550-99ce-474debcb2491
health HEALTH_OK
monmap e3: 3 mons at {0=10.1.1.1:6789/0,1=10.1.1.2:6789/0,2=10.1.1.3:6789/0}
election epoch 50, quorum 0,1,2 0,1,2
osdmap e1161: 15 osds: 15 up, 15 in
flags sortbitwise,require_jewel_osds
pgmap v1217269: 1024 pgs, 1 pools, 1306 GB data, 382 kobjects
4531 GB used, 4406 GB / 8937 GB avail
1024 active+clean
client io 52893 B/s rd, 10606 kB/s wr, 12 op/s rd, 17 op/s wr


ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.72791 root default
-2 2.72743 host pve1
1 0.54549 osd.1 up 1.00000 1.00000
2 0.54549 osd.2 up 1.00000 1.00000
3 0.54549 osd.3 up 1.00000 1.00000
4 0.54549 osd.4 up 1.00000 1.00000
0 0.54549 osd.0 up 1.00000 1.00000
-3 3.27304 host pve2
5 0.54549 osd.5 up 1.00000 1.00000
7 1.09109 osd.7 up 1.00000 1.00000
9 0.54549 osd.9 up 1.00000 1.00000
11 0.54549 osd.11 up 1.00000 1.00000
13 0.54549 osd.13 up 1.00000 1.00000
-4 2.72743 host pve3
8 0.54549 osd.8 up 1.00000 1.00000
10 0.54549 osd.10 up 1.00000 1.00000
12 0.54549 osd.12 up 1.00000 1.00000
14 0.54549 osd.14 up 1.00000 1.00000
6 0.54549 osd.6 up 1.00000 1.00000



######################## PVEPERF ########################

for id in {6,8,10,12,14}; do echo "osd-"$id; pveperf /var/lib/ceph/osd/ceph-$id; echo "======================="; done
osd-6

CPU BOGOMIPS: 128004.24
REGEX/SECOND: 1112345
HD SIZE: 558.61 GB (/dev/sdc1)
BUFFERED READS: 125.18 MB/sec
AVERAGE SEEK TIME: 7.87 ms
FSYNCS/SECOND: 5505.97
DNS EXT: 181.07 ms
DNS INT: 163.58 ms
=======================
osd-8
CPU BOGOMIPS: 128004.24
REGEX/SECOND: 1138294
HD SIZE: 558.61 GB (/dev/sdd1)
BUFFERED READS: 155.83 MB/sec
AVERAGE SEEK TIME: 8.13 ms
FSYNCS/SECOND: 5507.22
DNS EXT: 200.15 ms
DNS INT: 163.01 ms
=======================
osd-10
CPU BOGOMIPS: 128004.24
REGEX/SECOND: 1148693
HD SIZE: 558.61 GB (/dev/sde1)
BUFFERED READS: 127.77 MB/sec
AVERAGE SEEK TIME: 9.36 ms
FSYNCS/SECOND: 5402.82
DNS EXT: 188.26 ms
DNS INT: 162.46 ms
=======================
osd-12
CPU BOGOMIPS: 128004.24
REGEX/SECOND: 1007628
HD SIZE: 558.61 GB (/dev/sdf1)
BUFFERED READS: 130.19 MB/sec
AVERAGE SEEK TIME: 11.57 ms
FSYNCS/SECOND: 6059.89
DNS EXT: 199.52 ms
DNS INT: 163.01 ms
=======================
osd-14
CPU BOGOMIPS: 128004.24
REGEX/SECOND: 1217641
HD SIZE: 558.61 GB (/dev/sdg1)
BUFFERED READS: 148.83 MB/sec
AVERAGE SEEK TIME: 7.22 ms
FSYNCS/SECOND: 6247.26
DNS EXT: 200.59 ms
DNS INT: 162.25 ms
=======================



######################### IOSTAT ###########################
iostat -xm 5
Linux 4.4.59-1-pve (pve3) 06/06/2017 _x86_64_ (24 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
1.09 0.00 0.56 0.33 0.00 98.03

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 1.45 15.99 4.69 12.79 0.02 0.46 55.96 0.01 0.77 0.65 0.81 0.33 0.57
sdb 0.00 0.31 5.20 20.68 0.02 1.93 154.19 0.05 1.74 0.09 2.16 0.31 0.81
sdc 0.03 1.02 5.91 5.57 0.53 0.47 178.32 0.15 13.01 21.26 4.27 1.59 1.82
sdd 0.03 1.04 6.77 5.81 0.61 0.49 178.75 0.13 10.55 17.97 1.89 1.23 1.55
sde 0.03 1.05 6.32 5.61 0.56 0.46 174.76 0.14 12.05 21.10 1.84 1.53 1.83
sdf 0.03 1.00 6.29 5.56 0.55 0.47 175.13 0.13 11.25 19.09 2.38 1.54 1.82
sdg 0.03 1.04 6.65 5.41 0.59 0.49 182.65 0.13 10.76 17.18 2.86 1.27 1.53
dm-0 0.00 0.00 0.11 26.43 0.00 0.45 34.99 0.01 0.31 7.97 0.28 0.14 0.37
dm-1 0.00 0.00 1.81 1.96 0.01 0.01 8.00 1.41 375.09 1.33 719.84 0.25 0.09
dm-2 0.00 0.00 0.02 0.40 0.00 0.00 8.00 0.00 0.08 0.61 0.06 0.09 0.00

avg-cpu: %user %nice %system %iowait %steal %idle
2.76 0.00 1.00 4.54 0.00 91.70

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 17.80 8.40 12.60 0.02 0.19 19.68 0.01 0.34 0.67 0.13 0.34 0.72
sdb 0.00 0.20 10.40 56.60 0.04 5.75 177.15 0.09 1.29 0.08 1.51 0.35 2.32
sdc 0.00 0.20 3.20 5.60 0.03 0.31 79.30 0.02 1.91 2.50 1.57 1.91 1.68
sdd 0.00 1.20 4.20 3.80 0.01 1.07 276.43 0.03 3.30 3.62 2.95 2.80 2.24
sde 6.80 0.40 404.00 7.00 13.84 1.08 74.35 35.87 88.15 89.63 2.74 2.42 99.44
sdf 0.00 0.80 5.60 5.40 0.02 1.61 304.31 0.04 3.49 4.14 2.81 2.69 2.96
sdg 0.00 0.60 4.00 9.00 0.01 1.54 244.68 0.25 19.02 2.20 26.49 1.72 2.24
dm-0 0.00 0.00 0.00 29.80 0.00 0.18 12.62 0.00 0.05 0.00 0.05 0.05 0.16
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.60 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00

avg-cpu: %user %nice %system %iowait %steal %idle
2.72 0.00 0.85 3.60 0.00 92.83

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 18.20 0.00 12.80 0.00 0.18 29.50 0.00 0.12 0.00 0.12 0.12 0.16
sdb 0.00 1.60 0.00 42.00 0.00 9.89 482.06 0.18 4.27 0.00 4.27 0.80 3.36
sdc 0.00 0.00 1.80 3.80 0.01 2.43 889.93 0.03 5.29 6.22 4.84 2.43 1.36
sdd 0.00 0.20 0.60 3.00 0.00 0.18 101.72 0.00 0.89 5.33 0.00 0.89 0.32
sde 3.00 0.20 327.80 2.20 31.91 1.05 204.52 20.44 61.93 62.33 3.64 3.00 98.88
sdf 0.00 0.00 1.80 5.60 0.01 2.81 779.00 0.03 4.43 7.11 3.57 2.05 1.52
sdg 0.00 0.00 1.00 3.40 0.00 2.22 1034.23 0.02 5.27 7.20 4.71 2.18 0.96
dm-0 0.00 0.00 0.00 30.80 0.00 0.18 12.21 0.00 0.16 0.00 0.16 0.05 0.16
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.20 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00


Thanks!
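One observation on the iostat output above: sde sits near 99% util while the other disks are almost idle, and per the pveperf listing /dev/sde1 backs osd-10, so a single saturated OSD looks like the bottleneck. A guarded sketch for mapping busy devices to OSDs and watching per-OSD latency (both commands are skipped on machines without Ceph installed):

```shell
# Map each OSD data directory to its backing device (df shows the pairing);
# guarded so the block is a no-op on machines without Ceph installed.
if [ -d /var/lib/ceph/osd ]; then
    df -h /var/lib/ceph/osd/ceph-* 2>/dev/null || true
fi
if command -v ceph >/dev/null 2>&1; then
    # per-OSD commit/apply latency; a saturated OSD should stand out here
    ceph osd perf
fi
```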
 
