Extremely slow PVE - Ceph - PBS datacenter. Looking for advice.

Hello everyone, I have a datacenter composed as follows:

2 x HPE ProLiant DL360 Gen8
1 x HPE ProLiant DL180 Gen10

1 x SSD (Samsung 870 EVO) as the system disk on each server

I have configured Ceph with 2 pools:
SSD, where the VMs are hosted
HDD, where the bulk storage is hosted

The 2 pools are composed as follows:
27 x 3.5" enterprise HDDs (Seagate Exos or WD Gold), 12 to 18 TB each
3 x Samsung PM1735 6.4 TB SSDs

On the networking side, each server has:
1 x 10 GbE on the LAN side
1 x 10 GbE for the Ceph monitor network
2 x 40 GbE configured in OVS for the Ceph cluster network (mesh)

Proxmox Backup Server is configured as a virtual machine on a QNAP NAS:
- TS-h2477XU-RP - ZFS storage
- 1 x 10 GbE on the LAN side

Data transfer (even within the virtual machines) is particularly slow (40-60 MB/s), and the same goes for backup and restore.


My need is a secure production environment that gives me relatively fast backups and restores.
Is this behaviour consistent with my hardware, or is something wrong?
How can I dramatically improve performance?
I am attaching some screenshots... thanks!
 

Attachments

  • 2022-05-11 08_39_46-soserver1 - Proxmox Virtual Environment.jpg
  • 2022-05-10 19_15_55-pbs1 - Proxmox Backup Server.jpg
You must start benchmarking every part of the system on its own and find which one is causing the bottleneck. Leave that QNAP aside until you get the expected performance out of the cluster itself.

- Start by checking network performance with tools like iperf. Test every VLAN and network involved in cluster communication and Ceph.
- Then benchmark *each* drive on its own using dd, fio or bonnie. They should all give you similar performance; any drive that performs below expectations will drag down the whole pool.
- Then benchmark Ceph with rados bench. Create a dedicated benchmarking pool for this, backed by the same OSDs your production pools use.

If you are getting proper numbers up to this point, then benchmark network, disk and CPU from inside the VMs. Some example commands are sketched below.
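For reference, a starting point could look something like this (the target IP and the benchmark pool name are placeholders; the rados pool is created only for the test and deleted afterwards, which requires mon_allow_pool_delete to be enabled):

Code:
# network: run "iperf3 -s" on one node, then from another node over the Ceph network:
iperf3 -c 10.10.10.1 -t 30

# Ceph: create a throwaway pool (give it a CRUSH rule if you want to target only the SSD or only the HDD class)
ceph osd pool create bench 64 64
rados bench -p bench 60 write --no-cleanup   # sequential 4M writes for 60s
rados bench -p bench 60 seq                  # sequential reads of the objects written above
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it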
 
iperf gives the expected results, and even the individual OSDs reach the expected numbers.
What benchmark commands can I run that are not destructive to the pools or the data?

Here are the test results for the two pools (I hope they are suitable).
Thanks!!


Code:
root@soserver2:~# fio -ioengine=rbd -direct=1 -name=test -bs=4M -iodepth=16 -rw=write -pool=ceph_hdd_storage -runtime=60 -rbdname=vm-210-disk-1
test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=rbd, iodepth=16
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=664MiB/s][w=166 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=997521: Thu Mar 24 13:26:21 2022
  write: IOPS=183, BW=733MiB/s (769MB/s)(43.1GiB/60199msec); 0 zone resets
    slat (usec): min=1124, max=12176, avg=1822.57, stdev=523.21
    clat (msec): min=12, max=872, avg=85.35, stdev=93.08
     lat (msec): min=13, max=874, avg=87.18, stdev=93.06
    clat percentiles (msec):
     |  1.00th=[   14],  5.00th=[   16], 10.00th=[   18], 20.00th=[   21],
     | 30.00th=[   26], 40.00th=[   33], 50.00th=[   44], 60.00th=[   64],
     | 70.00th=[  102], 80.00th=[  140], 90.00th=[  209], 95.00th=[  275],
     | 99.00th=[  439], 99.50th=[  493], 99.90th=[  617], 99.95th=[  760],
     | 99.99th=[  844]
   bw (  KiB/s): min=278528, max=1196032, per=100.00%, avg=752466.92, stdev=163590.99, samples=120
   iops        : min=   68, max=  292, avg=183.70, stdev=39.94, samples=120
  lat (msec)   : 20=17.71%, 50=36.81%, 100=15.31%, 250=23.49%, 500=6.21%
  lat (msec)   : 750=0.41%, 1000=0.05%
  cpu          : usr=32.90%, sys=0.91%, ctx=7100, majf=0, minf=210174
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,11037,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=733MiB/s (769MB/s), 733MiB/s-733MiB/s (769MB/s-769MB/s), io=43.1GiB (46.3GB), run=60199-60199msec

Disk stats (read/write):
    dm-1: ios=1/1511, merge=0/0, ticks=4/216, in_queue=220, util=1.04%, aggrios=29/1142, aggrmerge=0/731, aggrticks=7/666, aggrin_queue=672, aggrutil=1.23%
  sda: ios=29/1142, merge=0/731, ticks=7/666, in_queue=672, util=1.23%

Code:
root@soserver2:~# fio -ioengine=rbd -direct=1 -name=test -bs=4M -iodepth=16 -rw=write -pool=ceph_ssd_storage -runtime=60 -rbdname=vm-201-disk-0
test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=rbd, iodepth=16
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=957MiB/s][w=239 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=998956: Thu Mar 24 13:27:46 2022
  write: IOPS=229, BW=919MiB/s (964MB/s)(53.9GiB/60056msec); 0 zone resets
    slat (usec): min=1345, max=11061, avg=1914.70, stdev=484.24
    clat (msec): min=16, max=243, avg=67.65, stdev= 9.09
     lat (msec): min=19, max=245, avg=69.56, stdev= 9.14
    clat percentiles (msec):
     |  1.00th=[   50],  5.00th=[   55], 10.00th=[   58], 20.00th=[   61],
     | 30.00th=[   63], 40.00th=[   65], 50.00th=[   68], 60.00th=[   70],
     | 70.00th=[   72], 80.00th=[   75], 90.00th=[   80], 95.00th=[   83],
     | 99.00th=[   91], 99.50th=[   95], 99.90th=[  104], 99.95th=[  124],
     | 99.99th=[  213]
   bw (  KiB/s): min=770048, max=1097728, per=99.98%, avg=941260.80, stdev=68674.00, samples=120
   iops        : min=  188, max=  268, avg=229.80, stdev=16.77, samples=120
  lat (msec)   : 20=0.04%, 50=1.13%, 100=98.66%, 250=0.17%
  cpu          : usr=43.43%, sys=1.07%, ctx=9335, majf=0, minf=305600
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,13803,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=919MiB/s (964MB/s), 919MiB/s-919MiB/s (964MB/s-964MB/s), io=53.9GiB (57.9GB), run=60056-60056msec

Disk stats (read/write):
    dm-1: ios=0/1190, merge=0/0, ticks=0/48, in_queue=48, util=0.89%, aggrios=24/623, aggrmerge=0/571, aggrticks=2/41, aggrin_queue=42, aggrutil=1.05%
  sda: ios=24/623, merge=0/571, ticks=2/41, in_queue=42, util=1.05%
 
Need some more information to help...

What do you have set up pool-wise on the disks? Replication? Erasure coding?
How are you mounting the storage on the backup server?
Is Ceph in a fully healthy state?
If you run tests locally on a box running Proxmox/Ceph, what speeds do you get?
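For example, the output of these standard commands (run on any cluster node) would cover most of the pool and health questions:

Code:
ceph -s                   # overall health plus any ongoing recovery/rebalance
ceph osd pool ls detail   # per pool: replicated vs. erasure coded, size/min_size, pg_num, crush rule
ceph osd tree             # OSD layout per host and device class
ceph df                   # raw and per-pool usage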
 
I recently started this adventure with Proxmox and Ceph and I'm not sure I understand what is being asked of me... I am attaching the configuration screens; if the requested information is not there, would you be kind enough to give me more specific instructions?
It would be nice to have a document with a checklist like this for new users (I didn't find anything of the sort :)

Ceph's health state is OK; the only thing is that I reconfigured the PGs, and for a few days now it has been telling me there are 2 hours left.


If you do tests locally on a box running Proxmox/Ceph, what speeds do you get then?
How can I perform this test? Thanks.

P.S. I have a subscription license on all Proxmox VE machines.
 
I tried creating a new test PBS on a 10 GbE network, with a Threadripper processor, 64 GB of RAM and 3 PM1735 SSDs in RAIDZ; benchmark performance is very similar to the QNAP on HDDs... so I guess the problem is essentially in Ceph.

Ironically, the slowest HDDs are the ones in the new ProLiant DL180 Gen10 server with disk passthrough (the others are single-disk RAID0).

Code:
root@soserver2:~# ceph tell osd.* bench
osd.0: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 0.40591237800000002,
    "bytes_per_sec": 2645255188.5471201,
    "iops": 630.67798341444018
}
osd.1: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.300498111,
    "bytes_per_sec": 466743188.73196417,
    "iops": 111.28024786280731
}
osd.2: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.9320944980000001,
    "bytes_per_sec": 366203007.69037491,
    "iops": 87.309600756257751
}
osd.3: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.7674994160000002,
    "bytes_per_sec": 387982674.10366094,
    "iops": 92.502277875819431
}
osd.4: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 3.6329952040000002,
    "bytes_per_sec": 295552777.72395319,
    "iops": 70.465273314464852
}
osd.5: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.8310698799999998,
    "bytes_per_sec": 379270689.00185537,
    "iops": 90.425178766692966
}
osd.6: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 0.53439537999999998,
    "bytes_per_sec": 2009264795.6649625,
    "iops": 479.04605762123168
}
osd.7: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.3717468460000002,
    "bytes_per_sec": 452721936.07462263,
    "iops": 107.93732072701994
}
osd.8: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.52559187,
    "bytes_per_sec": 425144631.1473912,
    "iops": 101.3623788708189
}
osd.9: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.6728767659999999,
    "bytes_per_sec": 401717669.01429981,
    "iops": 95.776955846381142
}
osd.10: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 3.4988677570000002,
    "bytes_per_sec": 306882654.21058607,
    "iops": 73.166526367804067
}
osd.11: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.719244094,
    "bytes_per_sec": 394867759.89298147,
    "iops": 94.143810246701591
}
osd.12: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 0.58941856999999998,
    "bytes_per_sec": 1821696632.3270066,
    "iops": 434.32632263350644
}
osd.13: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 3.4738759209999999,
    "bytes_per_sec": 309090436.27871132,
    "iops": 73.692902631452398
}
osd.14: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.9597833480000002,
    "bytes_per_sec": 362777169.05379385,
    "iops": 86.49281717629286
}
osd.15: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 3.3194408979999999,
    "bytes_per_sec": 323470685.8757273,
    "iops": 77.121421307498764
}
osd.16: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 12.560259536,
    "bytes_per_sec": 85487232.244083777,
    "iops": 20.381744442959732
}
osd.17: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 11.556078852000001,
    "bytes_per_sec": 92915757.823352724,
    "iops": 22.152842956388646
}
osd.18: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.8585015050000004,
    "bytes_per_sec": 156556329.86552796,
    "iops": 37.325937715894689
}
osd.19: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.6550230490000004,
    "bytes_per_sec": 161343066.14630628,
    "iops": 38.467184578491754
}
osd.20: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.1410729340000003,
    "bytes_per_sec": 174845965.11714381,
    "iops": 41.686526564870789
}
osd.21: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.6792572369999998,
    "bytes_per_sec": 160757668.98929513,
    "iops": 38.327615020107061
}
osd.22: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.4948021379999998,
    "bytes_per_sec": 165323254.07077706,
    "iops": 39.416135328001275
}
osd.23: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 10.603035271,
    "bytes_per_sec": 101267400.94289365,
    "iops": 24.144029842112936
}
osd.24: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.540347584,
    "bytes_per_sec": 164171981.71955749,
    "iops": 39.141650609864591
}
osd.25: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 7.324395429,
    "bytes_per_sec": 146598014.04886711,
    "iops": 34.951690208641793
}
osd.26: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 6.0075337270000002,
    "bytes_per_sec": 178732550.29334602,
    "iops": 42.613160680138115
}
osd.27: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 4.5867764409999996,
    "bytes_per_sec": 234095085.69070461,
    "iops": 55.81261770503631
}
osd.28: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 4.9092274839999996,
    "bytes_per_sec": 218719101.42675313,
    "iops": 52.146697384537013
}
osd.29: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 2.5922967809999999,
    "bytes_per_sec": 414204820.94098622,
    "iops": 98.754124865767054
}
osd.30: {
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 4.6967182579999998,
    "bytes_per_sec": 228615336.28743374,
    "iops": 54.506143638475834
}
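The slowest OSDs in that output stand out clearly from the rest. A rough way to map one of them back to its host and physical disk (osd.16 and the device path are just examples) is:

Code:
ceph osd find 16       # which host the OSD lives on
ceph osd metadata 16   # includes the backing device names and rotational flag
# then, on that host:
smartctl -a /dev/sdX   # drive health/errors (placeholder device path)
hdparm -W /dev/sdX     # whether the drive's volatile write cache is enabled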
 
I switched the controllers of the 2 remaining servers to HBA mode, reconfigured and rebalanced... this is the speed of the first backup run afterwards:

Code:
INFO: starting new backup job: vzdump 204 --remove 0 --notes-template '{{guestname}}' --storage pbs1 --node soserver2 --mode snapshot
INFO: Starting Backup of VM 204 (qemu)
INFO: Backup started at 2022-05-26 17:07:12
INFO: status = running
INFO: VM Name: Nodo1nable
INFO: include disk 'sata1' 'ceph_ssd_storage:vm-204-disk-1' 100G
INFO: include disk 'sata4' 'ceph_hdd_storage:vm-204-disk-0' 5000G
INFO: include disk 'efidisk0' 'ceph_ssd_storage:vm-204-disk-0' 528K
INFO: include disk 'tpmstate0' 'ceph_ssd_storage:vm-204-disk-2' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating Proxmox Backup Server archive 'vm/204/2022-05-26T15:07:12Z'
INFO: attaching TPM drive to QEMU for backup
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '9c2fa9c0-bdd6-4414-b182-9c692aa4a896'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: sata1: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: sata4: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: tpmstate0-backup: dirty-bitmap status: created new
INFO:   0% (748.0 MiB of 5.0 TiB) in 7s, read: 106.9 MiB/s, write: 9.1 MiB/s
INFO:   1% (51.5 GiB of 5.0 TiB) in 32m 40s, read: 26.6 MiB/s, write: 7.3 MiB/s
INFO:   2% (102.1 GiB of 5.0 TiB) in 1h 5m 46s, read: 26.1 MiB/s, write: 3.1 MiB/s
INFO:   3% (153.2 GiB of 5.0 TiB) in 1h 37m 44s, read: 27.3 MiB/s, write: 181.5 KiB/s
INFO:   4% (204.4 GiB of 5.0 TiB) in 2h 11m 36s, read: 25.8 MiB/s, write: 129.0 KiB/s
INFO:   5% (255.3 GiB of 5.0 TiB) in 2h 47m 17s, read: 24.4 MiB/s, write: 4.7 MiB/s
INFO:   6% (306.3 GiB of 5.0 TiB) in 3h 22m 15s, read: 24.9 MiB/s, write: 11.8 MiB/s
INFO:   7% (357.3 GiB of 5.0 TiB) in 3h 55m 34s, read: 26.1 MiB/s, write: 1.5 MiB/s
INFO:   8% (408.2 GiB of 5.0 TiB) in 4h 29m 19s, read: 25.7 MiB/s, write: 748.4 KiB/s
INFO:   9% (459.5 GiB of 5.0 TiB) in 5h 3m 17s, read: 25.8 MiB/s, write: 8.0 KiB/s
INFO:  10% (510.5 GiB of 5.0 TiB) in 5h 37m 8s, read: 25.7 MiB/s, write: 22.2 KiB/s
INFO:  11% (561.5 GiB of 5.0 TiB) in 6h 11m 37s, read: 25.2 MiB/s, write: 5.9 KiB/s
INFO:  12% (612.6 GiB of 5.0 TiB) in 6h 44m 50s, read: 26.3 MiB/s, write: 18.5 KiB/s
INFO:  13% (663.5 GiB of 5.0 TiB) in 7h 17m 1s, read: 27.0 MiB/s, write: 135.8 KiB/s
INFO:  14% (714.4 GiB of 5.0 TiB) in 7h 50m 39s, read: 25.8 MiB/s, write: 1.9 MiB/s
INFO:  15% (765.4 GiB of 5.0 TiB) in 8h 23m 50s, read: 26.2 MiB/s, write: 9.0 MiB/s
INFO:  16% (816.3 GiB of 5.0 TiB) in 8h 55m 27s, read: 27.4 MiB/s, write: 1.2 MiB/s
INFO:  17% (867.0 GiB of 5.0 TiB) in 9h 27m 54s, read: 26.7 MiB/s, write: 14.7 KiB/s
INFO:  18% (918.3 GiB of 5.0 TiB) in 10h 35s, read: 26.8 MiB/s, write: 25.1 KiB/s
INFO:  19% (969.4 GiB of 5.0 TiB) in 10h 34m 43s, read: 25.6 MiB/s, write: 18.0 KiB/s
INFO:  20% (1020.1 GiB of 5.0 TiB) in 11h 9m 24s, read: 24.9 MiB/s, write: 15.4 MiB/s
INFO:  21% (1.0 TiB of 5.0 TiB) in 11h 44m 16s, read: 25.1 MiB/s, write: 25.0 MiB/s
INFO:  22% (1.1 TiB of 5.0 TiB) in 12h 15m 47s, read: 27.4 MiB/s, write: 3.5 MiB/s
INFO:  23% (1.1 TiB of 5.0 TiB) in 12h 49m 12s, read: 26.2 MiB/s, write: 8.2 KiB/s
INFO:  24% (1.2 TiB of 5.0 TiB) in 13h 24m 45s, read: 24.3 MiB/s, write: 7.7 KiB/s
INFO:  25% (1.2 TiB of 5.0 TiB) in 13h 57m 46s, read: 26.5 MiB/s, write: 6.2 KiB/s
INFO:  26% (1.3 TiB of 5.0 TiB) in 14h 29m 49s, read: 27.0 MiB/s, write: 14.9 KiB/s
INFO:  27% (1.3 TiB of 5.0 TiB) in 15h 3m 38s, read: 25.9 MiB/s, write: 2.6 MiB/s
INFO:  28% (1.4 TiB of 5.0 TiB) in 15h 36m 4s, read: 26.6 MiB/s, write: 3.9 MiB/s
INFO:  29% (1.4 TiB of 5.0 TiB) in 16h 8m 25s, read: 27.1 MiB/s, write: 175.2 KiB/s
INFO:  30% (1.5 TiB of 5.0 TiB) in 16h 41m 3s, read: 26.6 MiB/s, write: 12.6 KiB/s
INFO:  31% (1.5 TiB of 5.0 TiB) in 17h 13m 43s, read: 26.5 MiB/s, write: 37.6 KiB/s
INFO:  32% (1.6 TiB of 5.0 TiB) in 17h 46m 37s, read: 26.6 MiB/s, write: 18.7 KiB/s
INFO:  33% (1.6 TiB of 5.0 TiB) in 18h 18m 7s, read: 27.5 MiB/s, write: 15.2 KiB/s
INFO:  34% (1.7 TiB of 5.0 TiB) in 18h 51m 1s, read: 26.5 MiB/s, write: 2.1 KiB/s
INFO:  35% (1.7 TiB of 5.0 TiB) in 19h 24m 30s, read: 26.0 MiB/s, write: 216.1 KiB/s
INFO:  36% (1.8 TiB of 5.0 TiB) in 19h 56m 39s, read: 27.2 MiB/s, write: 8.5 KiB/s
INFO:  37% (1.8 TiB of 5.0 TiB) in 20h 28m 29s, read: 27.1 MiB/s, write: 12.9 KiB/s
INFO:  38% (1.9 TiB of 5.0 TiB) in 21h 1m 38s, read: 26.4 MiB/s, write: 10.1 MiB/s
INFO:  39% (1.9 TiB of 5.0 TiB) in 21h 33m 22s, read: 27.2 MiB/s, write: 1.5 MiB/s
INFO:  40% (2.0 TiB of 5.0 TiB) in 22h 6m 43s, read: 26.2 MiB/s, write: 6.1 KiB/s
INFO:  41% (2.0 TiB of 5.0 TiB) in 22h 39m 27s, read: 26.5 MiB/s, write: 58.4 KiB/s

And that is "only" a 5 TB disk, of which only 1 TB is actually occupied.
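In that log the read rate from Ceph hovers around 25 MiB/s, which points more at the Ceph read path than at PBS itself. To rule PBS out separately, its client also ships a built-in benchmark (the repository string below is only a placeholder):

Code:
proxmox-backup-client benchmark --repository root@pam@192.168.1.50:datastore1   # TLS throughput to the PBS host plus local hash/compression speed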
 
