Slow backup speed

Are you sure you're running fio on the raid10? Specify filename=/mount/point/file to make sure.
 
For throughput there is, for example, the following test:
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/mount/point/file --numjobs=4 --ioengine=libaio --iodepth=32
 
Code:
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/hdd-storage/test --numjobs=4 --ioengine=libaio --iodepth=32
rndrw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
rndrw: Laying out IO file (1 file / 200MiB)

rndrw: (groupid=0, jobs=1): err= 0: pid=3083981: Wed Jun 19 19:54:32 2024
  read: IOPS=115, BW=116MiB/s (122MB/s)(99.0MiB/854msec)
    slat (usec): min=41, max=239, avg=75.25, stdev=30.61
    clat (msec): min=14, max=327, avg=123.87, stdev=81.85
     lat (msec): min=14, max=327, avg=123.94, stdev=81.85
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   15], 10.00th=[   40], 20.00th=[   89],
     | 30.00th=[   95], 40.00th=[   99], 50.00th=[  100], 60.00th=[  102],
     | 70.00th=[  116], 80.00th=[  125], 90.00th=[  300], 95.00th=[  300],
     | 99.00th=[  330], 99.50th=[  330], 99.90th=[  330], 99.95th=[  330],
     | 99.99th=[  330]
   bw (  KiB/s): min=63488, max=63488, per=13.82%, avg=63488.00, stdev= 0.00, samples=1
   iops        : min=   62, max=   62, avg=62.00, stdev= 0.00, samples=1
  write: IOPS=118, BW=118MiB/s (124MB/s)(101MiB/854msec); 0 zone resets
    slat (usec): min=122, max=227520, avg=8231.77, stdev=24260.81
    clat (msec): min=14, max=327, avg=130.19, stdev=83.97
     lat (msec): min=14, max=350, avg=138.42, stdev=86.39
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   39], 10.00th=[   66], 20.00th=[   90],
     | 30.00th=[   95], 40.00th=[  100], 50.00th=[  102], 60.00th=[  116],
     | 70.00th=[  122], 80.00th=[  144], 90.00th=[  300], 95.00th=[  321],
     | 99.00th=[  330], 99.50th=[  330], 99.90th=[  330], 99.95th=[  330],
     | 99.99th=[  330]
   bw (  KiB/s): min=75776, max=75776, per=15.15%, avg=75776.00, stdev= 0.00, samples=1
   iops        : min=   74, max=   74, avg=74.00, stdev= 0.00, samples=1
  lat (msec)   : 20=5.00%, 50=5.50%, 100=39.50%, 250=34.50%, 500=15.50%
  cpu          : usr=1.99%, sys=1.88%, ctx=409, majf=0, minf=11
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=99,101,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083982: Wed Jun 19 19:54:32 2024
  read: IOPS=109, BW=110MiB/s (115MB/s)(93.0MiB/848msec)
    slat (usec): min=41, max=137, avg=67.45, stdev=21.71
    clat (msec): min=8, max=327, avg=129.31, stdev=88.97
     lat (msec): min=8, max=327, avg=129.38, stdev=88.96
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   60], 10.00th=[   72], 20.00th=[   78],
     | 30.00th=[   78], 40.00th=[   94], 50.00th=[   99], 60.00th=[  101],
     | 70.00th=[  107], 80.00th=[  126], 90.00th=[  305], 95.00th=[  326],
     | 99.00th=[  330], 99.50th=[  330], 99.90th=[  330], 99.95th=[  330],
     | 99.99th=[  330]
   bw (  KiB/s): min=69632, max=69632, per=15.16%, avg=69632.00, stdev= 0.00, samples=1
   iops        : min=   68, max=   68, avg=68.00, stdev= 0.00, samples=1
  write: IOPS=126, BW=126MiB/s (132MB/s)(107MiB/848msec); 0 zone resets
    slat (usec): min=141, max=227120, avg=7774.02, stdev=23627.32
    clat (msec): min=8, max=344, avg=119.70, stdev=83.08
     lat (msec): min=8, max=345, avg=127.47, stdev=84.89
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   33], 10.00th=[   69], 20.00th=[   77],
     | 30.00th=[   83], 40.00th=[   94], 50.00th=[   95], 60.00th=[  101],
     | 70.00th=[  106], 80.00th=[  121], 90.00th=[  321], 95.00th=[  330],
     | 99.00th=[  347], 99.50th=[  347], 99.90th=[  347], 99.95th=[  347],
     | 99.99th=[  347]
   bw (  KiB/s): min=86016, max=86016, per=17.20%, avg=86016.00, stdev= 0.00, samples=1
   iops        : min=   84, max=   84, avg=84.00, stdev= 0.00, samples=1
  lat (msec)   : 10=1.50%, 50=3.00%, 100=54.00%, 250=26.00%, 500=15.50%
  cpu          : usr=2.24%, sys=1.65%, ctx=423, majf=0, minf=12
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=93,107,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083983: Wed Jun 19 19:54:32 2024
  read: IOPS=112, BW=112MiB/s (118MB/s)(94.0MiB/838msec)
    slat (usec): min=40, max=227, avg=70.24, stdev=28.36
    clat (msec): min=21, max=328, avg=118.29, stdev=82.81
     lat (msec): min=21, max=328, avg=118.36, stdev=82.81
    clat percentiles (msec):
     |  1.00th=[   22],  5.00th=[   50], 10.00th=[   72], 20.00th=[   75],
     | 30.00th=[   78], 40.00th=[   94], 50.00th=[   95], 60.00th=[   99],
     | 70.00th=[  101], 80.00th=[  116], 90.00th=[  305], 95.00th=[  326],
     | 99.00th=[  330], 99.50th=[  330], 99.90th=[  330], 99.95th=[  330],
     | 99.99th=[  330]
   bw (  KiB/s): min=67584, max=67584, per=14.72%, avg=67584.00, stdev= 0.00, samples=1
   iops        : min=   66, max=   66, avg=66.00, stdev= 0.00, samples=1
  write: IOPS=126, BW=126MiB/s (133MB/s)(106MiB/838msec); 0 zone resets
    slat (usec): min=134, max=227184, avg=7625.16, stdev=23603.92
    clat (msec): min=21, max=350, avg=129.30, stdev=90.27
     lat (msec): min=21, max=351, avg=136.93, stdev=92.07
    clat percentiles (msec):
     |  1.00th=[   22],  5.00th=[   50], 10.00th=[   71], 20.00th=[   75],
     | 30.00th=[   79], 40.00th=[   95], 50.00th=[  100], 60.00th=[  102],
     | 70.00th=[  116], 80.00th=[  127], 90.00th=[  321], 95.00th=[  326],
     | 99.00th=[  330], 99.50th=[  351], 99.90th=[  351], 99.95th=[  351],
     | 99.99th=[  351]
   bw (  KiB/s): min=79872, max=79872, per=15.97%, avg=79872.00, stdev= 0.00, samples=1
   iops        : min=   78, max=   78, avg=78.00, stdev= 0.00, samples=1
  lat (msec)   : 50=9.00%, 100=51.00%, 250=24.50%, 500=15.50%
  cpu          : usr=2.51%, sys=1.43%, ctx=420, majf=0, minf=11
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=94,106,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083984: Wed Jun 19 19:54:32 2024
  read: IOPS=113, BW=114MiB/s (120MB/s)(97.0MiB/851msec)
    slat (usec): min=40, max=196, avg=71.47, stdev=27.59
    clat (msec): min=14, max=305, avg=126.46, stdev=73.90
     lat (msec): min=14, max=305, avg=126.53, stdev=73.90
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   36], 10.00th=[   70], 20.00th=[   87],
     | 30.00th=[   94], 40.00th=[   99], 50.00th=[  102], 60.00th=[  104],
     | 70.00th=[  116], 80.00th=[  144], 90.00th=[  284], 95.00th=[  300],
     | 99.00th=[  305], 99.50th=[  305], 99.90th=[  305], 99.95th=[  305],
     | 99.99th=[  305]
   bw (  KiB/s): min=63615, max=63615, per=13.85%, avg=63615.00, stdev= 0.00, samples=1
   iops        : min=   62, max=   62, avg=62.00, stdev= 0.00, samples=1
  write: IOPS=121, BW=121MiB/s (127MB/s)(103MiB/851msec); 0 zone resets
    slat (usec): min=129, max=223806, avg=8067.89, stdev=23735.10
    clat (msec): min=12, max=324, avg=125.83, stdev=76.45
     lat (msec): min=12, max=324, avg=133.90, stdev=78.36
    clat percentiles (msec):
     |  1.00th=[   13],  5.00th=[   37], 10.00th=[   64], 20.00th=[   86],
     | 30.00th=[   95], 40.00th=[   97], 50.00th=[  100], 60.00th=[  103],
     | 70.00th=[  125], 80.00th=[  144], 90.00th=[  284], 95.00th=[  296],
     | 99.00th=[  305], 99.50th=[  326], 99.90th=[  326], 99.95th=[  326],
     | 99.99th=[  326]
   bw (  KiB/s): min=73875, max=73875, per=14.77%, avg=73875.00, stdev= 0.00, samples=1
   iops        : min=   72, max=   72, avg=72.00, stdev= 0.00, samples=1
  lat (msec)   : 20=2.50%, 50=3.50%, 100=45.00%, 250=33.50%, 500=15.50%
  cpu          : usr=2.35%, sys=1.53%, ctx=403, majf=0, minf=11
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=97,103,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=448MiB/s (470MB/s), 110MiB/s-116MiB/s (115MB/s-122MB/s), io=383MiB (402MB), run=838-854msec
  WRITE: bw=488MiB/s (512MB/s), 118MiB/s-126MiB/s (124MB/s-133MB/s), io=417MiB (437MB), run=838-854msec

Disk stats (read/write):
    md1: ios=1077/1229, merge=0/0, ticks=24588/47892, in_queue=72480, util=84.61%, aggrios=255/564, aggrmerge=0/9, aggrticks=4904/9921, aggrin_queue=15019, aggrutil=81.23%
  sdh: ios=215/559, merge=0/15, ticks=4676/9293, in_queue=14170, util=75.38%
  sdg: ios=297/559, merge=0/15, ticks=3573/8916, in_queue=12698, util=78.85%
  sdf: ios=189/560, merge=0/3, ticks=2276/8956, in_queue=11403, util=72.69%
  sde: ios=331/560, merge=0/3, ticks=8356/11660, in_queue=20215, util=81.23%
  sdd: ios=245/574, merge=0/10, ticks=6327/10906, in_queue=17436, util=77.38%
  sdc: ios=255/576, merge=0/8, ticks=4220/9800, in_queue=14197, util=75.07%
 
Code:
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/hdd-storage/test --numjobs=1 --ioengine=libaio --iodepth=32
rndrw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
rndrw: Laying out IO file (1 file / 200MiB)

rndrw: (groupid=0, jobs=1): err= 0: pid=3719598: Thu Jun 20 12:33:53 2024
  read: IOPS=529, BW=529MiB/s (555MB/s)(99.0MiB/187msec)
    slat (usec): min=25, max=181, avg=49.10, stdev=29.47
    clat (usec): min=2503, max=78039, avg=26422.71, stdev=16427.15
     lat (usec): min=2639, max=78092, avg=26471.81, stdev=16422.63
    clat percentiles (usec):
     |  1.00th=[ 2507],  5.00th=[ 4752], 10.00th=[ 9241], 20.00th=[13304],
     | 30.00th=[20841], 40.00th=[22152], 50.00th=[22676], 60.00th=[23725],
     | 70.00th=[25560], 80.00th=[33817], 90.00th=[54789], 95.00th=[63177],
     | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119],
     | 99.99th=[78119]
  write: IOPS=540, BW=540MiB/s (566MB/s)(101MiB/187msec); 0 zone resets
    slat (usec): min=64, max=44533, avg=1368.45, stdev=5894.53
    clat (usec): min=2478, max=78699, avg=30648.41, stdev=17894.93
     lat (usec): min=2624, max=78813, avg=32016.86, stdev=17503.11
    clat percentiles (usec):
     |  1.00th=[ 2540],  5.00th=[ 4293], 10.00th=[13304], 20.00th=[21890],
     | 30.00th=[23200], 40.00th=[23462], 50.00th=[25560], 60.00th=[26346],
     | 70.00th=[33817], 80.00th=[43254], 90.00th=[60556], 95.00th=[71828],
     | 99.00th=[77071], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168],
     | 99.99th=[79168]
  lat (msec)   : 4=3.50%, 10=6.00%, 20=12.50%, 50=64.50%, 100=13.50%
  cpu          : usr=4.30%, sys=6.45%, ctx=234, majf=0, minf=11
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=99,101,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=529MiB/s (555MB/s), 529MiB/s-529MiB/s (555MB/s-555MB/s), io=99.0MiB (104MB), run=187-187msec
  WRITE: bw=540MiB/s (566MB/s), 540MiB/s-540MiB/s (566MB/s-566MB/s), io=101MiB (106MB), run=187-187msec

Disk stats (read/write):
    md1: ios=323/326, merge=0/0, ticks=3084/5204, in_queue=8288, util=57.76%, aggrios=66/134, aggrmerge=0/0, aggrticks=648/1524, aggrin_queue=2172, aggrutil=52.89%
  sdh: ios=36/144, merge=0/0, ticks=144/1264, in_queue=1409, util=39.67%
  sdg: ios=90/144, merge=0/0, ticks=555/1675, in_queue=2230, util=40.77%
  sdf: ios=28/138, merge=0/0, ticks=257/1086, in_queue=1343, util=37.47%
  sde: ios=96/138, merge=0/0, ticks=905/1804, in_queue=2709, util=44.08%
  sdd: ios=36/122, merge=0/0, ticks=229/1016, in_queue=1245, util=36.36%
  sdc: ios=110/122, merge=0/0, ticks=1799/2300, in_queue=4100, util=52.89%
 
So from these numbers I can't tell why the backup is so slow. What is definitely problematic is that source and target are the same during your backup. If you could change that, it should definitely be faster. Maybe you can run a test with a USB disk attached to the machine (USB3, please). Then set up a datastore on it and run a test backup.
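A rough sketch of such a test setup, assuming the USB disk shows up as /dev/sdX with a single partition (both hypothetical) and that the target is a PBS datastore:
Code:
# format and mount the USB disk (device name hypothetical)
mkfs.ext4 /dev/sdX1
mkdir -p /mnt/usb-backup
mount /dev/sdX1 /mnt/usb-backup
# register a PBS datastore on the mount
proxmox-backup-manager datastore create usb-test /mnt/usb-backup/store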
 
Exactly the same configuration, the same hardware, but with PBS on a separate server: exactly the same result. PVE and PBS are connected via 10 Gbit fiber.
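To rule out the 10 Gbit link itself, a quick throughput check with iperf3 could help; a minimal sketch, assuming the PBS host is reachable at 192.0.2.10 (hypothetical address):
Code:
# on the PBS host
iperf3 -s
# on the PVE host, run a 30-second test
iperf3 -c 192.0.2.10 -t 30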
 
That only looks like that on your side because ZFS caches in RAM.
Those are certainly not real values.
Definitely not with an ordinary Crucial MX500.
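One way to largely take the ZFS ARC out of such a measurement (a sketch, assuming a dedicated test dataset rpool/fio is acceptable) is to restrict caching to metadata before running fio:
Code:
# hypothetical test dataset; primarycache=metadata keeps file data out of the ARC
zfs create rpool/fio
zfs set primarycache=metadata rpool/fio
# --direct=1 is omitted here because O_DIRECT support on ZFS depends on the version
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --filename=/rpool/fio/test --numjobs=1 --ioengine=libaio --iodepth=32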
 
Well, comparing apples with oranges is of course pointless. A reference measurement certainly wouldn't hurt.
Tell us a bit more about your new PBS. How is it built? HW controller, filesystem, ...?
It would also be interesting to see the CPU load on the source during a backup.
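For the CPU question, watching the load while a backup runs is usually enough; a small sketch (pidstat comes from the sysstat package):
Code:
# overall system load, refreshed every 5 seconds
vmstat 5
# per-process CPU usage, e.g. of the backup workers
pidstat -u 5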
 
Completely identical.
Used exclusively for the PBS.
CPU: 2x Xeon Silver 4215 at 5%, load 0.76, 0.59, 0.68; 64 GB RAM.
They all have a PERC H730P, but the disks are in non-RAID mode, since software RAID is used.
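Since mdadm runs on top of the disks exposed by the H730P, the state of the array (md1 in the fio disk stats above) can be checked like this:
Code:
cat /proc/mdstat
mdadm --detail /dev/md1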
 
That only looks like that on your side because ZFS caches in RAM.
Those are certainly not real values.
Definitely not with an ordinary Crucial MX500.
Well, no: everything is exactly as I described. I just wanted to contribute a comparison of some of my servers with yours.
An Intel Core i7-4770 is, after all, something quite different from a Ryzen 5 5600; I'll leave the Ryzen 7 and Ryzen 9 out of it entirely.
My tip is still to dissolve the mdadm raid10, create a ZFS pool as a stripe across three two-disk mirrors (mirror-0, mirror-1, mirror-2, i.e. raid1 vdevs), and compare that; see the sketch below. If a ZFS special device is then missing, so be it.
A ZFS pool striped across three two-disk mirrors performs better than a pool striped across two three-disk raidz1 vdevs (raidz1-0, raidz1-1), because there the parity also has to be written, spread across the three disks of each raidz1.
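A minimal sketch of such a pool, assuming the six disks from the disk stats above (sdc through sdh; in practice /dev/disk/by-id paths are preferable) and a hypothetical pool name hdd-pool:
Code:
# stripe across three two-disk mirrors (device names hypothetical)
zpool create hdd-pool \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    mirror /dev/sdg /dev/sdh
zpool status hdd-pool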
 
For the record, another fio test run with 4x Crucial MX500 500GB.

The ZFS pool is organized as follows:
mirror-0 (2x Crucial MX500 500GB, SATA3) striped with mirror-1 (2x Crucial MX500 500GB, SATA3)

Hardware:
Proxmox VE server: Intel Core i5-11400, 2x 16 GB DDR4, 3200 MT/s @ 1.2 V, CL22

Code:
Pool: rpool
mirror-0
  ata-CT500MX500SSD1_AAA-part3
  ata-CT500MX500SSD1_BBB-part3
mirror-1
  ata-CT500MX500SSD1_CCC
  ata-CT500MX500SSD1_DDD

Code:
$ fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=test.raw --numjobs=1 --ioengine=libaio --iodepth=32

rndrw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
rndrw: Laying out IO file (1 file / 200MiB)

rndrw: (groupid=0, jobs=1): err= 0: pid=32090: Thu Jun 20 21:31:05 2024
  read: IOPS=5823, BW=5824MiB/s (6106MB/s)(99.0MiB/17msec)
    slat (nsec): min=66539, max=80524, avg=71218.84, stdev=2340.63
    clat (nsec): min=1033, max=2767.6k, avg=2392845.48, stdev=636371.48
     lat (usec): min=70, max=2837, avg=2464.06, stdev=636.33
    clat percentiles (nsec):
     |  1.00th=[   1032],  5.00th=[ 481280], 10.00th=[1335296],
     | 20.00th=[2506752], 30.00th=[2539520], 40.00th=[2572288],
     | 50.00th=[2605056], 60.00th=[2637824], 70.00th=[2670592],
     | 80.00th=[2703360], 90.00th=[2703360], 95.00th=[2736128],
     | 99.00th=[2768896], 99.50th=[2768896], 99.90th=[2768896],
     | 99.95th=[2768896], 99.99th=[2768896]
  write: IOPS=5941, BW=5941MiB/s (6230MB/s)(101MiB/17msec); 0 zone resets
    slat (usec): min=83, max=103, avg=95.41, stdev= 5.14
    clat (usec): min=71, max=2753, avg=2432.61, stdev=533.62
     lat (usec): min=166, max=2849, avg=2528.02, stdev=532.98
    clat percentiles (usec):
     |  1.00th=[  383],  5.00th=[  996], 10.00th=[ 1778], 20.00th=[ 2507],
     | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2671],
     | 70.00th=[ 2671], 80.00th=[ 2671], 90.00th=[ 2704], 95.00th=[ 2737],
     | 99.00th=[ 2737], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769],
     | 99.99th=[ 2769]
  lat (usec)   : 2=0.50%, 100=0.50%, 250=1.00%, 500=1.50%, 750=1.50%
  lat (usec)   : 1000=1.50%
  lat (msec)   : 2=5.50%, 4=88.00%
  cpu          : usr=37.50%, sys=62.50%, ctx=0, majf=0, minf=11
  IO depths    : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=99,101,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=5824MiB/s (6106MB/s), 5824MiB/s-5824MiB/s (6106MB/s-6106MB/s), io=99.0MiB (104MB), run=17-17msec
  WRITE: bw=5941MiB/s (6230MB/s), 5941MiB/s-5941MiB/s (6230MB/s-6230MB/s), io=101MiB (106MB), run=17-17msec
 
Here is also a Proxmox VE backup of a VM "linux-mint" in the in-house vma.zst format, written to the ZFS pool rpool.

Code:
INFO: starting new backup job: vzdump 300 --mode snapshot --notes-template '{{guestname}}' --compress zstd --notification-mode auto --node proxmox-pve --storage pve-backup --remove 0
INFO: Starting Backup of VM 300 (qemu)
INFO: Backup started at 2024-06-20 21:27:44
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: linux-mint
INFO: include disk 'scsi0' 'ssd-data:vm-300-disk-0' 32G
INFO: creating vzdump archive '/rpool/pve/backup/dump/vzdump-qemu-300-2024_06_20-21_27_44.vma.zst'
INFO: starting kvm to execute backup task
INFO: started backup task 'd6a80799-a0ea-4b37-a624-0d3c0f85d432'
INFO:   9% (3.1 GiB of 32.0 GiB) in 3s, read: 1.0 GiB/s, write: 325.8 MiB/s
INFO:  14% (4.8 GiB of 32.0 GiB) in 6s, read: 580.0 MiB/s, write: 407.9 MiB/s
INFO:  23% (7.5 GiB of 32.0 GiB) in 9s, read: 913.5 MiB/s, write: 382.1 MiB/s
INFO:  30% (9.8 GiB of 32.0 GiB) in 12s, read: 796.3 MiB/s, write: 312.0 MiB/s
INFO:  33% (10.9 GiB of 32.0 GiB) in 15s, read: 364.9 MiB/s, write: 262.5 MiB/s
INFO:  38% (12.2 GiB of 32.0 GiB) in 18s, read: 476.1 MiB/s, write: 293.6 MiB/s
INFO:  41% (13.4 GiB of 32.0 GiB) in 21s, read: 387.9 MiB/s, write: 285.7 MiB/s
INFO:  51% (16.3 GiB of 32.0 GiB) in 24s, read: 1010.2 MiB/s, write: 288.0 MiB/s
INFO:  65% (20.9 GiB of 32.0 GiB) in 27s, read: 1.5 GiB/s, write: 313.6 MiB/s
INFO:  74% (23.8 GiB of 32.0 GiB) in 30s, read: 988.9 MiB/s, write: 281.2 MiB/s
INFO:  83% (26.8 GiB of 32.0 GiB) in 33s, read: 1020.4 MiB/s, write: 309.4 MiB/s
INFO:  88% (28.2 GiB of 32.0 GiB) in 36s, read: 484.7 MiB/s, write: 276.2 MiB/s
INFO:  91% (29.4 GiB of 32.0 GiB) in 39s, read: 408.4 MiB/s, write: 283.0 MiB/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 41s, read: 1.3 GiB/s, write: 280.5 MiB/s
INFO: backup is sparse: 19.67 GiB (61%) total zero data
INFO: transferred 32.00 GiB in 41 seconds (799.2 MiB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 4.27GB
INFO: adding notes to backup
trying to acquire lock...
 OK
INFO: Finished Backup of VM 300 (00:00:43)
INFO: Backup finished at 2024-06-20 21:28:27
INFO: Backup job finished successfully
TASK OK
 
Is there a particular reason you haven't upgraded the PVE yet?
 