Backup with very slow speed

fstaske

I have a PVE node with PBS installed directly on it.
For reasons I cannot explain, the backup process is very slow.

PBS and the VM storage are on the same volume.
PVE 8.0.3
PBS 3.0-1

proxmox-backup-client benchmark reports the following:

┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 750.52 MB/s (61%) │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 450.47 MB/s (22%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed │ 487.72 MB/s (65%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed │ 778.59 MB/s (65%) │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed │ 280.86 MB/s (37%) │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed │ 1599.85 MB/s (44%) │
└───────────────────────────────────┴────────────────────┘

Backup process:

INFO: creating Proxmox Backup Server archive 'vm/102/2024-06-17T18:00:05Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '60fd5995-ed01-4b97-a5a2-a180a3af1b9f'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: OK (15.1 GiB of 100.0 GiB dirty)
INFO: scsi1: dirty-bitmap status: OK (139.9 GiB of 6.8 TiB dirty)
INFO: scsi2: dirty-bitmap status: OK (4.3 GiB of 600.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 159.4 GiB dirty of 7.5 TiB total
INFO: 0% (656.0 MiB of 159.4 GiB) in 4s, read: 164.0 MiB/s, write: 163.0 MiB/s
INFO: 1% (1.6 GiB of 159.4 GiB) in 26s, read: 45.1 MiB/s, write: 44.2 MiB/s
INFO: 2% (3.2 GiB of 159.4 GiB) in 1m 5s, read: 42.2 MiB/s, write: 42.1 MiB/s
INFO: 3% (4.8 GiB of 159.4 GiB) in 1m 45s, read: 41.7 MiB/s, write: 39.9 MiB/s
INFO: 4% (6.4 GiB of 159.4 GiB) in 2m 15s, read: 53.2 MiB/s, write: 53.2 MiB/s
INFO: 5% (8.0 GiB of 159.4 GiB) in 2m 53s, read: 42.4 MiB/s, write: 42.4 MiB/s
INFO: 6% (9.6 GiB of 159.4 GiB) in 3m 36s, read: 38.2 MiB/s, write: 38.1 MiB/s
INFO: 7% (11.2 GiB of 159.4 GiB) in 4m 19s, read: 38.0 MiB/s, write: 38.0 MiB/s
INFO: 8% (12.8 GiB of 159.4 GiB) in 5m 3s, read: 37.3 MiB/s, write: 37.3 MiB/s
INFO: 9% (14.4 GiB of 159.4 GiB) in 5m 43s, read: 40.7 MiB/s, write: 40.7 MiB/s
INFO: 10% (16.0 GiB of 159.4 GiB) in 6m 21s, read: 42.7 MiB/s, write: 42.7 MiB/s
INFO: 11% (17.6 GiB of 159.4 GiB) in 7m 7s, read: 35.7 MiB/s, write: 35.7 MiB/s
INFO: 12% (19.1 GiB of 159.4 GiB) in 7m 55s, read: 33.4 MiB/s, write: 33.4 MiB/s
INFO: 13% (20.7 GiB of 159.4 GiB) in 8m 48s, read: 31.2 MiB/s, write: 31.2 MiB/s
INFO: 14% (22.3 GiB of 159.4 GiB) in 9m 38s, read: 32.4 MiB/s, write: 32.4 MiB/s
INFO: 15% (23.9 GiB of 159.4 GiB) in 10m 27s, read: 33.1 MiB/s, write: 33.1 MiB/s
INFO: 16% (25.5 GiB of 159.4 GiB) in 11m 21s, read: 30.6 MiB/s, write: 30.6 MiB/s
INFO: 17% (27.1 GiB of 159.4 GiB) in 12m 6s, read: 36.1 MiB/s, write: 36.1 MiB/s
INFO: 18% (28.7 GiB of 159.4 GiB) in 12m 59s, read: 31.2 MiB/s, write: 31.2 MiB/s
INFO: 19% (30.3 GiB of 159.4 GiB) in 13m 46s, read: 34.3 MiB/s, write: 34.3 MiB/s
INFO: 20% (31.9 GiB of 159.4 GiB) in 14m 42s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 21% (33.5 GiB of 159.4 GiB) in 15m 36s, read: 30.0 MiB/s, write: 30.0 MiB/s
INFO: 22% (35.1 GiB of 159.4 GiB) in 16m 29s, read: 31.3 MiB/s, write: 31.3 MiB/s
INFO: 23% (36.7 GiB of 159.4 GiB) in 17m 22s, read: 30.8 MiB/s, write: 30.8 MiB/s
INFO: 24% (38.3 GiB of 159.4 GiB) in 18m 17s, read: 29.7 MiB/s, write: 29.7 MiB/s
INFO: 25% (39.9 GiB of 159.4 GiB) in 19m 3s, read: 35.3 MiB/s, write: 35.3 MiB/s
INFO: 26% (41.4 GiB of 159.4 GiB) in 19m 55s, read: 31.2 MiB/s, write: 31.2 MiB/s
INFO: 27% (43.1 GiB of 159.4 GiB) in 20m 53s, read: 28.3 MiB/s, write: 28.3 MiB/s
INFO: 28% (44.6 GiB of 159.4 GiB) in 21m 37s, read: 36.6 MiB/s, write: 36.6 MiB/s
INFO: 29% (46.2 GiB of 159.4 GiB) in 22m 30s, read: 31.4 MiB/s, write: 31.4 MiB/s
INFO: 30% (47.8 GiB of 159.4 GiB) in 23m 19s, read: 32.7 MiB/s, write: 32.7 MiB/s
INFO: 31% (49.4 GiB of 159.4 GiB) in 24m 17s, read: 28.1 MiB/s, write: 28.1 MiB/s
INFO: 32% (51.0 GiB of 159.4 GiB) in 25m 7s, read: 32.8 MiB/s, write: 32.8 MiB/s
INFO: 33% (52.6 GiB of 159.4 GiB) in 26m 4s, read: 28.5 MiB/s, write: 28.5 MiB/s
INFO: 34% (54.2 GiB of 159.4 GiB) in 27m 2s, read: 28.6 MiB/s, write: 28.6 MiB/s
INFO: 35% (55.8 GiB of 159.4 GiB) in 27m 59s, read: 28.8 MiB/s, write: 28.8 MiB/s
INFO: 36% (57.4 GiB of 159.4 GiB) in 28m 58s, read: 27.4 MiB/s, write: 27.4 MiB/s
INFO: 37% (59.0 GiB of 159.4 GiB) in 29m 52s, read: 30.1 MiB/s, write: 30.1 MiB/s
INFO: 38% (60.6 GiB of 159.4 GiB) in 30m 45s, read: 30.9 MiB/s, write: 30.9 MiB/s
INFO: 39% (62.2 GiB of 159.4 GiB) in 31m 36s, read: 31.9 MiB/s, write: 31.9 MiB/s
INFO: 40% (63.8 GiB of 159.4 GiB) in 32m 25s, read: 33.1 MiB/s, write: 33.1 MiB/s
INFO: 41% (65.4 GiB of 159.4 GiB) in 33m 22s, read: 28.8 MiB/s, write: 28.8 MiB/s
INFO: 42% (66.9 GiB of 159.4 GiB) in 34m 12s, read: 32.6 MiB/s, write: 32.6 MiB/s
INFO: 43% (68.5 GiB of 159.4 GiB) in 35m 3s, read: 31.8 MiB/s, write: 31.8 MiB/s
INFO: 44% (70.1 GiB of 159.4 GiB) in 36m 2s, read: 28.0 MiB/s, write: 28.0 MiB/s
INFO: 45% (71.7 GiB of 159.4 GiB) in 36m 57s, read: 29.7 MiB/s, write: 29.7 MiB/s
INFO: 46% (73.3 GiB of 159.4 GiB) in 37m 53s, read: 29.3 MiB/s, write: 29.3 MiB/s
INFO: 47% (74.9 GiB of 159.4 GiB) in 38m 51s, read: 27.6 MiB/s, write: 27.6 MiB/s
INFO: 48% (76.5 GiB of 159.4 GiB) in 39m 42s, read: 32.5 MiB/s, write: 32.5 MiB/s
INFO: 49% (78.1 GiB of 159.4 GiB) in 40m 43s, read: 26.6 MiB/s, write: 26.6 MiB/s
INFO: 50% (79.7 GiB of 159.4 GiB) in 41m 38s, read: 29.7 MiB/s, write: 29.7 MiB/s
INFO: 51% (81.3 GiB of 159.4 GiB) in 42m 38s, read: 27.0 MiB/s, write: 26.9 MiB/s
INFO: 52% (82.9 GiB of 159.4 GiB) in 43m 39s, read: 27.2 MiB/s, write: 27.2 MiB/s
INFO: 53% (84.5 GiB of 159.4 GiB) in 44m 34s, read: 29.2 MiB/s, write: 29.2 MiB/s
INFO: 54% (86.1 GiB of 159.4 GiB) in 45m 32s, read: 28.5 MiB/s, write: 28.5 MiB/s
INFO: 55% (87.7 GiB of 159.4 GiB) in 46m 34s, read: 26.1 MiB/s, write: 26.1 MiB/s
INFO: 56% (89.3 GiB of 159.4 GiB) in 47m 36s, read: 26.6 MiB/s, write: 26.6 MiB/s
INFO: 57% (90.8 GiB of 159.4 GiB) in 48m 35s, read: 27.3 MiB/s, write: 27.3 MiB/s
INFO: 58% (92.4 GiB of 159.4 GiB) in 49m 39s, read: 25.5 MiB/s, write: 25.5 MiB/s
INFO: 59% (94.0 GiB of 159.4 GiB) in 50m 45s, read: 24.8 MiB/s, write: 24.8 MiB/s
INFO: 60% (95.6 GiB of 159.4 GiB) in 51m 50s, read: 25.1 MiB/s, write: 25.1 MiB/s
INFO: 61% (97.2 GiB of 159.4 GiB) in 52m 46s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 62% (98.8 GiB of 159.4 GiB) in 53m 39s, read: 31.2 MiB/s, write: 31.2 MiB/s
INFO: 63% (100.4 GiB of 159.4 GiB) in 54m 31s, read: 31.2 MiB/s, write: 31.2 MiB/s
INFO: 64% (102.0 GiB of 159.4 GiB) in 55m 23s, read: 31.1 MiB/s, write: 31.1 MiB/s
INFO: 65% (103.6 GiB of 159.4 GiB) in 56m 17s, read: 30.4 MiB/s, write: 30.4 MiB/s
INFO: 66% (105.2 GiB of 159.4 GiB) in 57m 8s, read: 32.0 MiB/s, write: 32.0 MiB/s
INFO: 67% (106.8 GiB of 159.4 GiB) in 58m 11s, read: 26.0 MiB/s, write: 26.0 MiB/s
INFO: 68% (108.4 GiB of 159.4 GiB) in 58m 53s, read: 38.9 MiB/s, write: 38.9 MiB/s
INFO: 69% (110.0 GiB of 159.4 GiB) in 59m 47s, read: 29.9 MiB/s, write: 29.9 MiB/s
INFO: 70% (111.6 GiB of 159.4 GiB) in 1h 37s, read: 32.7 MiB/s, write: 32.7 MiB/s
INFO: 71% (113.2 GiB of 159.4 GiB) in 1h 1m 31s, read: 30.5 MiB/s, write: 30.5 MiB/s
INFO: 72% (114.8 GiB of 159.4 GiB) in 1h 2m 25s, read: 29.9 MiB/s, write: 29.9 MiB/s
INFO: 73% (116.4 GiB of 159.4 GiB) in 1h 3m 18s, read: 30.9 MiB/s, write: 30.9 MiB/s
INFO: 74% (117.9 GiB of 159.4 GiB) in 1h 4m 9s, read: 31.9 MiB/s, write: 31.9 MiB/s
INFO: 75% (119.5 GiB of 159.4 GiB) in 1h 5m 5s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 76% (121.1 GiB of 159.4 GiB) in 1h 5m 59s, read: 30.0 MiB/s, write: 30.0 MiB/s
INFO: 77% (122.7 GiB of 159.4 GiB) in 1h 6m 55s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 78% (124.3 GiB of 159.4 GiB) in 1h 7m 50s, read: 30.2 MiB/s, write: 30.2 MiB/s
INFO: 79% (125.9 GiB of 159.4 GiB) in 1h 8m 45s, read: 29.5 MiB/s, write: 29.5 MiB/s
INFO: 80% (127.5 GiB of 159.4 GiB) in 1h 9m 39s, read: 30.1 MiB/s, write: 30.1 MiB/s
INFO: 81% (129.1 GiB of 159.4 GiB) in 1h 10m 35s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 82% (130.7 GiB of 159.4 GiB) in 1h 11m 31s, read: 29.3 MiB/s, write: 29.3 MiB/s
INFO: 83% (132.3 GiB of 159.4 GiB) in 1h 12m 33s, read: 26.6 MiB/s, write: 26.6 MiB/s
INFO: 84% (133.9 GiB of 159.4 GiB) in 1h 13m 28s, read: 29.5 MiB/s, write: 29.5 MiB/s
INFO: 85% (135.5 GiB of 159.4 GiB) in 1h 14m 23s, read: 29.7 MiB/s, write: 29.7 MiB/s
INFO: 86% (137.1 GiB of 159.4 GiB) in 1h 15m 19s, read: 28.8 MiB/s, write: 28.8 MiB/s
INFO: 87% (138.7 GiB of 159.4 GiB) in 1h 16m 13s, read: 30.5 MiB/s, write: 30.5 MiB/s
INFO: 88% (140.3 GiB of 159.4 GiB) in 1h 17m 7s, read: 30.0 MiB/s, write: 30.0 MiB/s
INFO: 89% (141.8 GiB of 159.4 GiB) in 1h 18m 3s, read: 29.1 MiB/s, write: 29.1 MiB/s
INFO: 90% (143.4 GiB of 159.4 GiB) in 1h 18m 56s, read: 30.9 MiB/s, write: 30.9 MiB/s
INFO: 91% (145.0 GiB of 159.4 GiB) in 1h 19m 40s, read: 36.9 MiB/s, write: 36.9 MiB/s
INFO: 92% (146.6 GiB of 159.4 GiB) in 1h 20m 26s, read: 36.0 MiB/s, write: 35.6 MiB/s
INFO: 93% (148.2 GiB of 159.4 GiB) in 1h 21m 5s, read: 41.1 MiB/s, write: 41.1 MiB/s
INFO: 94% (149.8 GiB of 159.4 GiB) in 1h 21m 44s, read: 42.3 MiB/s, write: 42.3 MiB/s
INFO: 95% (151.4 GiB of 159.4 GiB) in 1h 22m 19s, read: 46.4 MiB/s, write: 46.3 MiB/s
INFO: 96% (153.0 GiB of 159.4 GiB) in 1h 23m, read: 39.9 MiB/s, write: 39.7 MiB/s
INFO: 97% (154.6 GiB of 159.4 GiB) in 1h 23m 40s, read: 41.4 MiB/s, write: 41.4 MiB/s
INFO: 98% (156.2 GiB of 159.4 GiB) in 1h 24m 22s, read: 38.5 MiB/s, write: 38.0 MiB/s
INFO: 99% (157.8 GiB of 159.4 GiB) in 1h 24m 52s, read: 54.7 MiB/s, write: 37.5 MiB/s
INFO: 100% (159.4 GiB of 159.4 GiB) in 1h 25m 37s, read: 35.6 MiB/s, write: 35.6 MiB/s
INFO: backup is sparse: 544.00 MiB (0%) total zero data
INFO: backup was done incrementally, reused 7.36 TiB (97%)
INFO: transferred 159.37 GiB in 5239 seconds (31.1 MiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 102 (01:27:41)
 
Storage is an MDADM RAID10 with 6 HDDs of 12 TB each, installed directly in the PVE/PBS host, not on network storage.
 
Well, that's SATA, but it still shouldn't be that slow. Run a test with fio (randread and randwrite)!
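For example, something along these lines (the mountpoint /mnt/raid10 and the file name are only placeholders, adjust them to wherever the RAID10 is mounted):

Code:
# random read on the RAID10, 4k blocks, direct I/O, 60 s
fio --name=randread --rw=randread --bs=4k --size=4G --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --filename=/mnt/raid10/fio-testfile
# random write against the same test file
fio --name=randwrite --rw=randwrite --bs=4k --size=4G --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --filename=/mnt/raid10/fio-testfile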
 
How is the MDADM RAID10 with the 6 HDDs structured?
As 3x 2 HDDs (RAID1)?
I also ran my first NAS / Proxmox BS with HDDs only, as ZFS RAIDZ1 but without a ZFS special device, and the transfer rates looked just as bad there.
That was because the maximum IO of the HDDs was low, so everything had to wait for the metadata.
Keep in mind that a Proxmox BS puts a very high IO load on the storage.
Today it is a ZFS RAIDZ1-0 with 3x HDD and RAIDZ1-1 with 3x HDD, plus a ZFS special device as an SSD-based ZFS mirror.
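Just as a rough sketch, a pool like that could be created along these lines (pool name and device paths are purely illustrative):

Code:
# two RAIDZ1 vdevs of 3 HDDs each, plus an SSD mirror as special device for metadata
zpool create backuppool raidz1 /dev/sda /dev/sdb /dev/sdc raidz1 /dev/sdd /dev/sde /dev/sdf special mirror /dev/nvme0n1 /dev/nvme1n1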
Code:
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync set-A   /dev/sdc1
       1       8       49        1      active sync set-B   /dev/sdd1
       2       8       65        2      active sync set-A   /dev/sde1
       3       8       81        3      active sync set-B   /dev/sdf1
       4       8       97        4      active sync set-A   /dev/sdg1
       5       8      113        5      active sync set-B   /dev/sdh1
 
If you have ext4 or btrfs on your RAID10, that is fast, or at least faster. ZFS is simply much slower than the usual filesystems, which is why a special device on SSD helps the PBS with GC. Run some performance tests on your RAID so the problem can be narrowed down.
 
The RAID10 is on ext4. In my experience ZFS has always been slower as well, so ZFS was deliberately not used here.
Which performance test would you suggest to get meaningful numbers?

Code:
dd if=/dev/zero of=output bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 3.17233 s, 677 MB/s
 
Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/md1
seq_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=120MiB/s][r=30.7k IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=2939685: Wed Jun 19 15:34:33 2024
  read: IOPS=30.0k, BW=117MiB/s (123MB/s)(7032MiB/60001msec)
    slat (usec): min=2, max=250, avg= 6.11, stdev= 1.08
    clat (nsec): min=653, max=31709k, avg=26591.54, stdev=64865.75
     lat (usec): min=29, max=31715, avg=32.70, stdev=64.91
    clat percentiles (usec):
     |  1.00th=[   25],  5.00th=[   25], 10.00th=[   26], 20.00th=[   26],
     | 30.00th=[   26], 40.00th=[   26], 50.00th=[   26], 60.00th=[   26],
     | 70.00th=[   26], 80.00th=[   26], 90.00th=[   26], 95.00th=[   28],
     | 99.00th=[   58], 99.50th=[   60], 99.90th=[   93], 99.95th=[  208],
     | 99.99th=[  253]
   bw (  KiB/s): min=31568, max=126992, per=100.00%, avg=120157.45, stdev=14064.91, samples=119
   iops        : min= 7892, max=31748, avg=30039.38, stdev=3516.22, samples=119
  lat (nsec)   : 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.03%, 50=98.09%
  lat (usec)   : 100=1.79%, 250=0.08%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=4.27%, sys=18.83%, ctx=3600119, majf=0, minf=49
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1800242,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=7032MiB (7374MB), run=60001-60001msec

Disk stats (read/write):
    md1: ios=1796631/16, merge=0/0, ticks=48536/376, in_queue=48912, util=99.73%, aggrios=300040/38, aggrmerge=0/0, aggrticks=8518/198, aggrin_queue=8887, aggrutil=64.84%
  sdh: ios=0/39, merge=0/2, ticks=0/186, in_queue=361, util=0.43%
  sdg: ios=600065/39, merge=0/2, ticks=17102/221, in_queue=17496, util=64.84%
  sdf: ios=0/36, merge=0/0, ticks=0/170, in_queue=327, util=0.40%
  sde: ios=600064/36, merge=0/0, ticks=17003/191, in_queue=17350, util=64.42%
  sdd: ios=0/41, merge=0/0, ticks=0/211, in_queue=401, util=0.45%
  sdc: ios=600114/41, merge=0/0, ticks=17004/210, in_queue=17392, util=64.51%
 
Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/md1
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=791MiB/s][r=791 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=2941000: Wed Jun 19 15:36:51 2024
  read: IOPS=744, BW=744MiB/s (780MB/s)(43.6GiB/60001msec)
    slat (usec): min=25, max=574, avg=71.42, stdev=39.39
    clat (usec): min=518, max=45794, avg=1269.55, stdev=1120.25
     lat (usec): min=562, max=45925, avg=1340.98, stdev=1120.07
    clat percentiles (usec):
     |  1.00th=[  570],  5.00th=[  570], 10.00th=[  570], 20.00th=[  586],
     | 30.00th=[  603], 40.00th=[  996], 50.00th=[ 1123], 60.00th=[ 1352],
     | 70.00th=[ 1647], 80.00th=[ 1778], 90.00th=[ 1893], 95.00th=[ 2147],
     | 99.00th=[ 3326], 99.50th=[ 6783], 99.90th=[17171], 99.95th=[20841],
     | 99.99th=[34341]
   bw (  KiB/s): min=122880, max=819200, per=100.00%, avg=762647.66, stdev=94628.93, samples=119
   iops        : min=  120, max=  800, avg=744.77, stdev=92.41, samples=119
  lat (usec)   : 750=32.31%, 1000=8.26%
  lat (msec)   : 2=51.41%, 4=7.44%, 10=0.36%, 20=0.17%, 50=0.06%
  cpu          : usr=0.28%, sys=4.58%, ctx=222147, majf=0, minf=269
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=44653,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=744MiB/s (780MB/s), 744MiB/s-744MiB/s (780MB/s-780MB/s), io=43.6GiB (46.8GB), run=60001-60001msec

Disk stats (read/write):
    md1: ios=178249/39, merge=0/0, ticks=134800/432, in_queue=135232, util=99.81%, aggrios=29769/86, aggrmerge=0/1, aggrticks=22827/399, aggrin_queue=23542, aggrutil=97.47%
  sdh: ios=29792/87, merge=0/5, ticks=21008/425, in_queue=21759, util=97.47%
  sdg: ios=29745/87, merge=0/5, ticks=22929/278, in_queue=23434, util=97.00%
  sdf: ios=29796/84, merge=0/0, ticks=20144/402, in_queue=20814, util=96.99%
  sde: ios=29743/84, merge=0/0, ticks=24076/333, in_queue=24689, util=97.07%
  sdd: ios=29745/88, merge=0/0, ticks=25505/636, in_queue=26661, util=97.34%
  sdc: ios=29793/88, merge=0/0, ticks=23300/324, in_queue=23898, util=97.18%
 
Otherwise, try: proxmox-backup-client benchmark --repository yourRepo
That gives you a first idea of how fast the backup can get.
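The repository string has the form user@realm@host:datastore; with PBS on the same host it could look roughly like this (the datastore name is only an example):

Code:
proxmox-backup-client benchmark --repository root@pam@localhost:datastore1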
 
For your fio run, add rw=randrw, a file for filename, and the size via size.
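So roughly like this (path and size are only placeholders; the file should live on the RAID10 mountpoint):

Code:
fio --rw=randrw --name=randrw_test --filename=/mnt/raid10/fio-testfile --size=4G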
 
Code:
fio --rw=randrw --name=output --size=500M
output: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.33
Starting 1 process
output: Laying out IO file (1 file / 500MiB)
Jobs: 1 (f=1): [m(1)][99.5%][r=2920KiB/s,w=2908KiB/s][r=730,w=727 IOPS][eta 00m:02s]
output: (groupid=0, jobs=1): err= 0: pid=2992695: Wed Jun 19 17:15:38 2024
  read: IOPS=160, BW=643KiB/s (658kB/s)(251MiB/399145msec)
    clat (usec): min=31, max=186626, avg=6201.67, stdev=9325.28
     lat (usec): min=31, max=186626, avg=6201.88, stdev=9325.29
    clat percentiles (usec):
     |  1.00th=[    67],  5.00th=[    81], 10.00th=[    93], 20.00th=[   127],
     | 30.00th=[   155], 40.00th=[   178], 50.00th=[   223], 60.00th=[  3589],
     | 70.00th=[  7767], 80.00th=[ 15533], 90.00th=[ 19530], 95.00th=[ 21627],
     | 99.00th=[ 34341], 99.50th=[ 43254], 99.90th=[ 69731], 99.95th=[ 93848],
     | 99.99th=[147850]
   bw (  KiB/s): min=  176, max= 2792, per=99.07%, avg=637.79, stdev=226.89, samples=797
   iops        : min=   44, max=  698, avg=159.36, stdev=56.69, samples=797
  write: IOPS=159, BW=640KiB/s (655kB/s)(249MiB/399145msec); 0 zone resets
    clat (nsec): min=1783, max=34551k, avg=14818.84, stdev=383088.97
     lat (nsec): min=1816, max=34551k, avg=15096.86, stdev=383096.66
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    4], 40.00th=[    5], 50.00th=[    6], 60.00th=[    7],
     | 70.00th=[    9], 80.00th=[   12], 90.00th=[   22], 95.00th=[   26],
     | 99.00th=[   30], 99.50th=[   33], 99.90th=[  111], 99.95th=[  186],
     | 99.99th=[24249]
   bw (  KiB/s): min=  136, max= 2920, per=99.10%, avg=634.96, stdev=244.31, samples=797
   iops        : min=   34, max=  730, avg=158.66, stdev=61.04, samples=797
  lat (usec)   : 2=0.18%, 4=18.35%, 10=19.33%, 20=4.97%, 50=6.96%
  lat (usec)   : 100=6.29%, 250=19.42%, 500=1.64%, 750=0.26%, 1000=0.07%
  lat (msec)   : 2=0.68%, 4=2.28%, 10=6.82%, 20=8.35%, 50=4.23%
  lat (msec)   : 100=0.14%, 250=0.02%
  cpu          : usr=0.12%, sys=0.67%, ctx=64350, majf=0, minf=14
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=64163,63837,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=643KiB/s (658kB/s), 643KiB/s-643KiB/s (658kB/s-658kB/s), io=251MiB (263MB), run=399145-399145msec
  WRITE: bw=640KiB/s (655kB/s), 640KiB/s-640KiB/s (655kB/s-655kB/s), io=249MiB (261MB), run=399145-399145msec

Disk stats (read/write):
    md1: ios=63818/61833, merge=0/0, ticks=394868/9802352, in_queue=10197220, util=99.61%, aggrios=10694/21272, aggrmerge=0/220, aggrticks=65908/21538, aggrin_queue=94402, aggrutil=44.93%
  sdh: ios=194/21220, merge=0/198, ticks=3147/19939, in_queue=29066, util=3.92%
  sdg: ios=21162/21226, merge=0/192, ticks=127593/26503, in_queue=162524, util=44.61%
  sdf: ios=201/21413, merge=0/248, ticks=2953/18149, in_queue=26968, util=3.84%
  sde: ios=21081/21419, merge=0/242, ticks=128424/21517, in_queue=157653, util=44.89%
  sdd: ios=202/21176, merge=0/221, ticks=3392/18485, in_queue=27861, util=3.87%
  sdc: ios=21328/21178, merge=0/219, ticks=129941/24639, in_queue=162343, util=44.93%
 
Also add the parameter direct=1.
I assume your root is also on the RAID10?
 
The RAID10 has its own mount point; root is on an SSD RAID1.


Code:
fio --rw=randrw --name=output --size=500M --direct=1
output: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.33
Starting 1 process
output: Laying out IO file (1 file / 500MiB)
Jobs: 1 (f=1): [m(1)][99.7%][r=1740KiB/s,w=1860KiB/s][r=435,w=465 IOPS][eta 00m:01s]
output: (groupid=0, jobs=1): err= 0: pid=3031710: Wed Jun 19 18:24:45 2024
  read: IOPS=212, BW=848KiB/s (868kB/s)(251MiB/302608msec)
    clat (usec): min=51, max=203143, avg=4408.25, stdev=8325.37
     lat (usec): min=51, max=203143, avg=4408.49, stdev=8325.37
    clat percentiles (usec):
     |  1.00th=[   62],  5.00th=[   75], 10.00th=[   84], 20.00th=[   97],
     | 30.00th=[  111], 40.00th=[  129], 50.00th=[  157], 60.00th=[  180],
     | 70.00th=[  318], 80.00th=[ 8717], 90.00th=[18482], 95.00th=[21890],
     | 99.00th=[32113], 99.50th=[35390], 99.90th=[52691], 99.95th=[60031],
     | 99.99th=[83362]
   bw (  KiB/s): min=  384, max= 2640, per=99.98%, avg=848.89, stdev=255.91, samples=605
   iops        : min=   96, max=  660, avg=212.11, stdev=63.95, samples=605
  write: IOPS=210, BW=844KiB/s (864kB/s)(249MiB/302608msec); 0 zone resets
    clat (usec): min=94, max=167711, avg=304.80, stdev=1716.29
     lat (usec): min=95, max=167712, avg=305.15, stdev=1716.30
    clat percentiles (usec):
     |  1.00th=[  121],  5.00th=[  137], 10.00th=[  149], 20.00th=[  163],
     | 30.00th=[  178], 40.00th=[  196], 50.00th=[  223], 60.00th=[  243],
     | 70.00th=[  260], 80.00th=[  281], 90.00th=[  334], 95.00th=[  433],
     | 99.00th=[  635], 99.50th=[  734], 99.90th=[23200], 99.95th=[35390],
     | 99.99th=[67634]
   bw (  KiB/s): min=  304, max= 2592, per=100.00%, avg=844.58, stdev=272.26, samples=605
   iops        : min=   76, max=  648, avg=211.03, stdev=68.03, samples=605
  lat (usec)   : 100=11.06%, 250=55.35%, 500=17.69%, 750=1.62%, 1000=0.05%
  lat (msec)   : 2=0.27%, 4=0.89%, 10=3.60%, 20=5.49%, 50=3.89%
  lat (msec)   : 100=0.07%, 250=0.01%
  cpu          : usr=0.21%, sys=1.03%, ctx=128039, majf=0, minf=21
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=64163,63837,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=848KiB/s (868kB/s), 848KiB/s-848KiB/s (868kB/s-868kB/s), io=251MiB (263MB), run=302608-302608msec
  WRITE: bw=844KiB/s (864kB/s), 844KiB/s-844KiB/s (864kB/s-864kB/s), io=249MiB (261MB), run=302608-302608msec

Disk stats (read/write):
    md1: ios=64165/64199, merge=0/0, ticks=281408/33764, in_queue=315172, util=99.41%, aggrios=10694/21523, aggrmerge=0/32, aggrticks=46960/8712, aggrin_queue=58618, aggrutil=49.66%
 
Are you sure you are running fio on the RAID10? Specify filename=/mount/point/file.
 
For throughput there is, for example, the following test:
fio --rw=randrw --name=randrw --bs=1024k --size=200M --direct=1 --filename=/mount/point/file --numjobs=4 --ioengine=libaio --iodepth=32
 
