[SOLVED] Restore speed slow?

SRU

Hello,
For backup storage we are using a software RAID of 10 HDDs (no SSDs), configured as a ZFS raidz2 pool via the GUI.
We see about 160 MB/s restore speed in the logs.
As we have some multi-TB servers to restore, I would like to ask whether this can be optimized.

Code:
Proxmox Virtual Environment 7.1-10
Storage 'pmb-03' on node 'p12'
Logs
Using encryption key from file descriptor..
Fingerprint: xxx
Using encryption key from file descriptor..
Fingerprint: xxx
new volume ID is 'xxx:vm-20132-disk-0'
new volume ID is 'xxx:vm-20132-disk-2'
restore proxmox backup image: /usr/bin/pbs-restore --repository backup@pbs@xxx.xxx.xxx.xxx:backup-pool-3 vm/132/2022-03-12T19:28:04Z drive-sata0.img.fidx 'rbd:datacenter01/vm-20132-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/datacenter01.keyring' --verbose --format raw --keyfile /etc/pve/priv/storage/pmb-03.enc --skip-zero
connecting to repository 'backup@pbs@xxx.xxx.xxx.xxx:backup-pool-3'
open block backend for target 'rbd:datacenter01/vm-20132-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/datacenter01.keyring'
starting to restore snapshot 'vm/132/2022-03-12T19:28:04Z'
download and verify backup index
progress 1% (read 2684354560 bytes, zeroes = 3% (104857600 bytes), duration 66 sec)
...
progress 100% (read 268435456000 bytes, zeroes = 67% (181781135360 bytes), duration 1595 sec)
restore image complete (bytes=268435456000, duration=1595.59s, speed=160.44MB/s)


lsblk
-----

NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0  14.6T  0 disk 
├─sda1              8:1    0  14.6T  0 part 
└─sda9              8:9    0     8M  0 part 
sdb                 8:16   0  14.6T  0 disk 
├─sdb1              8:17   0  14.6T  0 part 
└─sdb9              8:25   0     8M  0 part 
sdc                 8:32   0  14.6T  0 disk 
├─sdc1              8:33   0  14.6T  0 part 
└─sdc9              8:41   0     8M  0 part 
sdd                 8:48   0  14.6T  0 disk 
├─sdd1              8:49   0  14.6T  0 part 
└─sdd9              8:57   0     8M  0 part 
sde                 8:64   0  14.6T  0 disk 
├─sde1              8:65   0  14.6T  0 part 
└─sde9              8:73   0     8M  0 part 
sdf                 8:80   0  14.6T  0 disk 
├─sdf1              8:81   0  14.6T  0 part 
└─sdf9              8:89   0     8M  0 part 
sdg                 8:96   0  14.6T  0 disk 
├─sdg1              8:97   0  14.6T  0 part 
└─sdg9              8:105  0     8M  0 part 
sdh                 8:112  0  14.6T  0 disk 
├─sdh1              8:113  0  14.6T  0 part 
└─sdh9              8:121  0     8M  0 part 
sdi                 8:128  0  14.6T  0 disk 
├─sdi1              8:129  0  14.6T  0 part 
└─sdi9              8:137  0     8M  0 part 
sdj                 8:144  0  14.6T  0 disk 
├─sdj1              8:145  0  14.6T  0 part 
└─sdj9              8:153  0     8M  0 part 
nvme0n1           259:0    0 894.3G  0 disk 
├─nvme0n1p1       259:1    0     2G  0 part 
│ └─md0             9:0    0     2G  0 raid1 /boot
├─nvme0n1p2       259:2    0    16G  0 part 
│ └─md1             9:1    0    16G  0 raid1 [SWAP]
└─nvme0n1p3       259:3    0 876.3G  0 part 
  └─md2             9:2    0 876.1G  0 raid1
    ├─vg0-root    253:0    0    20G  0 lvm   /
    ├─vg0-var     253:1    0    20G  0 lvm   /var
    ├─vg0-var_log 253:2    0    40G  0 lvm   /var/log
    ├─vg0-home    253:3    0    10G  0 lvm   /home
    └─vg0-tmp     253:4    0    10G  0 lvm   /tmp
nvme1n1           259:4    0 894.3G  0 disk 
├─nvme1n1p1       259:5    0     2G  0 part 
│ └─md0             9:0    0     2G  0 raid1 /boot
├─nvme1n1p2       259:6    0    16G  0 part 
│ └─md1             9:1    0    16G  0 raid1 [SWAP]
└─nvme1n1p3       259:7    0 876.3G  0 part 
  └─md2             9:2    0 876.1G  0 raid1
    ├─vg0-root    253:0    0    20G  0 lvm   /
    ├─vg0-var     253:1    0    20G  0 lvm   /var
    ├─vg0-var_log 253:2    0    40G  0 lvm   /var/log
    ├─vg0-home    253:3    0    10G  0 lvm   /home
    └─vg0-tmp     253:4    0    10G  0 lvm   /tmp
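For context, the GUI-created pool corresponds roughly to the following CLI layout (the pool name 'backup-pool-3' is only assumed from the repository string above; the GUI normally uses /dev/disk/by-id paths instead of the sdX names shown here):

Code:
# Rough CLI equivalent of the 10-disk raidz2 pool created via the GUI
zpool create backup-pool-3 raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj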
 
Maybe an L2ARC SSD with "secondarycache=metadata" might help, so the metadata of those millions of chunks can be read from SSD instead of the HDDs and the HDDs' poor IOPS performance doesn't become the bottleneck as quickly. Not sure how much that would help with restores, but at least it would help with prune and GC jobs. Also keep in mind that a 10-disk raidz2 has roughly the same IOPS performance as a single HDD. And PBS needs a lot of IOPS, because everything is stored as small chunks (max. 4 MB each), and because of copy-on-write and deduplication the reads/writes aren't sequential.
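A minimal sketch of how that could look, assuming the pool is called 'backup-pool-3' (taken from the repository string above) and the spare SSD shows up as /dev/nvme2n1 (both placeholders, adjust to your setup):

Code:
# Add the SSD as an L2ARC (cache) device to the pool
zpool add backup-pool-3 cache /dev/nvme2n1

# Let the L2ARC hold only metadata, so chunk lookups can be served
# from the SSD instead of the HDDs
zfs set secondarycache=metadata backup-pool-3

Keep in mind that the L2ARC only fills over time and its headers also consume some RAM from the ARC.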
 
Thank you very much for the suggestion, much appreciated.
We will evaluate such a setup and compare it to a first-level / second-level setup where the first level is pure SSD but much smaller, and thus holds only a minimum of backups.
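Such a tiered setup is usually built with a sync job that pulls the newest backups from the small SSD-backed first-level PBS into the big HDD-backed second level. A minimal sketch, run on the HDD box (remote name, host, auth-id, fingerprint, datastore names and schedule are all placeholders):

Code:
# Register the SSD-backed first-level PBS as a remote
proxmox-backup-manager remote create pbs-fast \
    --host pbs-fast.example.com \
    --auth-id sync@pbs \
    --fingerprint 'xx:xx:xx:...' \
    --password 'SECRET'

# Pull its datastore 'fast-store' into the local datastore 'backup-pool-3' once a day
proxmox-backup-manager sync-job create fast-to-hdd \
    --remote pbs-fast \
    --remote-store fast-store \
    --store backup-pool-3 \
    --schedule 'daily'

Pruning can then be configured more aggressively on the first level than on the second, so the SSDs only ever hold the most recent backups.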
 