Where is the bottleneck? More than 3 days to restore a VM over LAN

Fathi

Renowned Member
May 13, 2016
Tunis, Tunisia
Hi,

First of all, thanks to the proxmox team for the great software they are making and also for making it open source. The following is not a criticism, but I want to understand what I am doing wrong.

It took me more than 73 hours to restore a 3.7 TB VM over a gigabyte LAN. I was unable to do the restore over a fiber-optic (FO) internet connection, so I brought the PBS server back to the office and retried. Below is the log of the successful restore.
I read somewhere on this forum that integrity checks during restoration could be the bottleneck. Both PVE and PBS host the VM disks and backups on ZFS. Given that ZFS has many features to protect data, among them protection against bit rot, is it necessary to do periodic backup verification, and especially during restoration, when integrity is already guaranteed by ZFS?
Disks on both sides are IronWolf SATA; PVE runs on an Intel i7 and PBS on an Intel i5, both with 8 GB of RAM.

open block backend for target '/dev/zvol/zlocal/vm-100-disk-0'
starting to restore snapshot 'vm/100/2024-09-10T02:30:18Z'
download and verify backup index
progress 1% (read 52613349376 bytes, zeroes = 14% (7839154176 bytes), duration 3423 sec)
progress 2% (read 105226698752 bytes, zeroes = 23% (24901582848 bytes), duration 5697 sec)
progress 3% (read 157840048128 bytes, zeroes = 23% (36947623936 bytes), duration 8934 sec)
progress 4% (read 210453397504 bytes, zeroes = 34% (71827456000 bytes), duration 10221 sec)
progress 5% (read 263066746880 bytes, zeroes = 27% (72561459200 bytes), duration 12954 sec)
progress 6% (read 315680096256 bytes, zeroes = 23% (73601646592 bytes), duration 15730 sec)
progress 7% (read 368293445632 bytes, zeroes = 20% (74948018176 bytes), duration 18532 sec)
progress 8% (read 420906795008 bytes, zeroes = 18% (76474744832 bytes), duration 21949 sec)
progress 9% (read 473520144384 bytes, zeroes = 17% (80572579840 bytes), duration 25194 sec)
progress 10% (read 526133493760 bytes, zeroes = 16% (85588967424 bytes), duration 28542 sec)
progress 11% (read 578746843136 bytes, zeroes = 15% (89640665088 bytes), duration 32217 sec)
progress 12% (read 631360192512 bytes, zeroes = 14% (93386178560 bytes), duration 35551 sec)
progress 13% (read 683973541888 bytes, zeroes = 14% (99019128832 bytes), duration 39424 sec)
progress 14% (read 736586891264 bytes, zeroes = 13% (102601064448 bytes), duration 42920 sec)
progress 15% (read 789200240640 bytes, zeroes = 13% (108267569152 bytes), duration 46253 sec)
progress 16% (read 841813590016 bytes, zeroes = 15% (133001379840 bytes), duration 47954 sec)
progress 17% (read 894426939392 bytes, zeroes = 15% (135761231872 bytes), duration 51616 sec)
progress 18% (read 947040288768 bytes, zeroes = 14% (138688856064 bytes), duration 54542 sec)
progress 19% (read 999653638144 bytes, zeroes = 14% (141645840384 bytes), duration 56489 sec)
progress 20% (read 1052266987520 bytes, zeroes = 13% (144267280384 bytes), duration 58005 sec)
progress 21% (read 1104880336896 bytes, zeroes = 13% (147794690048 bytes), duration 58796 sec)
progress 22% (read 1157493686272 bytes, zeroes = 12% (148436418560 bytes), duration 59603 sec)
progress 23% (read 1210107035648 bytes, zeroes = 12% (150822977536 bytes), duration 63129 sec)
progress 24% (read 1262720385024 bytes, zeroes = 12% (153092096000 bytes), duration 64006 sec)
progress 25% (read 1315333734400 bytes, zeroes = 11% (156275572736 bytes), duration 65230 sec)
progress 26% (read 1367947083776 bytes, zeroes = 11% (159098339328 bytes), duration 67722 sec)
progress 27% (read 1420560433152 bytes, zeroes = 11% (161384235008 bytes), duration 69574 sec)
progress 28% (read 1473173782528 bytes, zeroes = 11% (176445980672 bytes), duration 71945 sec)
progress 29% (read 1525787131904 bytes, zeroes = 11% (179193249792 bytes), duration 73847 sec)
progress 30% (read 1578400481280 bytes, zeroes = 11% (181584003072 bytes), duration 76142 sec)
progress 31% (read 1631013830656 bytes, zeroes = 11% (183534354432 bytes), duration 80011 sec)
progress 32% (read 1683627180032 bytes, zeroes = 11% (185530843136 bytes), duration 84100 sec)
progress 33% (read 1736240529408 bytes, zeroes = 10% (189481877504 bytes), duration 86941 sec)
progress 34% (read 1788853878784 bytes, zeroes = 10% (191461588992 bytes), duration 88954 sec)
progress 35% (read 1841467228160 bytes, zeroes = 10% (193755873280 bytes), duration 90978 sec)
progress 36% (read 1894080577536 bytes, zeroes = 10% (196100489216 bytes), duration 92523 sec)
progress 37% (read 1946693926912 bytes, zeroes = 10% (199070056448 bytes), duration 93813 sec)
progress 38% (read 1999307276288 bytes, zeroes = 10% (201754411008 bytes), duration 96730 sec)
progress 39% (read 2051920625664 bytes, zeroes = 9% (204979830784 bytes), duration 97553 sec)
progress 40% (read 2104533975040 bytes, zeroes = 9% (208125558784 bytes), duration 100311 sec)
progress 41% (read 2157147324416 bytes, zeroes = 9% (211380338688 bytes), duration 104299 sec)
progress 42% (read 2209760673792 bytes, zeroes = 9% (214194716672 bytes), duration 107817 sec)
progress 43% (read 2262374023168 bytes, zeroes = 9% (217441107968 bytes), duration 111243 sec)
progress 44% (read 2314987372544 bytes, zeroes = 9% (220561670144 bytes), duration 114879 sec)
progress 45% (read 2367600721920 bytes, zeroes = 9% (223736758272 bytes), duration 118370 sec)
progress 46% (read 2420214071296 bytes, zeroes = 9% (238865612800 bytes), duration 121386 sec)
progress 47% (read 2472827420672 bytes, zeroes = 9% (239608004608 bytes), duration 125888 sec)
progress 48% (read 2525440770048 bytes, zeroes = 9% (240232955904 bytes), duration 128865 sec)
progress 49% (read 2578054119424 bytes, zeroes = 9% (240950181888 bytes), duration 132859 sec)
progress 50% (read 2630667468800 bytes, zeroes = 9% (241659019264 bytes), duration 136735 sec)
progress 51% (read 2683280818176 bytes, zeroes = 9% (242393022464 bytes), duration 140098 sec)
progress 52% (read 2735894167552 bytes, zeroes = 8% (243097665536 bytes), duration 143599 sec)
progress 53% (read 2788507516928 bytes, zeroes = 8% (243852640256 bytes), duration 147514 sec)
progress 54% (read 2841120866304 bytes, zeroes = 8% (244603420672 bytes), duration 151526 sec)
progress 55% (read 2893734215680 bytes, zeroes = 8% (247401021440 bytes), duration 154359 sec)
progress 56% (read 2946347565056 bytes, zeroes = 8% (253520510976 bytes), duration 157587 sec)
progress 57% (read 2998960914432 bytes, zeroes = 8% (257014366208 bytes), duration 160463 sec)
progress 58% (read 3051574263808 bytes, zeroes = 8% (263821721600 bytes), duration 164387 sec)
progress 59% (read 3104187613184 bytes, zeroes = 8% (275007930368 bytes), duration 166188 sec)
progress 60% (read 3156800962560 bytes, zeroes = 8% (282532511744 bytes), duration 168921 sec)
progress 61% (read 3209414311936 bytes, zeroes = 8% (285808263168 bytes), duration 172404 sec)
progress 62% (read 3262027661312 bytes, zeroes = 9% (295463550976 bytes), duration 175442 sec)
progress 63% (read 3314641010688 bytes, zeroes = 9% (299225841664 bytes), duration 178843 sec)
progress 64% (read 3367254360064 bytes, zeroes = 9% (303298510848 bytes), duration 181492 sec)
progress 65% (read 3419867709440 bytes, zeroes = 9% (309413806080 bytes), duration 183761 sec)
progress 66% (read 3472481058816 bytes, zeroes = 9% (313721356288 bytes), duration 186570 sec)
progress 67% (read 3525094408192 bytes, zeroes = 9% (322009300992 bytes), duration 188577 sec)
progress 68% (read 3577707757568 bytes, zeroes = 9% (339721846784 bytes), duration 190530 sec)
progress 69% (read 3630321106944 bytes, zeroes = 9% (353542078464 bytes), duration 193180 sec)
progress 70% (read 3682934456320 bytes, zeroes = 9% (365839777792 bytes), duration 195739 sec)
progress 71% (read 3735547805696 bytes, zeroes = 10% (384856752128 bytes), duration 197770 sec)
progress 72% (read 3788161155072 bytes, zeroes = 10% (398068809728 bytes), duration 200306 sec)
progress 73% (read 3840774504448 bytes, zeroes = 10% (404884553728 bytes), duration 203763 sec)
progress 74% (read 3893387853824 bytes, zeroes = 10% (412329443328 bytes), duration 206572 sec)
progress 75% (read 3946001203200 bytes, zeroes = 10% (422349635584 bytes), duration 209201 sec)
progress 76% (read 3998614552576 bytes, zeroes = 10% (433758142464 bytes), duration 212358 sec)
progress 77% (read 4051227901952 bytes, zeroes = 11% (447926501376 bytes), duration 215350 sec)
progress 78% (read 4103841251328 bytes, zeroes = 11% (474530971648 bytes), duration 217624 sec)
progress 79% (read 4156454600704 bytes, zeroes = 11% (485855592448 bytes), duration 220896 sec)
progress 80% (read 4209067950080 bytes, zeroes = 11% (500602765312 bytes), duration 223407 sec)
progress 81% (read 4261681299456 bytes, zeroes = 12% (516142661632 bytes), duration 226218 sec)
progress 82% (read 4314294648832 bytes, zeroes = 12% (526343208960 bytes), duration 230093 sec)
progress 83% (read 4366907998208 bytes, zeroes = 12% (530185191424 bytes), duration 235220 sec)
progress 84% (read 4419521347584 bytes, zeroes = 12% (534866034688 bytes), duration 240478 sec)
progress 85% (read 4472134696960 bytes, zeroes = 12% (538838040576 bytes), duration 244083 sec)
progress 86% (read 4524748046336 bytes, zeroes = 12% (546358427648 bytes), duration 247343 sec)
progress 87% (read 4577361395712 bytes, zeroes = 12% (564763033600 bytes), duration 249402 sec)
progress 88% (read 4629974745088 bytes, zeroes = 12% (585659056128 bytes), duration 251665 sec)
progress 89% (read 4682588094464 bytes, zeroes = 13% (614406815744 bytes), duration 253647 sec)
progress 90% (read 4735201443840 bytes, zeroes = 13% (660363804672 bytes), duration 254261 sec)
progress 91% (read 4787814793216 bytes, zeroes = 14% (691946913792 bytes), duration 256196 sec)
progress 92% (read 4840428142592 bytes, zeroes = 14% (723563577344 bytes), duration 258117 sec)
progress 93% (read 4893041491968 bytes, zeroes = 15% (754521735168 bytes), duration 260006 sec)
progress 94% (read 4945654841344 bytes, zeroes = 16% (801766375424 bytes), duration 260359 sec)
progress 95% (read 4998268190720 bytes, zeroes = 16% (832628064256 bytes), duration 262180 sec)
progress 96% (read 5050881540096 bytes, zeroes = 16% (857370263552 bytes), duration 264222 sec)
progress 97% (read 5103494889472 bytes, zeroes = 17% (909983612928 bytes), duration 264222 sec)
progress 98% (read 5156108238848 bytes, zeroes = 18% (962596962304 bytes), duration 264222 sec)
progress 99% (read 5208721588224 bytes, zeroes = 19% (1015210311680 bytes), duration 264222 sec)
progress 100% (read 5261334937600 bytes, zeroes = 20% (1067823661056 bytes), duration 264222 sec)
restore image complete (bytes=5261334937600, duration=264222.86s, speed=18.99MB/s)
rescan volumes...
Execute autostart
TASK OK

TIA
Fathi B.N.
 
It took me more than 73 hours to restore a 3.7 TB VM over a gigabyte LAN.
Gigabit?
restore image complete (bytes=5261334937600, duration=264222.86s, speed=18.99MB/s)
You did not tell us what kind of storage that data is on - on the PBS side.

If that is a single classic hard disk, or several of them organized in a classic RAID5/RAID6, I would call it "not good..., but normal".

Remember: PBS handles all backups by splitting them into chunks of 4 MiB. To restore one of these chunks, a metadata read has to be performed, the metadata possibly has to be updated by a write (to refresh the "access time"), and then the actual data has to be read. So fetching a single chunk may require several head movements.

That's why SSDs are recommended...
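To put rough numbers on that (a back-of-the-envelope sketch based on the log above; the ~150 MB/s sequential figure is only an assumption for an IronWolf-class disk):

# The restored image is exactly 1254400 chunks of 4 MiB:
echo $(( 5261334937600 / (4 * 1024 * 1024) ))   # -> 1254400
# Observed pace: one chunk roughly every 0.21 seconds...
echo "scale=3; 264222 / 1254400" | bc           # -> ~0.210
# ...whereas reading 4 MiB sequentially at ~150 MB/s takes ~0.028 s.
# Most of each ~0.21 s is therefore metadata lookups and head movements
# (the seek penalty), not raw disk bandwidth.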
 
Yes
You did not tell us what kind of storage that data is on - on the PBS side.
A single ZFS data disk on each side (the OS, whether PVE or PBS, is on its own hard drive).
If that is a single classic hard disk [...] That's why SSDs are recommended...
I can't afford SSD prices for now, especially large ones, and combining several of them to obtain a large amount of space requires special motherboards, not commodity ones. For now, as you may have understood, I am still using commodity hardware and trying to get the most out of it.
 
I can't afford SSD prices for now
For backup... that's normal ;-)

My PBS also uses rotating rust. To get (for me) acceptable behavior I run a zpool with multiple mirrors (like RAID10, but on ZFS) plus a very important "special device": two small SSDs that store the metadata (see the sketch below). This speeds things up a lot, at least in my scenarios. But restoring from a spindle is slow by design...

You cannot have cheap and fast (and reliable) at the same time...
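Not a recipe, just a sketch of the layout described above (pool and device names are placeholders; in practice use stable /dev/disk/by-id paths):

zpool create backup \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  special mirror /dev/sde /dev/sdf
# The "special" vdev stores the pool's metadata on the SSDs. It is
# pool-critical - lose it and the whole pool is gone - hence the mirror.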
 
hey,

like @UdoB and @Fathi said, your "poor" performance is normal.
Virtualization/backup performance depends directly on the hardware. For restoring 3.4 TB, getting about 1/4 of your disk's theoretical maximum IO is expected given the chunked backup store.

The real question to ask here is: am I using the best way to back up this 3.xx TB VM? If this VM hosts a NAS service, maybe disable backup on the data disk and save it with another method (like a file-level proxmox-backup-client backup, not the whole VM) so you can do the surgical restores you may need. A hedged example follows below.
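For reference, a file-level backup of the share data with proxmox-backup-client might look like this (repository, datastore, and path are placeholders):

proxmox-backup-client backup shares.pxar:/srv/shares \
  --repository backupuser@pbs@pbs-host:datastore1
# Restores can then target individual files or directories instead of
# pulling back a whole multi-TB disk image.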
 
That's why SSDs are recommended... [...] restoring from a spindle is slow by design...

I feel like the (software) design is just as big a problem as the actual hardware limitations of HDDs these days. I don't see why even a single decent HDD shouldn't be able to saturate a 1G link reading 4 MiB chunks, provided there isn't much else going on. But software is written by people who've apparently never heard of a seek penalty these days, so here we are.
Turning off atime, prefetching chunks (asynchronously), and adding a bit of buffering to the network transfer should get you most of the way there.
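The atime part is a one-liner on ZFS (dataset name is a placeholder). Caveat: PBS garbage collection relies on access-time updates to decide which chunks are still referenced, so check the implications before changing this on a live datastore:

# Skip the per-read access-time write-back entirely:
zfs set atime=off tank/pbs-datastore
# Or the common middle ground - atime is only rewritten when it is
# older than 24 hours:
zfs set atime=on tank/pbs-datastore
zfs set relatime=on tank/pbs-datastore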
 
A single ZFS data disk on each side [...] combining several of them to obtain a large amount of space requires special motherboards, not commodity ones.
ZFS does not require "special motherboards"; all you need is the appropriate number of disks for your RAID configuration. That is the beauty of ZFS compared to RAID controllers: ZFS is handled by the operating system, not by firmware.
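For example, a redundant pool on plain onboard SATA ports is a single command (pool name and disk paths are placeholders):

zpool create -o ashift=12 backup raidz1 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4
# All the RAID logic lives in the OS; no RAID controller or special
# motherboard involved.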
 
am I using the best way to back up this 3.xx TB VM?
Yes, it is the best. But "best" means different things for different people. I am talking about rotating rust with ZFS:
  • it is cheap
  • it is space saving - de-duplication is like magic ;-)
  • it is reliable - ZFS has many features to make sure you actually get the same data back when you restore it in the distant future. (This is also true for a single-disk pool - if you get the data back, you can be sure it is unaltered. If data is damaged it cannot be repaired, though - that would require redundancy, which a single device usually does not have.)
You get the above by trading in:
  • it is slow by design
Of course you can simply drop PBS and do a classic vzdump onto a backup disk. That way you get the maximum speed of the hardware... while losing the features mentioned above (example below).

Choose your poison :)
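For illustration, that vzdump route could look like this (VM ID and target path are placeholders):

vzdump 100 --mode snapshot --compress zstd --dumpdir /mnt/backupdisk
# One big sequential archive: fast to write and to restore from a
# spindle, but no deduplication and no incremental dirty-bitmap backups.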
 
Yes, it is the best. But "best" means different things for different people. [...] Choose your poison :)
It also depends on the type of restore he needs. If three working days is too slow for him, BackupPC or any similar product can maybe give a better result.
^^
 
The real question to ask here is: am I using the best way to back up this 3.xx TB VM? If this VM hosts a NAS service, maybe disable backup on the data disk and save it with another method [...]
In fact, this VM contains daily synced copies of NAS shares. It used to be a container, but backups were slow, so I made it a VM to take advantage of dirty bitmaps: when doing backups, the PBS client doesn't need to scan the whole disk to detect changed and unchanged files/blocks.
I'm also considering TrueNAS => TrueNAS replication or synchronization. I have to benchmark this and then decide which to retain as the backup / disaster-recovery solution.
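If the share data already lives on ZFS, that replication can be plain incremental send/receive (dataset, snapshot, and host names are placeholders):

zfs snapshot tank/shares@daily-2024-09-11
zfs send -i tank/shares@daily-2024-09-10 tank/shares@daily-2024-09-11 \
  | ssh backup-host zfs recv -F tank/shares
# Only blocks changed since the previous snapshot cross the wire, and a
# restore is just replication in the other direction.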
 
I am talking about rotating rust with ZFS:

But ZFS isn't slow on spinning rust. It was very much designed with spinning rust in mind, and optimised for it as much as possible. Slower on a single disk than something without the data integrity guarantees, the CoW features, etc., like XFS, sure. In a multi-disk setup it flies, especially if you throw in a little flash for a ZIL, L2ARC, and/or special device.
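Retrofitting a bit of flash onto an existing HDD pool is likewise simple (pool and partition names are placeholders; how much it helps depends on the workload):

zpool add tank log /dev/nvme0n1p1     # SLOG: absorbs synchronous writes
zpool add tank cache /dev/nvme0n1p2   # L2ARC: second-level read cache
# A metadata "special" vdev (see the sketch earlier in the thread) can
# also be added later, but it only benefits newly written metadata.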
 
