Backups getting slower, now extremely slow

Manny Vazquez

Hi,

I have a USB drive attached to one of the nodes, dedicated to backing up a few VMs (on the same host, so no network involved, just a plain USB connection).

It was taking over two and a half hours to back up 46 GB, so I changed the backup target to the local hard drives, and it still takes a long time.
[screenshot: upload_2019-3-1_8-56-57.png]
Actually, there is no difference compared to when it was going to the USB. If anything, it takes a little longer.
[screenshot: upload_2019-3-1_8-58-31.png]

The other backups I run, to either local or USB, perform fine (in terms of size versus time).
[screenshot: upload_2019-3-1_9-0-7.png]

I have also noticed that the VMs that take a long time to back up (like VM 6000) are also running quite slowly.

My questions are:
1: What impacts the speed of the backup if the hard drives (USB or local) are not otherwise utilized? In my case the job runs at midnight and ALL other backup jobs are stopped for testing.
2: Does the performance of the VM itself have anything to do with the speed of the backup? If so, how can I troubleshoot why the VM is slow? Nothing has changed, no new software, nothing new; it just happened.

I am posting the detailed log of the last 'slow' backup in case someone can spot something in it.
This is the backup from last night, which took 2:33 for 46.62 GB.
=================================================================================
Detailed backup logs:

vzdump 6000 --quiet 1 --mailto mvazquez@xxxxxxx.com --compress lzo --storage local --mode snapshot --mailnotification always


6000: 2019-02-28 00:00:03 INFO: Starting Backup of VM 6000 (qemu)
6000: 2019-02-28 00:00:03 INFO: status = running
6000: 2019-02-28 00:00:04 INFO: file /etc/pve/replication.cfg line 4 (section '1001-1') - unable to parse value of 'source': unexpected property 'source'
6000: 2019-02-28 00:00:04 INFO: file /etc/pve/replication.cfg line 11 (section '9035-0') - unable to parse value of 'source': unexpected property 'source'
6000: 2019-02-28 00:00:04 INFO: file /etc/pve/replication.cfg line 15 (section '98000-0') - unable to parse value of 'source': unexpected property 'source'
6000: 2019-02-28 00:00:04 INFO: file /etc/pve/replication.cfg line 19 (section '99401-0') - unable to parse value of 'source': unexpected property 'source'
6000: 2019-02-28 00:00:04 INFO: file /etc/pve/replication.cfg line 23 (section '999402-0') - unable to parse value of 'source': unexpected property 'source'
6000: 2019-02-28 00:00:04 INFO: update VM 6000: -lock backup
6000: 2019-02-28 00:00:04 INFO: VM Name: w2012DocuPhase
6000: 2019-02-28 00:00:04 INFO: include disk 'sata0' 'local-zfs:vm-6000-disk-1' 80G
6000: 2019-02-28 00:00:04 INFO: include disk 'sata1' 'local-zfs:vm-6000-disk-2' 500G
6000: 2019-02-28 00:00:04 INFO: backup mode: snapshot
6000: 2019-02-28 00:00:04 INFO: ionice priority: 7
6000: 2019-02-28 00:00:04 INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-6000-2019_02_28-00_00_03.vma.lzo'
6000: 2019-02-28 01:00:04 ERROR: VM 6000 qmp command 'guest-fsfreeze-freeze' failed - got timeout
6000: 2019-02-28 01:00:14 ERROR: VM 6000 qmp command 'guest-fsfreeze-thaw' failed - got timeout
6000: 2019-02-28 01:00:14 INFO: started backup task 'fdf6aa69-ddd5-4a01-9fee-a533ff8c6fc9'
6000: 2019-02-28 01:00:18 INFO: status: 0% (171048960/622770257920), sparse 0% (49229824), duration 4, read/write 42/30 MB/s
6000: 2019-02-28 01:10:39 INFO: status: 1% (6241386496/622770257920), sparse 0% (185667584), duration 625, read/write 9/9 MB/s
6000: 2019-02-28 01:21:43 INFO: status: 2% (12475170816/622770257920), sparse 0% (468197376), duration 1289, read/write 9/8 MB/s
6000: 2019-02-28 01:32:20 INFO: status: 3% (18693750784/622770257920), sparse 0% (565837824), duration 1926, read/write 9/9 MB/s
6000: 2019-02-28 01:39:44 INFO: status: 4% (24919932928/622770257920), sparse 0% (726462464), duration 2370, read/write 14/13 MB/s
6000: 2019-02-28 01:48:11 INFO: status: 5% (31157518336/622770257920), sparse 0% (771772416), duration 2877, read/write 12/12 MB/s
6000: 2019-02-28 01:55:51 INFO: status: 6% (37395103744/622770257920), sparse 0% (794177536), duration 3337, read/write 13/13 MB/s
6000: 2019-02-28 02:02:08 INFO: status: 7% (43606081536/622770257920), sparse 0% (2925768704), duration 3714, read/write 16/10 MB/s
6000: 2019-02-28 02:07:05 INFO: status: 8% (49828462592/622770257920), sparse 0% (6079778816), duration 4011, read/write 20/10 MB/s
6000: 2019-02-28 02:12:39 INFO: status: 9% (56073650176/622770257920), sparse 1% (9208709120), duration 4345, read/write 18/9 MB/s
6000: 2019-02-28 02:17:33 INFO: status: 10% (62303633408/622770257920), sparse 1% (12329951232), duration 4639, read/write 21/10 MB/s
6000: 2019-02-28 02:19:51 INFO: status: 11% (68518412288/622770257920), sparse 2% (15431639040), duration 4777, read/write 45/22 MB/s
6000: 2019-02-28 02:22:25 INFO: status: 12% (74782605312/622770257920), sparse 2% (18547863552), duration 4931, read/write 40/20 MB/s
6000: 2019-02-28 02:26:13 INFO: status: 13% (80966975488/622770257920), sparse 3% (21642326016), duration 5159, read/write 27/13 MB/s
6000: 2019-02-28 02:31:16 INFO: status: 14% (87238770688/622770257920), sparse 3% (24802742272), duration 5462, read/write 20/10 MB/s
6000: 2019-02-28 02:33:50 INFO: status: 15% (93565091840/622770257920), sparse 4% (28831207424), duration 5616, read/write 41/14 MB/s
6000: 2019-02-28 02:33:58 INFO: status: 16% (99906486272/622770257920), sparse 5% (35172601856), duration 5624, read/write 792/0 MB/s
6000: 2019-02-28 02:34:06 INFO: status: 17% (106389045248/622770257920), sparse 6% (41655160832), duration 5632, read/write 810/0 MB/s
6000: 2019-02-28 02:34:13 INFO: status: 18% (112608018432/622770257920), sparse 7% (47874125824), duration 5639, read/write 888/0 MB/s
6000: 2019-02-28 02:34:22 INFO: status: 19% (118684254208/622770257920), sparse 8% (53950361600), duration 5648, read/write 675/0 MB/s
6000: 2019-02-28 02:34:32 INFO: status: 20% (124720513024/622770257920), sparse 9% (59986620416), duration 5658, read/write 603/0 MB/s
6000: 2019-02-28 02:34:42 INFO: status: 21% (131303866368/622770257920), sparse 10% (66569973760), duration 5668, read/write 658/0 MB/s
6000: 2019-02-28 02:34:51 INFO: status: 22% (137300410368/622770257920), sparse 11% (72566517760), duration 5677, read/write 666/0 MB/s
6000: 2019-02-28 02:35:01 INFO: status: 23% (143837364224/622770257920), sparse 12% (79103471616), duration 5687, read/write 653/0 MB/s
6000: 2019-02-28 02:35:11 INFO: status: 24% (149618032640/622770257920), sparse 13% (84884140032), duration 5697, read/write 578/0 MB/s
6000: 2019-02-28 02:35:20 INFO: status: 25% (156077916160/622770257920), sparse 14% (91344023552), duration 5706, read/write 717/0 MB/s
6000: 2019-02-28 02:35:30 INFO: status: 26% (162311962624/622770257920), sparse 15% (97578070016), duration 5716, read/write 623/0 MB/s
6000: 2019-02-28 02:35:41 INFO: status: 27% (168624783360/622770257920), sparse 16% (103890890752), duration 5727, read/write 573/0 MB/s
6000: 2019-02-28 02:35:50 INFO: status: 28% (174584692736/622770257920), sparse 17% (109850796032), duration 5736, read/write 662/0 MB/s
6000: 2019-02-28 02:36:00 INFO: status: 29% (180736229376/622770257920), sparse 18% (116002332672), duration 5746, read/write 615/0 MB/s
6000: 2019-02-28 02:36:11 INFO: status: 30% (187030634496/622770257920), sparse 19% (122296737792), duration 5757, read/write 572/0 MB/s
6000: 2019-02-28 02:36:21 INFO: status: 31% (193775665152/622770257920), sparse 20% (129041768448), duration 5767, read/write 674/0 MB/s
6000: 2019-02-28 02:36:32 INFO: status: 32% (199822409728/622770257920), sparse 21% (135088513024), duration 5778, read/write 549/0 MB/s
6000: 2019-02-28 02:36:41 INFO: status: 33% (205642006528/622770257920), sparse 22% (140908109824), duration 5787, read/write 646/0 MB/s
6000: 2019-02-28 02:36:52 INFO: status: 34% (211755466752/622770257920), sparse 23% (147021570048), duration 5798, read/write 555/0 MB/s
6000: 2019-02-28 02:37:02 INFO: status: 35% (218664927232/622770257920), sparse 24% (153931030528), duration 5808, read/write 690/0 MB/s
6000: 2019-02-28 02:37:11 INFO: status: 36% (224393166848/622770257920), sparse 25% (159659270144), duration 5817, read/write 636/0 MB/s
6000: 2019-02-28 02:37:21 INFO: status: 37% (230927499264/622770257920), sparse 26% (166193602560), duration 5827, read/write 653/0 MB/s
6000: 2019-02-28 02:37:29 INFO: status: 38% (236880461824/622770257920), sparse 27% (172146565120), duration 5835, read/write 744/0 MB/s
6000: 2019-02-28 02:37:40 INFO: status: 39% (243509755904/622770257920), sparse 28% (178775859200), duration 5846, read/write 602/0 MB/s
6000: 2019-02-28 02:37:50 INFO: status: 40% (249842499584/622770257920), sparse 29% (185108602880), duration 5856, read/write 633/0 MB/s
6000: 2019-02-28 02:37:57 INFO: status: 41% (255642107904/622770257920), sparse 30% (190908211200), duration 5863, read/write 828/0 MB/s
6000: 2019-02-28 02:38:08 INFO: status: 42% (262000869376/622770257920), sparse 31% (197266972672), duration 5874, read/write 578/0 MB/s
6000: 2019-02-28 02:38:17 INFO: status: 43% (268547194880/622770257920), sparse 32% (203813298176), duration 5883, read/write 727/0 MB/s
6000: 2019-02-28 02:38:24 INFO: status: 44% (274864799744/622770257920), sparse 33% (210130903040), duration 5890, read/write 902/0 MB/s
6000: 2019-02-28 02:38:32 INFO: status: 45% (280971116544/622770257920), sparse 34% (216237219840), duration 5898, read/write 763/0 MB/s
6000: 2019-02-28 02:38:40 INFO: status: 46% (286995382272/622770257920), sparse 35% (222261485568), duration 5906, read/write 753/0 MB/s
6000: 2019-02-28 02:38:48 INFO: status: 47% (293382127616/622770257920), sparse 36% (228648230912), duration 5914, read/write 798/0 MB/s
6000: 2019-02-28 02:38:55 INFO: status: 48% (299619713024/622770257920), sparse 37% (234885816320), duration 5921, read/write 891/0 MB/s
6000: 2019-02-28 02:39:03 INFO: status: 49% (305865359360/622770257920), sparse 38% (241131462656), duration 5929, read/write 780/0 MB/s
6000: 2019-02-28 02:39:12 INFO: status: 50% (311604674560/622770257920), sparse 39% (246870777856), duration 5938, read/write 637/0 MB/s
6000: 2019-02-28 02:39:24 INFO: status: 51% (318221320192/622770257920), sparse 40% (253487423488), duration 5950, read/write 551/0 MB/s
6000: 2019-02-28 02:39:33 INFO: status: 52% (324376068096/622770257920), sparse 41% (259642171392), duration 5959, read/write 683/0 MB/s
6000: 2019-02-28 02:39:40 INFO: status: 53% (330400792576/622770257920), sparse 42% (265666895872), duration 5966, read/write 860/0 MB/s
6000: 2019-02-28 02:39:49 INFO: status: 54% (336519888896/622770257920), sparse 43% (271785992192), duration 5975, read/write 679/0 MB/s
6000: 2019-02-28 02:39:57 INFO: status: 55% (342970662912/622770257920), sparse 44% (278236766208), duration 5983, read/write 806/0 MB/s
6000: 2019-02-28 02:40:03 INFO: status: 56% (348984049664/622770257920), sparse 45% (284250152960), duration 5989, read/write 1002/0 MB/s
6000: 2019-02-28 02:40:12 INFO: status: 57% (355800055808/622770257920), sparse 46% (291066159104), duration 5998, read/write 757/0 MB/s
6000: 2019-02-28 02:40:19 INFO: status: 58% (361216999424/622770257920), sparse 47% (296483102720), duration 6005, read/write 773/0 MB/s
6000: 2019-02-28 02:40:29 INFO: status: 59% (367500525568/622770257920), sparse 48% (302766628864), duration 6015, read/write 628/0 MB/s
6000: 2019-02-28 02:40:37 INFO: status: 60% (373835694080/622770257920), sparse 49% (309101797376), duration 6023, read/write 791/0 MB/s
6000: 2019-02-28 02:40:47 INFO: status: 61% (380248391680/622770257920), sparse 50% (315514494976), duration 6033, read/write 641/0 MB/s
6000: 2019-02-28 02:40:56 INFO: status: 62% (386613772288/622770257920), sparse 51% (321879875584), duration 6042, read/write 707/0 MB/s
6000: 2019-02-28 02:41:04 INFO: status: 63% (393072738304/622770257920), sparse 52% (328338841600), duration 6050, read/write 807/0 MB/s
6000: 2019-02-28 02:41:11 INFO: status: 64% (398657257472/622770257920), sparse 53% (333923360768), duration 6057, read/write 797/0 MB/s
6000: 2019-02-28 02:41:20 INFO: status: 65% (405237727232/622770257920), sparse 54% (340503830528), duration 6066, read/write 731/0 MB/s
6000: 2019-02-28 02:41:29 INFO: status: 66% (411244756992/622770257920), sparse 55% (346510860288), duration 6075, read/write 667/0 MB/s
6000: 2019-02-28 02:41:38 INFO: status: 67% (417780596736/622770257920), sparse 56% (353046700032), duration 6084, read/write 726/0 MB/s
6000: 2019-02-28 02:41:45 INFO: status: 68% (424056979456/622770257920), sparse 57% (359323082752), duration 6091, read/write 896/0 MB/s
6000: 2019-02-28 02:41:53 INFO: status: 69% (430218412032/622770257920), sparse 58% (365484515328), duration 6099, read/write 770/0 MB/s
6000: 2019-02-28 02:42:02 INFO: status: 70% (436334034944/622770257920), sparse 59% (371600138240), duration 6108, read/write 679/0 MB/s
6000: 2019-02-28 02:42:10 INFO: status: 71% (442243416064/622770257920), sparse 60% (377509519360), duration 6116, read/write 738/0 MB/s
6000: 2019-02-28 02:42:20 INFO: status: 72% (449058308096/622770257920), sparse 61% (384324411392), duration 6126, read/write 681/0 MB/s
6000: 2019-02-28 02:42:30 INFO: status: 73% (455324139520/622770257920), sparse 62% (390590242816), duration 6136, read/write 626/0 MB/s
6000: 2019-02-28 02:42:37 INFO: status: 74% (461016268800/622770257920), sparse 63% (396282372096), duration 6143, read/write 813/0 MB/s
6000: 2019-02-28 02:42:46 INFO: status: 75% (467120029696/622770257920), sparse 64% (402386132992), duration 6152, read/write 678/0 MB/s
6000: 2019-02-28 02:42:54 INFO: status: 76% (473469616128/622770257920), sparse 65% (408735719424), duration 6160, read/write 793/0 MB/s
6000: 2019-02-28 02:43:01 INFO: status: 77% (479824707584/622770257920), sparse 66% (415090810880), duration 6167, read/write 907/0 MB/s
6000: 2019-02-28 02:43:10 INFO: status: 78% (486193758208/622770257920), sparse 67% (421459861504), duration 6176, read/write 707/0 MB/s
6000: 2019-02-28 02:43:19 INFO: status: 79% (492270649344/622770257920), sparse 68% (427536752640), duration 6185, read/write 675/0 MB/s
6000: 2019-02-28 02:43:27 INFO: status: 80% (498542575616/622770257920), sparse 69% (433808678912), duration 6193, read/write 783/0 MB/s
6000: 2019-02-28 02:43:36 INFO: status: 81% (504597512192/622770257920), sparse 70% (439863615488), duration 6202, read/write 672/0 MB/s
6000: 2019-02-28 02:43:45 INFO: status: 82% (511249350656/622770257920), sparse 71% (446515453952), duration 6211, read/write 739/0 MB/s
6000: 2019-02-28 02:43:54 INFO: status: 83% (517192351744/622770257920), sparse 72% (452458455040), duration 6220, read/write 660/0 MB/s
6000: 2019-02-28 02:44:04 INFO: status: 84% (523888885760/622770257920), sparse 73% (459154989056), duration 6230, read/write 669/0 MB/s
6000: 2019-02-28 02:44:10 INFO: status: 85% (529680760832/622770257920), sparse 74% (464946864128), duration 6236, read/write 965/0 MB/s
6000: 2019-02-28 02:44:18 INFO: status: 86% (536231149568/622770257920), sparse 75% (471497252864), duration 6244, read/write 818/0 MB/s
6000: 2019-02-28 02:44:27 INFO: status: 87% (542064574464/622770257920), sparse 76% (477330677760), duration 6253, read/write 648/0 MB/s
6000: 2019-02-28 02:44:37 INFO: status: 88% (548319789056/622770257920), sparse 77% (483585892352), duration 6263, read/write 625/0 MB/s
6000: 2019-02-28 02:44:45 INFO: status: 89% (554933223424/622770257920), sparse 78% (490199326720), duration 6271, read/write 826/0 MB/s
6000: 2019-02-28 02:44:53 INFO: status: 90% (560650911744/622770257920), sparse 79% (495917015040), duration 6279, read/write 714/0 MB/s
6000: 2019-02-28 02:45:02 INFO: status: 91% (567427858432/622770257920), sparse 80% (502693961728), duration 6288, read/write 752/0 MB/s
6000: 2019-02-28 02:45:09 INFO: status: 92% (573400678400/622770257920), sparse 81% (508666781696), duration 6295, read/write 853/0 MB/s
6000: 2019-02-28 02:45:17 INFO: status: 93% (579725557760/622770257920), sparse 82% (514991661056), duration 6303, read/write 790/0 MB/s
6000: 2019-02-28 02:45:25 INFO: status: 94% (585505439744/622770257920), sparse 83% (520771543040), duration 6311, read/write 722/0 MB/s
6000: 2019-02-28 02:45:35 INFO: status: 95% (591827894272/622770257920), sparse 84% (527093997568), duration 6321, read/write 632/0 MB/s
6000: 2019-02-28 02:45:43 INFO: status: 96% (598421471232/622770257920), sparse 85% (533687574528), duration 6329, read/write 824/0 MB/s
6000: 2019-02-28 02:45:52 INFO: status: 97% (604676620288/622770257920), sparse 86% (539942723584), duration 6338, read/write 695/0 MB/s
6000: 2019-02-28 02:46:00 INFO: status: 98% (610330673152/622770257920), sparse 87% (545596776448), duration 6346, read/write 706/0 MB/s
6000: 2019-02-28 02:46:07 INFO: status: 99% (616750907392/622770257920), sparse 88% (552017010688), duration 6353, read/write 917/0 MB/s
6000: 2019-02-28 02:46:17 INFO: status: 100% (622770257920/622770257920), sparse 89% (558036357120), duration 6363, read/write 601/0 MB/s
6000: 2019-02-28 02:46:17 INFO: transferred 622770 MB in 6363 seconds (97 MB/s)
6000: 2019-02-28 02:46:17 INFO: archive file size: 46.60GB
6000: 2019-02-28 02:46:19 INFO: Finished Backup of VM 6000 (02:46:16)
 
Hi,

doing a quick calculation, your big "slow" backups run at the same speed as your small "fast" backups. Just divide the amount of data by the amount of time; the result is around 0.5 GB per minute in each case. None run faster and none run slower.

Answers:
1. The speed of the source as well as the destination disks is what limits your backup speed. It seems your storage is too slow for your expectations.
2. You can measure the performance using fio and pveperf. Also stop all services except the one you are benchmarking. If you need to compare your results to others, share them here. Look at my other posts for example fio tests; a rough sketch of such a test is below.
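For illustration only, a quick benchmark on the host could look like this (paths, file size, and runtime are just example values; fio is not installed by default):

Code:
# install fio if needed: apt install fio
# random-write test against the directory used as backup target
fio --name=randwrite --filename=/var/lib/vz/fio-test.bin --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting
rm /var/lib/vz/fio-test.bin

# Proxmox's own quick benchmark (buffered reads, fsyncs/second) on the same path
pveperf /var/lib/vz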

I also noticed this in your logs:
"
6000: 2019-02-28 01:00:04 ERROR: VM 6000 qmp command 'guest-fsfreeze-freeze' failed - got timeout
6000: 2019-02-28 01:00:14 ERROR: VM 6000 qmp command 'guest-fsfreeze-thaw' failed - got timeout
"

This means either that you do not have the QEMU guest agent running, or it is not working, or the storage is too slow for it to respond in a timely manner.
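As a quick check (not from the original post, just a suggestion), you can verify from the host whether the agent option is enabled and whether the agent inside the guest responds at all; if the ping already times out outside of a backup window, the agent or the guest itself is the problem:

Code:
# is the guest agent option enabled in the VM config?
qm config 6000 | grep agent

# does the agent inside the guest answer? (should return without error)
qm agent 6000 ping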

Also note that your backups can take longer if you have more data (files) to back up than before, even if your workload remains the same. You can also take a look at the yearly IOWAIT graph to see how (un)responsive the storage is getting over time.
 
Hi,

You must understand that any file-level backup (an archive like lzo) depends a lot on how many files you have in your VM. With many files, most of the time is spent on search/open/close file operations. The host CPU can also influence the backup time.

As I see your VM is on ZFS, before starting your backup you can try running a recursive listing (ls -R) inside the VM, so the filesystem metadata will (mostly) be in the ARC; then, when the backup starts, this metadata will be read from RAM and not from disk.
 
Thanks.

How do I ls inside a Windows VM? :)

I have other backups that are way faster.
 

Attachments

  • SmartSelect_20190302-081713_Gmail.jpg (77.7 KB)
Google thinks this could work:

Code:
dir /s
It was rhetorical; in theory, anything you do inside the VM has no repercussions on a Proxmox-level activity, unless the filesystem is in constant change.

And for these VMs, do you have the same load/services as on the slow VM?
This is actually one with very little workload: 4 times a day, 15 minutes each. Last activity is around 5 pm, backups start at midnight, and the first activity of the day is around 9 am.
 
Well, I have created two backup jobs to compare: one with no compression, the other with LZO compression.

[screenshot: upload_2019-3-4_8-29-52.png]
[screenshot: upload_2019-3-4_8-30-29.png]
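For reference, the two jobs boil down to roughly these vzdump calls (a sketch based on the command shown in the log above; the actual jobs were created in the GUI):

Code:
# job 1: no compression
vzdump 6000 --storage local --mode snapshot --compress 0

# job 2: LZO compression (same as the nightly job)
vzdump 6000 --storage local --mode snapshot --compress lzo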

I am no expert in compression algorithms, but I do not think a saving of 15 GB is worth 1 hour and 30 minutes of extra time that makes other backups time out.

But other VMs do get backed up with compression in a timely manner. I am starting to think the problem is the host where 6000 and 6001 live at the moment. The ones below are on another host of the cluster, dumping onto a USB drive, with a bigger load (working 24/7 at an average CPU usage of 50%), and they still back up much faster.
[screenshot: upload_2019-3-4_8-34-56.png]
 
I wanted to share with everyone my workaround for this issue.
1.- Make sure your hard disks are defragmented. I know it sounds silly, but this was the single most important thing to make backups fast.
2.- Reboot the VM (if Windows) into safe mode, then reboot into normal mode. This makes the VM (for some reason) perform much faster, as if it were a new installation.
3.- If you are storing a large amount of files (like a shared file server), do a dir of the directory where the files are (once a week is good, daily is better). To me it makes no sense, but this makes the backup much faster. The commands I mean are sketched below.
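Roughly, steps 1 and 3 map to these Windows commands (the drive letter is just an example; run them as administrator):

Code:
rem step 1: defragment the data volume
defrag D: /U /V

rem step 3: recursively list the share directory to touch all file metadata
dir /s D:\ > nul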
 
Hi Manny

1.- Make sure your hard disks are defragmented. I know it sounds silly, but this was the single most important thing to make backups fast.
If you defrag a guest filesystem that is using ZFS as a backend, you will do a bad thing, because you will generate a lot of write operations.

3.- If you are storing a large amount of files (like a shared file server), do a dir of the directory where the files are (once a week is good, daily is better). To me it makes no sense, but this makes the backup much faster.

This could make sense if the VM is on ZFS. If you do it (a recursive dir), then the VM metadata will be in the ARC / L2ARC, or at least part of it will. When you start the backup, the metadata that was not evicted from the ARC / L2ARC will not be read from disk but from the ARC / L2ARC (so it will be a very fast read). For this reason you can set up a schedule that runs before the backup schedule, so you maximize the chance that most of the metadata is in the ARC / L2ARC; a sketch is shown below.
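A minimal sketch of such a schedule inside a Windows guest, assuming the backup starts at midnight (the task name and drive letter are made up for illustration):

Code:
rem run a recursive listing every evening, an hour before the backup window
schtasks /create /tn "WarmFsMetadata" /tr "cmd /c dir /s D:\ > nul" /sc daily /st 23:00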
 
Hi Manny


If you defrag a guest filesystem that is using ZFS as a backend, you will do a bad thing, because you will generate a lot of write operations.

A bad thing is only bad if it does not accomplish what it needs to accomplish; otherwise it is just normal work. Hard drives are there to be used, as RAM is; there is no benefit in having a system not use all the capabilities it has.

This could make sense if the VM is on ZFS. If you do it (a recursive dir), then the VM metadata will be in the ARC / L2ARC, or at least part of it will. When you start the backup, the metadata that was not evicted from the ARC / L2ARC will not be read from disk but from the ARC / L2ARC (so it will be a very fast read). For this reason you can set up a schedule that runs before the backup schedule, so you maximize the chance that most of the metadata is in the ARC / L2ARC.

Your point?

Maybe it is lost in translation, but your way of commenting is very confrontational; not appreciated.
 
A bad thing is only bad if it does not accomplish what it needs to accomplish; otherwise it is just normal work. Hard drives are there to be used, as RAM is; there is no benefit in having a system not use all the capabilities it has.

Sorry for my lack of detail. I do not know whether defrag will help or not, but what I am trying to say is that when you start a defrag task in the guest, a lot of write operations will happen and IOPS will be consumed by the defrag task. At the same time, since ZFS is copy-on-write, any write will consume more disk blocks from your pool for the same used disk space (from the guest's perspective the total disk space will be the same, but from the ZFS point of view it will be bigger). Maybe for a ZFS mirror it will be OK, but for a raidzX it will be worse. The ZFS snapshots will also use more space on disk, and the vzdump backup will also grow.

Maybe it is lost in translation, but your way of commenting is very confrontational; not appreciated.

Yes, my English is bad. In any case, I do not want to be confrontational. We are in a free world and anybody can say what they think (you included). I am truly sorry if I have been misunderstood in this post; it was not my intention.

Good luck
 
Sorry for my lack of detail. I do not know whether defrag will help or not, but what I am trying to say is that when you start a defrag task in the guest, a lot of write operations will happen and IOPS will be consumed by the defrag task. At the same time, since ZFS is copy-on-write, any write will consume more disk blocks from your pool for the same used disk space (from the guest's perspective the total disk space will be the same, but from the ZFS point of view it will be bigger). Maybe for a ZFS mirror it will be OK, but for a raidzX it will be worse. The ZFS snapshots will also use more space on disk, and the vzdump backup will also grow.



Yes, my English is bad. In any case, I do not want to be confrontational. We are in a free world and anybody can say what they think (you included). I am truly sorry if I have been misunderstood in this post; it was not my intention.

Good luck

Not true about the size of the vzdump. I have kept very detailed logs of the size of and time used by the backups, and the space not only stays the same in most cases, but in some cases is smaller. Changing compression also helped; I have gone with None, since I am not concerned with space.
[screenshot: upload_2019-5-17_10-25-28.png]

About the space used on the RAID / drives: as long as you still have room, that is what the space is for, to be used. If you need more hard drive space, that is a cheap solution; users' time is not cheap.

I appreciate what you think you are doing, but in reality you are not adding any benefit to the conversation. What I am saying is what I did (me, myself) and what has worked, proven by methodical testing and documentation. This is PART of my very detailed logs to validate what I said.
[screenshot: upload_2019-5-17_10-18-28.png]

Again, I understand that sometimes translation and tone do not come through in the written word, but trying to debate what I said worked for me is actually confrontational, unless you can prove it does not work.

And this is my last reply. Again, this is what I did, what worked for me, how I got the backups up to speed again, and how it is working perfectly fine for me. This may not work for you; I am not telling anyone to do it, and I am not promoting this solution to anyone. Just that this worked for ME.
 
Manny, are you still using USB drives for your backups? I believe this is a very bad idea and may lead to file corruption issues. There are some threads on here directly related to this issue.
 
Manny, are you still using USB drives for your backups? I believe this is a very bad idea and may lead to file corruption issues. There are some threads on here directly related to this issue.

Yes... they work perfectly fine. I have used the backups many times with not one problem. Of course this is not the only backup I have, since I also use a NAS and local backups.
 
