Local disk fills up during backup to a remote location

redroom
Hello,
when I back up a big VM (a few TB), the local disk fills up to full capacity. I'm unable to find what is actually filling it; the filesystem is ZFS.
The /var/cache folders are not filling up with anything.

proxmox-ve: 6.4-1
backup mode: snapshot
error: vma_queue_write: write error - Broken pipe (nothing else)
remote location: a davfs2 mount

I tried changing tmpdir in vzdump.conf to the remote location (do I need a restart to apply those changes?) but it doesn't help.

Is there any way to overcome this behavior? I have enough space on the remote storage.
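For reference, the change I tried is just one line in /etc/vzdump.conf (the path is from my setup, adjust to your mount point; as far as I know vzdump re-reads this file when a job starts, so no service restart should be needed):

```
# /etc/vzdump.conf -- sketch; /mnt/sbox is my davfs2 mount point
tmpdir: /mnt/sbox/tmp
```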
 
6.4 is EOL and no longer supported. Please upgrade to 7.x and then post:
- pveversion -v
- storage.cfg and other relevant config files
- full backup task log
 
6.4 is EOL and no longer supported. Please upgrade to 7.x and then post:
- pveversion -v
- storage.cfg and other relevant config files
- full backup task log
Is the upgrade from 6.4 to 7 currently stable?
We are running a production environment, and if something goes wrong it would be a disaster.
 
I upgraded one server to 7.3 and the behavior is exactly the same as on 6.4.
zfs list shows the usage of rpool/ROOT/pve-1 increasing during the backup by exactly the size of the backup archive;
then the backup starts transferring to the remote location, and once the transfer completes Proxmox frees the used ZFS space again.

usage before the backup:
rpool/ROOT/pve-1 used: 2.28G avail: 3.00T

after the backup finished (but before it was transferred to the remote location):
rpool/ROOT/pve-1 used: 62.7G avail: 2.94T

after the transfer:
rpool/ROOT/pve-1 used: 2.28G avail: 3.00T
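To track down what was actually eating the space while a backup ran, something like this helped (the dataset name matches my pool; adjust as needed):

```shell
# Which ZFS dataset is growing? Run this repeatedly while the backup runs.
zfs list -o name,used,avail -r rpool/ROOT

# Which directory on that dataset is filling up?
# -x stays on one filesystem, so remote mounts like /mnt/sbox are skipped.
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 15
```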

Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-8
pve-kernel-5.15: 7.3-3
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-6
libpve-storage-perl: 7.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.2-7
pve-firmware: 3.6-4
pve-ha-manager: 3.5.1
pve-i18n: 2.8-3
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Code:
storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

dir: storagebox-webdav
        path /mnt/sbox
        content backup
        nodes node01,node02
        prune-backups keep-last=3
        shared 0

zfspool: rpool2
        pool rpool2
        content images,rootdir
        mountpoint /rpool2
        nodes node01

dir: storagebox-mpr-webdav
        path /mnt/mpr-sbox/
        content backup
        nodes node03
        prune-backups keep-last=3
        shared 0

zfspool: rpool3
        pool rpool3
        content images,rootdir
        mountpoint /rpool3
        nodes node02

Code:
error during backup:
INFO: starting new backup job: vzdump 104 --storage storagebox-webdav --quiet 1 --node node02 --compress zstd --mode snapshot --mailnotification failure
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2023-03-18 21:00:01
INFO: status = running
INFO: VM Name: dwh
INFO: include disk 'scsi0' 'local-zfs:vm-104-disk-1' 3400G
INFO: include disk 'scsi1' 'rpool3:vm-104-disk-0' 3200G
INFO: include disk 'efidisk0' 'local-zfs:vm-104-disk-0' 1M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/sbox/dump/vzdump-qemu-104-2023_03_18-21_00_01.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '64ed16c2-0060-4e08-8a3c-2deb9d74566d'
INFO: resuming VM again
INFO:   0% (967.9 MiB of 6.4 TiB) in 3s, read: 322.6 MiB/s, write: 283.5 MiB/s
INFO:   1% (66.1 GiB of 6.4 TiB) in 5m 18s, read: 211.8 MiB/s, write: 210.9 MiB/s
INFO:   2% (132.0 GiB of 6.4 TiB) in 10m 32s, read: 215.1 MiB/s, write: 215.1 MiB/s
INFO:   3% (198.1 GiB of 6.4 TiB) in 16m 7s, read: 202.0 MiB/s, write: 202.0 MiB/s
INFO:   4% (264.1 GiB of 6.4 TiB) in 20m 50s, read: 238.5 MiB/s, write: 238.5 MiB/s
INFO:   5% (330.2 GiB of 6.4 TiB) in 25m 18s, read: 252.7 MiB/s, write: 252.7 MiB/s
INFO:   6% (396.2 GiB of 6.4 TiB) in 29m 32s, read: 266.1 MiB/s, write: 266.1 MiB/s
INFO:   7% (462.0 GiB of 6.4 TiB) in 34m, read: 251.5 MiB/s, write: 251.5 MiB/s
INFO:   8% (528.2 GiB of 6.4 TiB) in 38m 43s, read: 239.5 MiB/s, write: 239.5 MiB/s
INFO:   9% (594.0 GiB of 6.4 TiB) in 43m 24s, read: 239.8 MiB/s, write: 239.7 MiB/s
INFO:  10% (660.4 GiB of 6.4 TiB) in 45m 24s, read: 566.4 MiB/s, write: 177.3 MiB/s
INFO:  11% (726.6 GiB of 6.4 TiB) in 46m 12s, read: 1.4 GiB/s, write: 0 B/s
INFO:  12% (793.4 GiB of 6.4 TiB) in 46m 58s, read: 1.5 GiB/s, write: 0 B/s
INFO:  13% (858.7 GiB of 6.4 TiB) in 47m 40s, read: 1.6 GiB/s, write: 0 B/s
INFO:  14% (924.8 GiB of 6.4 TiB) in 48m 28s, read: 1.4 GiB/s, write: 0 B/s
INFO:  15% (991.3 GiB of 6.4 TiB) in 49m 13s, read: 1.5 GiB/s, write: 0 B/s
INFO:  16% (1.0 TiB of 6.4 TiB) in 49m 58s, read: 1.5 GiB/s, write: 0 B/s
INFO:  17% (1.1 TiB of 6.4 TiB) in 50m 45s, read: 1.4 GiB/s, write: 0 B/s
INFO:  18% (1.2 TiB of 6.4 TiB) in 51m 29s, read: 1.5 GiB/s, write: 0 B/s
INFO:  19% (1.2 TiB of 6.4 TiB) in 52m 18s, read: 1.3 GiB/s, write: 0 B/s
INFO:  20% (1.3 TiB of 6.4 TiB) in 53m 3s, read: 1.5 GiB/s, write: 0 B/s
INFO:  21% (1.4 TiB of 6.4 TiB) in 53m 52s, read: 1.3 GiB/s, write: 0 B/s
INFO:  22% (1.4 TiB of 6.4 TiB) in 54m 40s, read: 1.4 GiB/s, write: 0 B/s
INFO:  23% (1.5 TiB of 6.4 TiB) in 55m 22s, read: 1.6 GiB/s, write: 0 B/s
INFO:  24% (1.5 TiB of 6.4 TiB) in 56m 7s, read: 1.5 GiB/s, write: 0 B/s
INFO:  25% (1.6 TiB of 6.4 TiB) in 56m 54s, read: 1.4 GiB/s, write: 0 B/s
INFO:  26% (1.7 TiB of 6.4 TiB) in 57m 39s, read: 1.5 GiB/s, write: 0 B/s
INFO:  27% (1.7 TiB of 6.4 TiB) in 58m 21s, read: 1.6 GiB/s, write: 0 B/s
INFO:  28% (1.8 TiB of 6.4 TiB) in 59m 3s, read: 1.6 GiB/s, write: 0 B/s
INFO:  29% (1.9 TiB of 6.4 TiB) in 59m 45s, read: 1.6 GiB/s, write: 0 B/s
INFO:  30% (1.9 TiB of 6.4 TiB) in 1h 43s, read: 1.1 GiB/s, write: 0 B/s
INFO:  31% (2.0 TiB of 6.4 TiB) in 1h 1m 48s, read: 1.0 GiB/s, write: 0 B/s
INFO:  32% (2.1 TiB of 6.4 TiB) in 1h 2m 55s, read: 1013.7 MiB/s, write: 0 B/s
INFO:  33% (2.1 TiB of 6.4 TiB) in 1h 4m 1s, read: 1020.0 MiB/s, write: 0 B/s
INFO:  34% (2.2 TiB of 6.4 TiB) in 1h 5m 9s, read: 988.9 MiB/s, write: 0 B/s
INFO:  35% (2.3 TiB of 6.4 TiB) in 1h 6m 15s, read: 1.0 GiB/s, write: 0 B/s
INFO:  36% (2.3 TiB of 6.4 TiB) in 1h 7m 21s, read: 1021.5 MiB/s, write: 0 B/s
INFO:  37% (2.4 TiB of 6.4 TiB) in 1h 8m 28s, read: 1010.6 MiB/s, write: 0 B/s
INFO:  38% (2.4 TiB of 6.4 TiB) in 1h 9m 36s, read: 988.8 MiB/s, write: 0 B/s
INFO:  39% (2.5 TiB of 6.4 TiB) in 1h 10m 43s, read: 1010.7 MiB/s, write: 0 B/s
INFO:  40% (2.6 TiB of 6.4 TiB) in 1h 11m 52s, read: 975.0 MiB/s, write: 0 B/s
INFO:  41% (2.6 TiB of 6.4 TiB) in 1h 12m 59s, read: 1011.5 MiB/s, write: 0 B/s
INFO:  42% (2.7 TiB of 6.4 TiB) in 1h 14m 7s, read: 991.4 MiB/s, write: 0 B/s
INFO:  43% (2.8 TiB of 6.4 TiB) in 1h 15m 16s, read: 980.5 MiB/s, write: 0 B/s
INFO:  44% (2.8 TiB of 6.4 TiB) in 1h 16m 28s, read: 937.8 MiB/s, write: 0 B/s
INFO:  45% (2.9 TiB of 6.4 TiB) in 1h 17m 28s, read: 1.1 GiB/s, write: 0 B/s
INFO:  46% (3.0 TiB of 6.4 TiB) in 1h 18m 28s, read: 1.1 GiB/s, write: 0 B/s
INFO:  47% (3.0 TiB of 6.4 TiB) in 1h 19m 23s, read: 1.2 GiB/s, write: 0 B/s
INFO:  48% (3.1 TiB of 6.4 TiB) in 1h 20m 23s, read: 1.1 GiB/s, write: 0 B/s
INFO:  49% (3.2 TiB of 6.4 TiB) in 1h 23m 29s, read: 363.6 MiB/s, write: 184.7 MiB/s
INFO:  50% (3.2 TiB of 6.4 TiB) in 1h 28m 37s, read: 219.1 MiB/s, write: 218.4 MiB/s
INFO:  51% (3.3 TiB of 6.4 TiB) in 1h 33m 43s, read: 221.2 MiB/s, write: 221.2 MiB/s
INFO:  52% (3.4 TiB of 6.4 TiB) in 1h 38m 32s, read: 233.6 MiB/s, write: 227.7 MiB/s
INFO:  53% (3.4 TiB of 6.4 TiB) in 1h 43m 38s, read: 220.8 MiB/s, write: 220.6 MiB/s
INFO:  54% (3.5 TiB of 6.4 TiB) in 1h 48m 56s, read: 212.6 MiB/s, write: 212.5 MiB/s
INFO:  55% (3.5 TiB of 6.4 TiB) in 1h 54m 10s, read: 215.8 MiB/s, write: 215.7 MiB/s
INFO:  56% (3.6 TiB of 6.4 TiB) in 1h 59m 17s, read: 220.3 MiB/s, write: 218.6 MiB/s
INFO:  57% (3.7 TiB of 6.4 TiB) in 2h 4m 13s, read: 227.7 MiB/s, write: 223.6 MiB/s
INFO:  58% (3.7 TiB of 6.4 TiB) in 2h 9m 19s, read: 221.1 MiB/s, write: 221.0 MiB/s
INFO:  59% (3.8 TiB of 6.4 TiB) in 2h 14m 28s, read: 218.7 MiB/s, write: 218.6 MiB/s
INFO:  60% (3.9 TiB of 6.4 TiB) in 2h 19m 3s, read: 245.4 MiB/s, write: 245.4 MiB/s
INFO:  61% (3.9 TiB of 6.4 TiB) in 2h 24m, read: 228.0 MiB/s, write: 228.0 MiB/s
INFO:  62% (4.0 TiB of 6.4 TiB) in 2h 28m 58s, read: 226.6 MiB/s, write: 226.6 MiB/s
INFO:  63% (4.1 TiB of 6.4 TiB) in 2h 33m 54s, read: 228.4 MiB/s, write: 228.3 MiB/s
INFO:  64% (4.1 TiB of 6.4 TiB) in 2h 39m 24s, read: 204.4 MiB/s, write: 204.3 MiB/s
INFO:  65% (4.2 TiB of 6.4 TiB) in 2h 44m 22s, read: 226.8 MiB/s, write: 226.8 MiB/s
INFO:  66% (4.3 TiB of 6.4 TiB) in 2h 49m 17s, read: 229.3 MiB/s, write: 229.2 MiB/s
INFO:  67% (4.3 TiB of 6.4 TiB) in 2h 54m 47s, read: 205.1 MiB/s, write: 205.1 MiB/s
INFO:  68% (4.4 TiB of 6.4 TiB) in 3h 21s, read: 202.1 MiB/s, write: 202.1 MiB/s
INFO:  69% (4.4 TiB of 6.4 TiB) in 3h 5m 52s, read: 204.5 MiB/s, write: 204.5 MiB/s
INFO:  70% (4.5 TiB of 6.4 TiB) in 3h 11m 15s, read: 209.1 MiB/s, write: 209.0 MiB/s
INFO:  71% (4.6 TiB of 6.4 TiB) in 3h 16m 30s, read: 214.5 MiB/s, write: 214.5 MiB/s
INFO:  72% (4.6 TiB of 6.4 TiB) in 3h 22m 10s, read: 198.5 MiB/s, write: 198.4 MiB/s
INFO:  73% (4.7 TiB of 6.4 TiB) in 3h 27m 39s, read: 205.6 MiB/s, write: 204.2 MiB/s
INFO:  74% (4.8 TiB of 6.4 TiB) in 3h 32m 44s, read: 222.0 MiB/s, write: 221.9 MiB/s
INFO:  75% (4.8 TiB of 6.4 TiB) in 3h 38m 24s, read: 198.5 MiB/s, write: 197.4 MiB/s
INFO:  76% (4.9 TiB of 6.4 TiB) in 3h 44m 3s, read: 199.2 MiB/s, write: 199.2 MiB/s
INFO:  77% (5.0 TiB of 6.4 TiB) in 3h 49m 37s, read: 202.7 MiB/s, write: 202.6 MiB/s
INFO:  78% (5.0 TiB of 6.4 TiB) in 3h 55m 25s, read: 194.3 MiB/s, write: 194.2 MiB/s
INFO:  79% (5.1 TiB of 6.4 TiB) in 4h 58s, read: 202.6 MiB/s, write: 202.5 MiB/s
INFO:  80% (5.2 TiB of 6.4 TiB) in 4h 7m 49s, read: 164.4 MiB/s, write: 164.4 MiB/s
INFO:  81% (5.2 TiB of 6.4 TiB) in 4h 14m 26s, read: 170.2 MiB/s, write: 170.2 MiB/s
INFO:  82% (5.3 TiB of 6.4 TiB) in 4h 20m 55s, read: 173.7 MiB/s, write: 173.7 MiB/s
INFO:  83% (5.3 TiB of 6.4 TiB) in 4h 27m 3s, read: 184.1 MiB/s, write: 184.1 MiB/s
INFO:  84% (5.4 TiB of 6.4 TiB) in 4h 33m 34s, read: 172.5 MiB/s, write: 172.4 MiB/s
INFO:  85% (5.5 TiB of 6.4 TiB) in 4h 39m 36s, read: 187.0 MiB/s, write: 187.0 MiB/s
INFO:  86% (5.5 TiB of 6.4 TiB) in 4h 45m 46s, read: 182.8 MiB/s, write: 182.8 MiB/s
INFO:  87% (5.6 TiB of 6.4 TiB) in 4h 51m 59s, read: 181.2 MiB/s, write: 181.1 MiB/s
INFO:  88% (5.7 TiB of 6.4 TiB) in 4h 58m 56s, read: 161.9 MiB/s, write: 161.9 MiB/s
INFO:  89% (5.7 TiB of 6.4 TiB) in 5h 6m 7s, read: 156.6 MiB/s, write: 156.6 MiB/s
INFO:  90% (5.8 TiB of 6.4 TiB) in 5h 13m 53s, read: 145.2 MiB/s, write: 145.2 MiB/s
INFO:  90% (5.8 TiB of 6.4 TiB) in 5h 14m 24s, read: 130.7 MiB/s, write: 130.7 MiB/s
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 104 failed - vma_queue_write: write error - Broken pipe
INFO: Failed at 2023-03-19 02:16:39
INFO: Backup job finished with errors

TASK ERROR: job errors
 
I think it's just davfs2 caching the file and not uploading it until the file is closed?

davfs2 does extensive caching to make the file system responsive, to avoid unnecessary network traffic, to prevent data loss, and to cope with slow or unreliable connections.
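If it is the davfs2 cache, you can at least control where it lives via /etc/davfs2/davfs2.conf (values below are examples; note that davfs2 still keeps the whole file in its cache until the file is closed, so this relocates the problem rather than avoiding it):

```
# /etc/davfs2/davfs2.conf -- example values
cache_dir  /rpool2/davfs2-cache   # move the cache to a pool with enough space
cache_size 8192                   # soft limit in MiB; open files may exceed it
```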
 
I think it's just davfs2 caching the file and not uploading it until the file is closed?
Yep, that's true; I was mistaken in thinking there was no cache.
The same thing happens with CIFS.

Are there any options to use remote storage in Proxmox without a local cache?
 
NFS and CIFS definitely don't cache the full file locally ;)
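e.g. an NFS storage definition in /etc/pve/storage.cfg would look something like this (server and export are placeholders):

```
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 192.0.2.10
        export /export/backups
        content backup
        prune-backups keep-last=3
```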
 
