Proxmox backup fails with ERROR: vma_queue_write: write error - Broken pipe

nio707

Hello All,

I am having a problem with the scheduled evening Proxmox backup to a 4 TB USB drive.

Here is the error:

Code:
INFO: 7% (70.0 GiB of 1000.0 GiB) in 26m 33s, read: 45.7 MiB/s, write: 44.5 MiB/s
INFO: 8% (80.0 GiB of 1000.0 GiB) in 30m 28s, read: 43.4 MiB/s, write: 43.2 MiB/s
INFO: 9% (90.1 GiB of 1000.0 GiB) in 35m 48s, read: 32.1 MiB/s, write: 31.1 MiB/s
INFO: 10% (100.0 GiB of 1000.0 GiB) in 39m 34s, read: 45.1 MiB/s, write: 44.7 MiB/s
INFO: 11% (110.0 GiB of 1000.0 GiB) in 43m 48s, read: 40.4 MiB/s, write: 39.2 MiB/s
INFO: 12% (120.0 GiB of 1000.0 GiB) in 47m 41s, read: 44.0 MiB/s, write: 44.0 MiB/s
zstd: error 25 : Write error : No space left on device (cannot write compressed block)
INFO: 12% (125.7 GiB of 1000.0 GiB) in 49m 54s, read: 43.4 MiB/s, write: 43.3 MiB/s
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 100 failed - vma_queue_write: write error - Broken pipe
INFO: Failed at 2024-02-19 19:20:02
INFO: Backup job finished with errors

TASK ERROR: job errors


Here is the output of df -hl:

Code:
root@pve:~# df -hl
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G  157M  1.4G  10% /run
/dev/mapper/pve-root   96G  2.5G   94G   3% /
tmpfs                 7.8G   43M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2             511M  332K  511M   1% /boot/efi
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0


Here is the vzdump configuration:

Code:
root@pve:/mnt/usbdrive# cat /etc/vzdump.conf
# vzdump default settings

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N
 
Hi,

zstd: error 25 : Write error : No space left on device (cannot write compressed block)
Your external drive simply seems to be full.

df -hl only shows mounted drives, and from its perspective it doesn't see the zvols as mounted, since they don't have a mount point.
You can get the current disk usage with zfs list while the drive is attached.
 
Hi,


Your external drive simply seems to be full.

df -hl only shows mounted drives, and from its perspective it doesn't see the zvols as mounted, since they don't have a mount point.
You can get the current disk usage with zfs list while the drive is attached.
I don't have a ZFS file system, rather XFS.
 
Hello All,

I am having a problem with the scheduled evening Proxmox backup to a 4 TB USB drive.

Here is the error

Code:
zstd: error 25 : Write error : No space left on device (cannot write compressed block)


Here is the output of df -hl

Code:
root@pve:~# df -hl
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G     0  7.7G   0% /dev
tmpfs                 1.6G  157M  1.4G  10% /run
/dev/mapper/pve-root   96G  2.5G   94G   3% /
tmpfs                 7.8G   43M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2             511M  332K  511M   1% /boot/efi
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0

Which one is the backup drive?
 
Which one is the backup drive?
You are right. I checked my fstab file:

Code:
root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / xfs defaults 0 1
UUID=5798-5AD7 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sdb1 /mnt/usbdrive ext4 defaults 0 0
#/dev/disk/by-uuid/8f8be204-21e8-4673-8e8f-f8874125fa95 /mnt/usbdrive auto 0 0
#LABEL=backup       /mnt/backup     ext4    noauto,x-systemd.automount      0 0

I was looking at lsblk output
Code:
root@pve:~# lsblk
NAME                                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                         8:0    0  1.8T  0 disk
├─sda1                                      8:1    0 1007K  0 part
├─sda2                                      8:2    0  512M  0 part /boot/efi
└─sda3                                      8:3    0  1.8T  0 part
  ├─pve-swap                              253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                              253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta                        253:2    0 15.8G  0 lvm 
  │ └─pve-data-tpool                      253:4    0  1.7T  0 lvm 
  │   ├─pve-data                          253:5    0  1.7T  0 lvm 
  │   ├─pve-vm--100--disk--0              253:6    0 1000G  0 lvm 
  │   ├─pve-vm--103--disk--0              253:7    0   67G  0 lvm 
  │   └─pve-vm--100--state--Afterrecovery 253:8    0 16.5G  0 lvm 
  └─pve-data_tdata                        253:3    0  1.7T  0 lvm 
    └─pve-data-tpool                      253:4    0  1.7T  0 lvm 
      ├─pve-data                          253:5    0  1.7T  0 lvm 
      ├─pve-vm--100--disk--0              253:6    0 1000G  0 lvm 
      ├─pve-vm--103--disk--0              253:7    0   67G  0 lvm 
      └─pve-vm--100--state--Afterrecovery 253:8    0 16.5G  0 lvm 
sdb                                         8:16   0  3.7T  0 disk
└─sdb1                                      8:17   0  3.7T  0 part

/dev/sdb1 does not seem to be mounted. But I can create and delete files/folders on /mnt/usbdrive, and I can even see and read the logs there.

Is this the reason why my PVE VM's disk usage grew so quickly in a short period of time due to the failed backups? I was planning to allocate another 500 GB on top of the existing 1000 GB, but I postponed expanding because I want to take a backup of the VM first.

Here is the output of lvdisplay

Code:
root@pve:/mnt/usbdrive# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                kMvAu4-E4Db-OUw7-eSjJ-EpVz-4AkV-khJg09
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-06-23 12:02:23 +0530
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                imINDq-kqBs-4LHm-kLGL-eWpO-nlyQ-EQ8JZq
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-06-23 12:02:24 +0530
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                dd1Vfc-EUAx-g0s9-rjRJ-kRND-g3Rw-y6oALk
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-06-23 12:02:24 +0530
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 4
  LV Size                <1.67 TiB
  Allocated pool data    59.29%
  Allocated metadata     3.28%
  Current LE             437759
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                Qu9lwk-3dor-xVyI-M80d-bi7v-KF8b-WD4UNW
  LV Write Access        read/write
  LV Creation host, time pve, 2021-06-23 13:25:00 +0530
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                1000.00 GiB
  Mapped size            98.08%
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                pve
  LV UUID                zWAsr8-pMfV-rI4L-vEWd-uhys-vhvS-SGHmGQ
  LV Write Access        read/write
  LV Creation host, time pve, 2021-06-23 14:24:01 +0530
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                67.00 GiB
  Mapped size            46.28%
  Current LE             17152
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-state-Afterrecovery
  LV Name                vm-100-state-Afterrecovery
  VG Name                pve
  LV UUID                fEVI48-spSe-8x8o-uVzO-XgAg-FjWB-f1dDqJ
  LV Write Access        read/write
  LV Creation host, time pve, 2021-06-25 10:00:08 +0530
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                <16.49 GiB
  Mapped size            12.42%
  Current LE             4221
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

A couple of questions:

1. The size of /dev/pve/root is 96 GB (~100 GB), and I suppose the compression happens there. Does this size matter if the VM is huge (>1 TB)?

2. I intend to set up PBS for backups, since the USB HDD backup is cumbersomely slow and the number of backups is limited. But is it possible to use PBS for other purposes, like other kinds of backups?
 
/dev/sdb1 does not seem to be mounted. But I can create and delete files/folders on /mnt/usbdrive, and I can even see and read the logs there.

I think you are simply writing into the regular /mnt/usbdrive directory on the underlying filesystem, which is mounted at /.

EDIT: You can prevent this in the future if you do chattr +i /mnt/usbdrive while it is unmounted.
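
A quick way to double-check that before the next run (nothing Proxmox-specific, just standard tools; /mnt/usbdrive is the mountpoint from your fstab):

Code:
findmnt /mnt/usbdrive      # prints the mount entry only if something is really mounted there
df -h /mnt/usbdrive        # if this shows /dev/mapper/pve-root instead of /dev/sdb1, writes are landing on /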

Is this the reason why my PVE VM's disk usage grew so quickly in a short period of time due to the failed backups? I was planning to allocate another 500 GB on top of the existing 1000 GB, but I postponed expanding because I want to take a backup of the VM first.

I don't think I understood you here. I might not have the full context. The USB drive is not LVM, is it?

A couple of questions:

1. The size of /dev/pve/root is 96 GB (~100 GB), and I suppose the compression happens there. Does this size matter if the VM is huge (>1 TB)?

I don't really use vzdump all that much, so someone else might chip in, but if it uses /tmp it would indeed be using your / space.
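
If that turns out to be the problem, you could point vzdump's temporary files at the USB drive once it is reliably mounted. Just a sketch using the tmpdir key from the vzdump.conf you posted, not something I have tested here:

Code:
# /etc/vzdump.conf -- only sensible once /mnt/usbdrive is reliably mounted,
# and the directory has to exist; keeps vzdump's temp files off the root filesystem
tmpdir: /mnt/usbdrive/tmp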

2. I intend to set up PBS for backups, since the USB HDD backup is cumbersomely slow and the number of backups is limited. But is it possible to use PBS for other purposes, like other kinds of backups?

I don't use PBS either, because it did not play as nicely with ZFS as I would have imagined. But since PBS is just another Debian install, you can always create shares, etc. there. If you are asking whether it has a nice GUI, I would simply explore it, at least in a VM, to see.
 
I think you are simply writing into the regular /mnt/usbdrive directory on the underlying filesystem, which is mounted at /.

EDIT: You can prevent this in the future if you do chattr +i /mnt/usbdrive while it is unmounted.
Can you explain the reason for making it immutable?

1. Should I umount /mnt/usbdrive and then mount -a to mount the USB drive? (I am managing it remotely.)
2. How do I find the phantom vzdump files to get back my lost space?
I don't use PBS either, because it did not play as nicely with ZFS as I would have imagined. But since PBS is just another Debian install, you can always create shares, etc. there. If you are asking whether it has a nice GUI, I would simply explore it, at least in a VM, to see.
Yes, that will do. I actually need a general backup of a few desktops' important files/folders, connected to NFS using an rsync script.
 
Can you explain the reason for making it immutable?

Well, I suppose you want to prevent a situation (in the future) where something starts writing into the directory when nothing is mounted. If the directory is immutable, those writes will fail. They will fail all the time UNLESS something is mounted over it.

1. Should I umount /mnt/usbdrive and then mount -a to mount the USB drive? (I am managing it remotely.)

You want to set it immutable while it is NOT mounted, so if you have to umount first, that's what you have to do.
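
Something like this, while no backup job is running (just a sketch with your mountpoint):

Code:
umount /mnt/usbdrive       # nothing may be mounted over the directory for this
chattr +i /mnt/usbdrive    # mark the bare mountpoint directory immutable
lsattr -d /mnt/usbdrive    # verify: the 'i' flag should be listed
mount /mnt/usbdrive        # remount from fstab; if the drive is ever missing, writes to the bare directory now fail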

Then mount -a mounts whatever is in your fstab. Your drive is there, but as /dev/sdb1, which might change across reboots; it's also kind of dangerous for your use case to mount it that way. You would be better off mounting by /dev/disk/by-uuid/..., and you can find the UUID with lsblk -f.
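
For example (the UUID below is only a placeholder -- use whatever lsblk -f actually reports for sdb1; nofail is my suggestion so a missing USB drive does not block booting):

Code:
lsblk -f /dev/sdb1                         # note the UUID of the ext4 partition
# /etc/fstab -- replaces the old /dev/sdb1 line:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/usbdrive  ext4  defaults,nofail  0  0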

2. How do I find the phantom vzdump files to get back my lost space?

Just guessing here: du -a / 2>/dev/null | sort -n -r | grep vz
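
Or, given the mount mix-up above, my guess is the lost space is sitting in /mnt/usbdrive on the root filesystem itself, so you could also check it directly:

Code:
umount /mnt/usbdrive      # the directory now shows what is on the root filesystem
du -sh /mnt/usbdrive      # space consumed on / by the "phantom" backups
ls -lh /mnt/usbdrive      # see what the failed backup runs left behind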

Yes, that will do. I actually need a general backup of a few desktops' important files/folders, connected to NFS using an rsync script.

Whatever you do, do not mount disks by /dev/sd... then, especially something removable.
 
Thanks, well explained. I was having a problem with the UUID, the Proxmox server was not booting, and I lost remote access to the server too.

Will update with the result.
 
Thanks, well explained. I was having a problem with the UUID, the Proxmox server was not booting, and I lost remote access to the server too.

Will update with the result.

You can always live-boot Debian and fix your /etc/fstab. I actually prefer to use partlabels; you can set them for regular partitions. The volumes we are talking about here are LVM, so they would be under /dev/mapper/..., which is fine to mount since it already goes by their names.
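
For example, if the USB drive has a GPT partition table ("backup" is just an example label, and the fstab line is only a sketch):

Code:
parted /dev/sdb name 1 backup      # set the GPT partition name of partition 1
# /etc/fstab entry using the partition label instead of /dev/sdb1:
PARTLABEL=backup  /mnt/usbdrive  ext4  defaults,nofail  0  0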

BTW, I am still not sure what you meant about your VM filling up /dev/pve/vm-100-disk-0 so quickly, but you can run fstrim -v / inside the VM. I don't know what the VM was doing, though.
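
A rough way to see whether the trim actually frees space in the thin pool (this assumes the virtual disk has the discard option enabled in the VM's hardware settings, which I don't know from this thread):

Code:
# inside VM 100
fstrim -v /

# on the Proxmox host: Data% of the thin volume and pool should drop if the trim got through
lvs pve/vm-100-disk-0 pve/data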
 
