[SOLVED] [PVE 3.4] Problem with scheduled backup

Hi,

the scheduled backup ran for months without problems. For the past two weeks I have been getting the following error from the scheduled backup: "vma_queue_write: write error - Broken pipe".

Code:
vzdump 200 --quiet 1 --mode snapshot --mailto REMOVED --compress lzo --storage backup
 
  200: Apr 09 03:00:02 INFO: Starting Backup of VM 200 (qemu)
  200: Apr 09 03:00:02 INFO: status = running
  200: Apr 09 03:00:02 INFO: update VM 200: -lock backup
  200: Apr 09 03:00:02 INFO: backup mode: snapshot
  200: Apr 09 03:00:02 INFO: ionice priority: 7
  200: Apr 09 03:00:02 INFO: creating archive '/pvebackup/dump/vzdump-qemu-200-2015_04_09-03_00_02.vma.lzo'
  200: Apr 09 03:00:02 INFO: started backup task '381805e5-1f55-4e55-8d7e-d5010fee47f1'
  200: Apr 09 03:00:05 INFO: status: 0% (315490304/214748364800), sparse 0% (136912896), duration 3, 105/59 MB/s
  200: Apr 09 03:00:37 INFO: status: 1% (2204696576/214748364800), sparse 0% (301948928), duration 35, 59/53 MB/s
  200: Apr 09 03:01:21 INFO: status: 2% (4347658240/214748364800), sparse 0% (430415872), duration 79, 48/45 MB/s
  200: Apr 09 03:01:59 INFO: status: 3% (6476791808/214748364800), sparse 0% (468619264), duration 117, 56/55 MB/s
  200: Apr 09 03:02:38 INFO: status: 4% (8654553088/214748364800), sparse 0% (512241664), duration 156, 55/54 MB/s
...
(89300418560), duration 2090, 57/56 MB/s
  200: Apr 09 03:35:29 INFO: status: 76% (163311321088/214748364800), sparse 41% (89339338752), duration 2127, 59/58 MB/s
  200: Apr 09 03:35:59 INFO: status: 77% (165362663424/214748364800), sparse 41% (89390276608), duration 2157, 68/66 MB/s
  200: Apr 09 03:36:35 INFO: status: 78% (167548157952/214748364800), sparse 41% (89483276288), duration 2193, 60/58 MB/s
  200: Apr 09 03:37:04 INFO: status: 78% (167757217792/214748364800), sparse 41% (89485107200), duration 2222, 7/7 MB/s
  200: Apr 09 03:37:04 ERROR: vma_queue_write: write error - Broken pipe
  200: Apr 09 03:37:04 INFO: aborting backup job
  200: Apr 09 03:37:36 ERROR: Backup of VM 200 failed - vma_queue_write: write error - Broken pipe

I ran the same backup manually later and it completed fine:

Code:
INFO: starting new backup job: vzdump 200 --remove 0 --mode snapshot --compress lzo --storage backup --node srv3
INFO: Starting Backup of VM 200 (qemu)
INFO: status = running
INFO: update VM 200: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/pvebackup/dump/vzdump-qemu-200-2015_04_09-09_08_30.vma.lzo'
INFO: started backup task 'adc0da3d-ef9c-4512-a0af-84ae3a8e3d19'
INFO: status: 0% (373096448/214748364800), sparse 0% (174628864), duration 3, 124/66 MB/s
INFO: status: 1% (2170224640/214748364800), sparse 0% (301772800), duration 32, 61/57 MB/s
INFO: status: 2% (4317315072/214748364800), sparse 0% (430530560), duration 72, 53/50 MB/s
...
INFO: status: 97% (208403693568/214748364800), sparse 56% (121007128576), duration 2710, 110/0 MB/s
INFO: status: 98% (210540494848/214748364800), sparse 57% (123143794688), duration 2730, 106/0 MB/s
INFO: status: 99% (212699250688/214748364800), sparse 58% (125302415360), duration 2751, 102/0 MB/s
INFO: status: 100% (214748364800/214748364800), sparse 59% (127351402496), duration 2770, 107/0 MB/s
INFO: transferred 214748 MB in 2770 seconds (77 MB/s)
INFO: archive file size: 61.03GB
INFO: Finished Backup of VM 200 (00:46:12)
INFO: Backup job finished successfully
TASK OK

Some more information (PVE Enterprise 3.4):
Code:
# pveversion -V
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Code:
# cat /etc/fstab
proc /proc proc defaults 0 0
...
/dev/vg0/root  /  ext3  defaults 0 0
/dev/vg0/swap  swap  swap  defaults 0 0
/dev/vg1/vz  /var/lib/vz  ext3  defaults 0 0
/dev/vg1/backup /pvebackup ext3 defaults 0 0

Code:
# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  20
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1,69 TiB
  PE Size               4,00 MiB
  Total PE              442338
  Alloc PE / Size       256000 / 1000,00 GiB
  Free  PE / Size       186338 / 727,88 GiB
  VG UUID               Sd2mdO-WduR-Ooaf-CBXS-yzjT-55PA-28QBtp
 
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               132,87 GiB
  PE Size               4,00 MiB
  Total PE              34015
  Alloc PE / Size       33792 / 132,00 GiB
  Free  PE / Size       223 / 892,00 MiB
  VG UUID               0OkfY2-1wPb-tl7h-YKW7-i96w-vrZo-xfvgDZ

Code:
# lvdisplay
...
  --- Logical volume ---
  LV Path                /dev/vg1/backup
  LV Name                backup
  VG Name                vg1
  LV UUID                5QS8c1-Jqnd-T2ip-P9de-O4dH-EI6p-nqYM5H
  LV Write Access        read/write
  LV Creation host, time REMOVED, 2014-07-05 15:04:35 +0200
  LV Status              available
  # open                 1
  LV Size                300,00 GiB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

...

Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,rootdir
        maxfiles 0
 
dir: backup
        path /pvebackup
        content backup
        maxfiles 4
 
lvm: images
        vgname vg1
        content images

Any ideas?
 
Hm... I am just being dense.

I guess the old backup is only deleted on success? (It would be nice if someone could confirm.)

If that is the case, then my backup simply fails because there is not enough space.

The backup will be ~60 GB, while my df -h shows:
[...]
/dev/mapper/vg1-backup 296G 242G 39G 87% /pvebackup

Not enough space. Shouldn't there be a log entry for disk full or something like that?
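For reference, the comparison can be scripted instead of eyeballing df. A rough sketch (the `space_ok` helper name and the 61G figure from the manual run are mine, not a PVE tool):

```shell
#!/bin/sh
# Rough sketch: does the filesystem holding a directory have at least N GiB free?
space_ok() {
    dir=$1
    need_gb=$2
    # df -Pk: POSIX output format, sizes in 1K blocks; column 4 is "Available"
    free_kb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
    [ $(( ${free_kb:-0} / 1024 / 1024 )) -ge "$need_gb" ]
}

# With the numbers above: 39G free vs. a ~61G archive, so this reports failure.
space_ok /pvebackup 61 || echo "not enough space on /pvebackup"
```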
 
Automatic backups are only deleted on success.
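Given that, a pre-flight space check can abort the job before it wastes half an hour. A sketch of a vzdump hook script (enabled with `vzdump ... --script <path>`): vzdump passes the phase name as the first argument, and the example hook shipped with PVE reads the dump directory from the `DUMPDIR` environment variable. The 70G threshold and the `check_space` function name are assumptions sized to this ~61G archive:

```shell
#!/bin/sh
# Sketch of a vzdump hook: fail the job at the "job-start" phase if the
# dump directory is low on space. Threshold is an assumption (override
# with MIN_FREE_GB); DUMPDIR follows the vzdump hook interface.
check_space() {
    phase=$1
    min_free_gb=${MIN_FREE_GB:-70}
    # Only gate the start of the whole job; let all other phases pass.
    [ "$phase" = "job-start" ] || return 0
    free_kb=$(df -Pk "${DUMPDIR:-/pvebackup}" 2>/dev/null | awk 'NR==2 {print $4}')
    [ $(( ${free_kb:-0} / 1024 / 1024 )) -ge "$min_free_gb" ]
}

# As a standalone hook script, this would end with: check_space "$1"
```

A nonzero exit from the hook makes vzdump abort, so the old archive is never put at risk by a backup that cannot fit.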
 
