vzdump lxc zfs temp drive

flexyz
Hi

I am using vzdump to back up one of my LXC containers (300 GB), but it seems that the temp directory is located on "rpool/ROOT/pve-1", and that disk runs out of space before the backup completes. The destination is a NAS share, not a local disk.

Can I change the temp directory to another location so my backup jobs don't fail?

Thanks
 
Yes. You can set the tmpdir option, either on the command line, or in /etc/vzdump.conf

See 'man vzdump' for details.
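For example, a single line in /etc/vzdump.conf is enough (the path below is just a placeholder - point it at any local directory with enough free space):

# /etc/vzdump.conf
tmpdir: /path/with/enough/space

The same option works per job on the command line: vzdump <vmid> --tmpdir /path/with/enough/space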
 
Thanks

I changed "tmpdir: /saspool/shares/temp" and it does create a small config file - but no big temp files

When doing a backup, some data is stored on /rpool/?? and keeps increasing while the backup runs. And it fails when out of space :(

"rpool/ROOT/pve-1 28.2G 144G 28.2G /"


Any clues?

Thanks
Felix
 
It just needs space to save the snapshot. The only way to avoid that is to disable snapshot backup.
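For example (a sketch - the VMID, storage, and compression are copied from the backup log posted later in this thread; suspend mode copies the container into the tmpdir instead of taking a storage snapshot, at the cost of a longer suspend):

vzdump 105 --mode suspend --compress gzip --storage BUFFALO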
 
Is this a ZFS thing? I don't understand - ZFS snapshots are instant and don't take any space?

Why do you think a snapshot takes no space? Obviously this is not really true - as soon as you write data, ZFS needs to store both the old and the new data (copy-on-write).
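You can watch this happen yourself - a minimal sketch with a throwaway dataset (the names are made up for the demo):

zfs create rpool/test
dd if=/dev/urandom of=/rpool/test/file bs=1M count=100
zfs snapshot rpool/test@before
dd if=/dev/urandom of=/rpool/test/file bs=1M count=100   # overwrite the same file
zfs list -rt snapshot rpool/test                         # USED on @before grows towards 100M
zfs destroy -r rpool/test                                # clean up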
 
Yes, of course, but to my knowledge snapshot deltas are not stored on the rpool (boot) drive.

Felix

maybe to prevent confusion you could post the following information:
  • pveversion -v
  • storage configuration ("/etc/pve/storage.cfg")
  • container configuration ("pct config ID")
  • vzdump log of a backup run
 
Yes sure and thanks :)

proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-88
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80

*****

dir: local
    path /var/lib/vz
    content backup,iso,vztmpl

zfspool: local-zfs
    pool rpool/data
    sparse 1
    content rootdir,images

nfs: QNAP
    path /mnt/pve/QNAP
    export /mother
    server 192.168.183.108
    options vers=3
    maxfiles 1
    content iso,backup,images
    nodes mother

zfspool: sas_vms
    pool saspool/vms
    sparse 0
    content images

zfspool: sas_lxc
    pool saspool/lxc
    sparse 0
    content rootdir

zfspool: ssd_vms
    pool ssdpool/vms
    sparse 0
    content images

zfspool: ssd_lxc
    pool ssdpool/lxc
    sparse 0
    content rootdir

dir: BUFFALO
    path /mnt/pve/BUFFALO
    shared 0
    content backup,iso
    maxfiles 1

******

arch: amd64
cores: 12
hostname: svend
memory: 32768
mp0: ssd_lxc:subvol-105-disk-2,mp=/mnt/repos/svn/game3,backup=1,size=40G
mp1: ssd_lxc:subvol-105-disk-3,mp=/mnt/repos/svn/game2,backup=1,size=350G
mp2: ssd_lxc:subvol-105-disk-4,mp=/mnt/repos/svn/game2_audio_production,backup=1,size=15G
mp3: ssd_lxc:subvol-105-disk-5,mp=/mnt/repos/svn/tools,backup=1,size=10G
mp4: ssd_lxc:subvol-105-disk-6,mp=/mnt/repos/svn/game3_audio_production/,backup=1,size=6G
net0: name=eth0,bridge=vmbr183,gw=192.168.183.1,hwaddr=A2:62:86:AC:5F:00,ip=192.168.183.105/24,type=veth
ostype: ubuntu
rootfs: ssd_lxc:subvol-105-disk-1,size=40G
searchdomain: 192.168.183.113
swap: 4096

*****

INFO: starting new backup job: vzdump 105 --compress gzip --node mother --remove 0 --mode snapshot --storage BUFFALO
INFO: filesystem type on dumpdir is 'zfs' -> using /var/tmp/vzdumptmp114108 for temporary files
INFO: Starting Backup of VM 105 (lxc)
INFO: status = running
INFO: CT Name: svend
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
INFO: resume vm
INFO: vm is online again after 0 seconds
INFO: creating archive '/mnt/pve/BUFFALO/dump/vzdump-lxc-105-2017_04_10-01_48_12.tar.gz'
INFO: gzip: stdout: No space left on device
INFO: remove vzdump snapshot
ERROR: Backup of VM 105 failed - command 'set -o pipefail && tar cpf - --totals --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/var/tmp/vzdumptmp114108' ./etc/vzdump/pct.conf '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ ./mnt/repos/svn/game3 ./mnt/repos/svn/game2 ./mnt/repos/svn/game2_audio_production ./mnt/repos/svn/tools ./mnt/repos/svn/game3_audio_production/ | gzip >/mnt/pve/BUFFALO/dump/vzdump-lxc-105-2017_04_10-01_48_12.tar.dat' failed: exit code 1
INFO: Backup job finished with errors
TASK ERROR: job errors
 
your destination (which is running out of space) is a directory storage - are you sure that this directory storage is correctly mounted? otherwise the backup will end up on whichever file system /mnt/pve/BUFFALO is on, which is probably / (i.e., your rpool/ROOT/pve-1)
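a quick way to check (both are standard tools, nothing PVE-specific):

df -h /mnt/pve/BUFFALO      # shows which filesystem actually backs that path
findmnt /mnt/pve/BUFFALO    # prints nothing if no share is mounted there

if df reports rpool/ROOT/pve-1, the NAS is not mounted and the backup is filling up your root dataset.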
 
ahhh that could be it :) - what is the best way to mount and auto-mount a CIFS share on a NAS?

Thanks
Felix

how you mount it is up to you (fstab, systemd, ..) but if you configure a mount point as directory storage, you should always set the "is_mountpoint" option to 1 (see "man pvesm"). also make sure that the directory is empty before attempting to mount the CIFS share again.
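for example (a sketch - server address, share name and credentials file are placeholders for your NAS):

# /etc/fstab - auto-mount the CIFS share at boot
//<nas-ip>/<share> /mnt/pve/BUFFALO cifs credentials=/root/.nascred,_netdev 0 0

# tell PVE to treat the storage as valid only while something is mounted there
pvesm set BUFFALO --is_mountpoint 1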
 
