ZFS and Suspend does NOT work - I understood it did?

it seems you are confusing quite a few topics here... what exactly do you want to do?
 
Live snapshots (machine disk state and memory state) can be done on ZFS as well as a backup in snapshot mode on a KVM VM.
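For illustration (the VM ID and storage name below are placeholders), a live snapshot including RAM state can be taken from the CLI with:

# qm snapshot 102 beforechange --vmstate 1

and a snapshot-mode backup of the same VM with:

# vzdump 102 --mode snapshot --storage mnt_backup_daily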
 
I'm using the UI-defined backup and found the KVM guests are unresponsive during a backup run... I tested this a few days ago and thought it worked, but on my limited-IOPS test setup things were a little dicey...

I've installed PMX4 on a node that was 3 before (new install, not upgrade) and the guests (with their images in a ZFS pool with plenty of IOPS) definitely become unresponsive during the backup...

I'm not compressing and the backup is (presumably) a sequential write to a dedicated drive that has no other traffic.

Am I missing something?
 
This is an excerpt from the log...

INFO: starting new backup job: vzdump --quiet 1 --mailnotification always --compress 0 --all 1 --mode snapshot --storage mnt_backup_daily
INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: update VM 102: -lock backup
INFO: VM Name: XXXXXXXXXXX
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/backup/daily/dump/vzdump-qemu-102-2016_09_22-12_35_02.vma'
INFO: started backup task 'dd7b025d-9dc3-4ec1-b2b3-d1088df9204f'

and the disk is definitely on a ZFS volume...

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pmx01zfs                82.5G  1.24T    96K  /pmx01zfs
pmx01zfs/vm-102-disk-1  51.6G  1.27T  21.5G  -

And during the backup... no ZFS snapshot:

# zfs list -t snapshot
no datasets available

Do I need to consider writing something to use zfs send or similar to get a live / snapshot backup?
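(The sort of thing I have in mind, purely as a sketch using the pool and path names from above, with an illustrative output filename, would be:)

# zfs snapshot pmx01zfs/vm-102-disk-1@backup
# zfs send pmx01zfs/vm-102-disk-1@backup > /mnt/backup/daily/vm-102-disk-1.zfs
# zfs destroy pmx01zfs/vm-102-disk-1@backup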
 
Ah - OK, this is beginning to make sense now... I ticked the "Qemu Agent" box, rebooted (power cycled) 2 VPSs and started the backup again. I can now console into the VPS, but the mouse doesn't track and they still seem hung (but I do get a console).

Am I again missing something? Does the Guest OS now matter (it's 2008r2 in my first test)?
 
OK, so it seems that whilst in theory this works, in practice it doesn't. The Guest OS needs non-trivial modification (qemu-ga), so there is no usable migration path to v4... is there any way to get a ZFS snapshot backup working? (I don't mind having to script it myself, but I'm sure I'm not the only one stuck with what is now an upgraded Proxmox with a reduced feature set ;) )
 
I don't get the question; you are mixing up several topics.

Do we talk about snapshots, backup or the qemu-guest-agent?

Qemu Backup does not use ZFS snapshots.
 
Well, not really; they're all related... in the previous version (as I used it) vzdump used LVM snapshots to get a consistent backup without locking the guest OS. No special tool was needed on the guest OS.

It seems there is no equivalent for this in Proxmox 4 on ZFS?

vzdump doesn't use ZFS snapshots, but DOES require a special guest OS tool. I read and understand the logic, and whilst it's sound going forward, it doesn't help where we're at now.

I'm not sure I can be clearer... there is no way to use
* vzdump to get
* a consistent copy of the guest
* without locking
 
Well, not really; they're all related... in the previous version (as I used it) vzdump used LVM snapshots to get a consistent backup without locking the guest OS. No special tool was needed on the guest OS.

yes. this was replaced a long time ago, before 4.x, with qemu backup, eliminating the need for ANY special storage. the guest agent is NOT needed for this, but it's an additional improvement.

It seems there is no equivalent for this in Proxmox 4 on ZFS?

vzdump doesn't use ZFS snapshots, but DOES require a special guest OS tool. I read and understand the logic, and whilst it's sound going forward, it doesn't help where we're at now.

I'm not sure I can be clearer... there is no way to use
* vzdump to get
* a consistent copy of the guest
* without locking

our qemu backup does NOT need a special guest OS tool. that's why it is so cool. totally flexible.

the LVM snapshot ONLY worked with local LVM, i.e. never with other storage types (like NFS).
 
Hmm, well I'd like to believe you, but my guests are locking... is there some debug information I can provide? It's a new install (today) following the howto for installing on top of Debian...

# pveversion -v
proxmox-ve: 4.2-64 (running kernel: 4.4.16-1-pve)
pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
pve-kernel-4.4.16-1-pve: 4.4.16-64
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-44
qemu-server: 4.0-86
pve-firmware: 1.1-9
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-57
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-2
pve-container: 1.0-73
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
cgmanager: not correctly installed
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
 
pls describe your hardware in detail and your zfs config.

where is your backup target?
 
# zpool status
  pool: pmx01zfs
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        pmx01zfs      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sdc1      ONLINE       0     0     0
            sde1      ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            sdd1      ONLINE       0     0     0
            sdf1      ONLINE       0     0     0


# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pmx01zfs                87.2G  1.23T    96K  /pmx01zfs
pmx01zfs/vm-102-disk-1  51.6G  1.26T  21.5G  -
pmx01zfs/vm-114-disk-1  25.8G  1.23T  25.1G  -
pmx01zfs/vm-114-disk-2  5.16G  1.24T  84.5M  -
pmx01zfs/vm-117-disk-1  4.64G  1.23T  4.64G  -


The backup storage is an ext4 partition on a locally attached disk. I'm wondering now whether the target should also be ZFS?

Need more details?
 
CPU?
RAM?
Hard disks?

and check the zfs disk performance via "pveperf /path"
 
This same hardware was perfectly good with 10x the load only a few hours ago. It's not CPU / RAM / hard disks when the VPS guest locks up during a backup and that is the ONLY activity happening...

This is the source:
# pveperf /pmx01zfs
CPU BOGOMIPS: 34136.04
REGEX/SECOND: 893209
HD SIZE: 1261.31 GB (pmx01zfs)
FSYNCS/SECOND: 112.03
DNS EXT: 10.37 ms

This is the target (iSCSI); this was taken whilst it was being read from:
# pveperf /mnt/backup/daily/
CPU BOGOMIPS: 34136.04
REGEX/SECOND: 905674
HD SIZE: 1833.66 GB (/dev/sdj1)
BUFFERED READS: 62.94 MB/sec
AVERAGE SEEK TIME: 35.78 ms
FSYNCS/SECOND: 179.16
DNS EXT: 10.16 ms
 

thanks. please answer all questions regarding your hardware.
 
The host has dual quad-core procs > 2GHz and you can see the drive info above... there is 48GB of RAM.

Perhaps it would be more helpful to step through this. With the links you have provided so far, I have to repeat that the evidence (and my experience) is against what you say. I hope you're right, but if you are, there is some information missing.

Disregard the UI. Let's agree on a simple CLI case that we can use to back up a single guest, and let me know what test is appropriate to check whether it locks.

Should we start by agreeing that vzdump is the command and consider the parameters?
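For example, something like this (VM 102 and the storage name are taken from my log above):

# vzdump 102 --mode snapshot --compress 0 --storage mnt_backup_daily

While that runs I can keep a ping going against the guest and try the console to see whether it locks.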
 
If anyone else hits this issue, below is my current workaround (scripted at the end of this post); if it transpires that I am right, I'll document it and post a link. It also shows (as proof of concept) that the issue is not a resource constraint, and shows clearly what I'm trying to achieve...

For each VM in "qm list":
  1. zfs snapshot pmx01zfs/vm-XXX-disk-X@backup
  2. zfs clone pmx01zfs/vm-XXX-disk-X@backup pmx01zfs/backup
  3. dd if=/dev/zvol/pmx01zfs/backup of=<whereever>
  4. zfs destroy pmx01zfs/backup
  5. zfs destroy pmx01zfs/vm-XXX-disk-X@backup
This can be restored to this host (or any other, or loopback-mounted), since you have a valid, coherent block device, and during the run the guest OS does not lock; this is more or less the same functionality there used to be on top of LVM.

Obviously a copy of each VM's conf file will also be necessary...
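Roughly, as a script (a quick sketch only: the dataset and path names are from my setup above, it assumes one disk per VM, and there is no error handling):

#!/bin/bash
# Quick-and-dirty ZFS snapshot backup of one running VM's disk.
VMID=102
SRC=pmx01zfs/vm-${VMID}-disk-1
DST=/mnt/backup/daily/vm-${VMID}-disk-1.raw

zfs snapshot ${SRC}@backup                       # 1. point-in-time snapshot; guest keeps running
zfs clone ${SRC}@backup pmx01zfs/backup          # 2. expose the snapshot as a block device
udevadm settle                                   #    wait for the /dev/zvol node to appear
dd if=/dev/zvol/pmx01zfs/backup of=${DST} bs=1M  # 3. raw sequential copy to the backup disk
zfs destroy pmx01zfs/backup                      # 4. drop the clone first...
zfs destroy ${SRC}@backup                        # 5. ...then the snapshot it depends on
cp /etc/pve/qemu-server/${VMID}.conf /mnt/backup/daily/   # keep the VM config alongside the image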
 
