Okay, now I am totally confused. I plugged a 160GB USB HD into the openfiler, created an iSCSI volume on it, and exported it. On the proxmox 1.7 HN, I followed the exact same steps as in the "proxmox storage model" howto. I then created a 32GB virtio drive on the LVM/iSCSI VG and ran:
vzdump --dumpdir /mnt/pve/backup -snapshot 102
INFO: starting new backup job: vzdump --dumpdir /mnt/pve/backup -snapshot 102
INFO: Starting Backup of VM 102 (qemu)
INFO: running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: Logical volume "vzsnap-proxmox-0" created
INFO: Logical volume "vzsnap-proxmox-0" created
INFO: resume vm
INFO: vm is online again after 2 seconds
INFO: creating archive '/mnt/pve/backup/vzdump-qemu-102-2011_02_08-11_09_22.tar'
INFO: adding '/mnt/pve/backup/vzdump-qemu-102-2011_02_08-11_09_22.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/mnt/vzsnap0/images/102/vm-102-disk-2.qcow2' to archive ('vm-disk-virtio0.qcow2')
INFO: adding '/dev/kvm-storage2/vzsnap-proxmox-0' to archive ('vm-disk-virtio1.raw')
INFO: Total bytes written: 48875728896 (13.15 MiB/s)
INFO: archive file size: 45.52GB
INFO: Logical volume "vzsnap-proxmox-0" successfully removed
INFO: Logical volume "vzsnap-proxmox-0" successfully removed
INFO: Finished Backup of VM 102 (00:59:10)
INFO: Backup job finished successfuly
Here is what resulted (as before, save was done to an NFS share on the same openfiler):
-rw-rw-rw-+ 1 root 96 46G Feb 8 12:08 vzdump-qemu-102-2011_02_08-11_09_22.tar
Yes, 46GB. Doing a 'tar tvf' on the tarball yields:
-rw-r--r-- root/root 34359738368 2011-02-08 11:09 vm-disk-virtio1.raw
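Just to spell out the arithmetic: that 34359738368-byte entry is exactly the 32GB virtio disk, stored in full with no sparseness at all:

```shell
# 34359738368 bytes divided by 1024^3 bytes-per-GiB
echo $(( 34359738368 / (1024 * 1024 * 1024) ))   # → 32
```

So the whole LV went into the tarball, which together with the ~13GB qcow2 disk accounts for the 46GB archive.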
Looking at the config, the file in question is here:
virtio1: kvm-storage2:vm-102-disk-1
so:
proxmox:/etc/qemu-server# ls -lh /dev/mapper/kvm--storage2-vm--102--disk--1
brw-rw---- 1 root 6 254, 3 Feb 8 11:07 /dev/mapper/kvm--storage2-vm--102--disk--1
i.e. it is a device, not an actual file. I assume this is why this storage backing mode only allows raw as opposed to qcow2 or whatever: unlike the local storage backing, which has an actual pathname (including the .raw vs .qcow2 suffix), this type is backed by a "device". We've got to be doing something different here, no?

I just had a thought. I dumped out several records from the device (/dev/vdb) that the guest sees: random data. If that's what is actually on the volume, I think the mystery is solved (for my end, anyway): vzdump doesn't know or care about filesystems, it just does a block-level "is this block zero" check, which of course will be false for almost every block on the physical device, no? Unless one goes and writes zeroes over the entire physical device on the iscsi target (the openfiler appliance)?

What I don't understand is why you didn't see this
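To illustrate the kind of check I mean (this is purely a sketch of my theory — I haven't read vzdump's source, and the filenames below are made up):

```shell
#!/bin/bash
# Illustrative block-level zero check (NOT vzdump's actual code).
blk_size=4096

is_zero_block() {
    # Read one $blk_size block at the given block index, strip NUL bytes,
    # and count what's left; zero bytes remaining means an all-zero block.
    local nonzero
    nonzero=$(dd if="$1" bs="$blk_size" skip="$2" count=1 2>/dev/null \
              | tr -d '\0' | wc -c)
    [ "$nonzero" -eq 0 ]
}

# A freshly created (sparse) file reads back as all zeros...
truncate -s 1M demo.img
is_zero_block demo.img 0 && echo "block 0: zero, skippable"

# ...but once a block has held data, it stays non-zero even after the
# filesystem "deletes" whatever file used it — just like the leftover
# random data I see on /dev/vdb.
printf 'leftover data' | dd of=demo.img bs="$blk_size" seek=1 \
    conv=notrunc 2>/dev/null
is_zero_block demo.img 1 || echo "block 1: non-zero, must be archived"
```

If that theory is right, the only way to shrink backups of an existing LV would be to zero the unused blocks from inside the guest (e.g. dd from /dev/zero into a scratch file until the filesystem is full, sync, then delete it) so that a check like this can skip them again.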