Question about backing up KVMs with iSCSI storage?

dswartz

Renowned Member
Dec 13, 2010
I have a KVM with a local SCSI disk and an LVM data disk with iSCSI backing on a NAS. Proxmox only allowed me to create the latter as a raw disk image. If I try using vzdump to back up the KVM, it seems to copy the entire volume, not just the part actually used by the (sparse) raw disk image. I can't prove that 100%, but I don't know what else would explain why, when I checked several hours later, the backup file on the NFS share (mounted on the host node) was 72GB while the raw file was listed as using only about 6GB. I aborted the backup. This happens consistently. I'm not sure what else to do other than some kludge like this:

shutdown the VM
remove the iSCSI backed disk
run vzdump in the proxmox shell
add the disk back

is there a better way?
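The detach/reattach steps above could be scripted. A minimal sketch, using a scratch copy of the config so it can be tried anywhere (on the real host the file would be /etc/qemu-server/102.conf, and the edit would be bracketed by 'qm stop 102' / 'qm start 102'; the disk names are the ones from this thread):

```shell
# Scratch copy of the VM config (contents taken from this thread).
cat > 102.conf <<'EOF'
bootdisk: scsi0
virtio0: local:102/vm-102-disk-2.qcow2
virtio1: kvm-storage2:vm-102-disk-1
EOF
sed -i.bak '/^virtio1:/d' 102.conf   # detach the iSCSI-backed data disk
grep -c '^virtio' 102.conf           # prints 1: only virtio0 remains
# ... run vzdump on VM 102 here ...
mv 102.conf.bak 102.conf             # reattach the disk afterwards
```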
 
In the first run, vzdump (snapshot mode) must read the whole block device to see where the data is located; in the second run it only saves the data to the tar file.
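A rough illustration of that first pass (not vzdump's actual code): even when a file holds almost no data, finding out which blocks are zero requires reading the entire apparent size.

```shell
# A 256 MiB sparse file containing only 1 MiB of real data.
dd if=/dev/zero of=disk.raw bs=1M count=0 seek=256 2>/dev/null
dd if=/dev/urandom of=disk.raw bs=1M count=1 conv=notrunc 2>/dev/null
ls -l disk.raw    # apparent size: 268435456 bytes
du -k disk.raw    # allocated: roughly 1 MiB
# dd's conv=sparse performs the same zero-block check while copying,
# and still has to read all 256 MiB to do it:
dd if=disk.raw of=copy.raw bs=1M conv=sparse 2>/dev/null
du -k copy.raw    # the copy allocates only the non-zero megabyte
```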

Is the problem with the local disk or with the LVM volume (on iSCSI)? Also, please post the output of 'pveversion -v'.
 
I'm not sure what you mean by 'the local disk or the LVM volume'. The KVM has a disk whose backing store is an LVM volume using the iSCSI target on the openfiler appliance. When vzdump tries to back that up, the file on the NFS share that Proxmox uses gets bigger and bigger. I interrupted it after it had run all night and grown to over 70GB (the actual data is about 7GB). I can't reproduce this now, since I gave up and switched to an NFS share (because this made it effectively impossible to back up the KVM). Here is the output you asked for:

pve-manager: 1.7-11 (pve-manager/1.7/5470)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.7-30
pve-kernel-2.6.32-4-pve: 2.6.32-30
qemu-server: 1.1-28
pve-firmware: 1.0-10
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-10
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.13.0-3
ksm-control-daemon: 1.0-4
 
Also post the config of your VM (see /etc/qemu-server/VMID.conf).

how big is your virtual disk in total?
 
Both virtual disks were 32GB. Here is the conf file, but remember the iSCSI-backed disk is no longer present...

ostype: l26
memory: 2048
sockets: 1
name: sphinx.TLD
bootdisk: scsi0
vlan0: virtio=16:E4:FC:66:C7:54
virtio0: local:102/vm-102-disk-2.qcow2
onboot: 1
cores: 1
 
OK, now you have totally confused this thread. You talk about two disks: a raw image and an LVM volume on iSCSI.

Now you post a VM config with one local qcow2 disk image, and the bootdisk you specify (scsi0) does not exist in this VM - that makes no sense at all, and this VM will not start.

If I am to debug this issue, you would need to set it up again - otherwise I see no way to say more about it.
 
Sigh. Let me try again. I originally had the qcow2 image, which was the root disk for this Ubuntu VM. I also had a raw image, which was backed by the LVM/iSCSI on the openfiler NAS. It worked fine. However, when I tried to back up the VM with vzdump, the qcow2 disk was backed up just fine, but the attempt to back up the data disk (the LVM/iSCSI one) wouldn't work because the image on the backup share kept getting bigger and bigger. As I said before, I no longer have this image because I went another route (precisely because this was not working for me). I doubt this problem is unique to me - I am betting that if you set up a VM with a raw disk backed by LVM/iSCSI, you will see the same issue. If you don't wish to pursue it, that is your choice, of course (while this is not affecting me at the moment, I wouldn't like to see a serious issue like this left for someone else to trip over).
 
Sigh. Let me try again. I originally had the qcow2 image, which was the root disk for this ubuntu VM. I also had a raw image, which was backed by the LVM/iscsi on the openfiler NAS.

You should not store a raw image on LVM/iSCSI. A raw image can only be stored on a filesystem, similar to qcow2. If you use LVM/iSCSI, you should use the block device directly. So is this just a misunderstanding, or did you really put a filesystem on the LVM/iSCSI volume (which makes no sense) and then store raw image files on it?

It worked fine. However, when I tried to backup the VM with vzdump, while the qcow2 disk was backed up just fine, the attempt to back up the data disk (the LVM/iscsi one) wouldn't work because the image on the backup share kept getting bigger and bigger. As I said before, I no longer have this image because I went another route (e.g. because this was not working for me.) I doubt this was a unique problem to me - I am betting if you set up a VM with a raw disk backed by an LVM/iscsi, you will see the same issue. If you don't wish to pursue it, that is your choice, of course (while this is not affecting me at the moment, I wouldn't like to see a serious issue like this left for someone else to trip over.)

In any case, if you cannot provide the VM config in question, it's quite impossible to see what you really did and what could cause the issue. And yes, we do vzdump backups of raw images with more than 100 GB of disk, so this works reliably here and should also work elsewhere.
 
I think we have some confusion here (I may not have been clear). I did exactly what this document says to do:

http://pve.proxmox.com/wiki/Storage_Model

After creating the LVM group in the GUI, the drop-down menu asks what you will use it for, and I chose "virtual disks". I then created a raw disk image (the only allowable type in that context) and gave that to the VM.
 
Just built exactly the same here in our labs (with openfiler 2.3).

KVM guest with one local disk (OS boot) and one 32 GB LVM volume on iSCSI (4 GB used). Backup to an NFS server; the backup file is 15 GB uncompressed, the expected size.
 
I assume the 15GB is 11GB for the local disk and 4GB for the iSCSI-backed disk? If so, that's weird. I'm going to see if I can repro it again here (it will take some doing, since I no longer have space available on the openfiler...)
 
Okay, now I am totally confused. I plugged a 160GB USB HD into the openfiler, created an iSCSI volume on it, and exported it. On the Proxmox 1.7 host node, I followed the exact same steps as in the "Proxmox storage model" howto. I then created a 32GB virtio drive on the LVM/iSCSI VG and ran:

vzdump --dumpdir /mnt/pve/backup -snapshot 102
INFO: starting new backup job: vzdump --dumpdir /mnt/pve/backup -snapshot 102
INFO: Starting Backup of VM 102 (qemu)
INFO: running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: Logical volume "vzsnap-proxmox-0" created
INFO: Logical volume "vzsnap-proxmox-0" created
INFO: resume vm
INFO: vm is online again after 2 seconds
INFO: creating archive '/mnt/pve/backup/vzdump-qemu-102-2011_02_08-11_09_22.tar'
INFO: adding '/mnt/pve/backup/vzdump-qemu-102-2011_02_08-11_09_22.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/mnt/vzsnap0/images/102/vm-102-disk-2.qcow2' to archive ('vm-disk-virtio0.qcow2')
INFO: adding '/dev/kvm-storage2/vzsnap-proxmox-0' to archive ('vm-disk-virtio1.raw')

INFO: Total bytes written: 48875728896 (13.15 MiB/s)
INFO: archive file size: 45.52GB
INFO: Logical volume "vzsnap-proxmox-0" successfully removed
INFO: Logical volume "vzsnap-proxmox-0" successfully removed
INFO: Finished Backup of VM 102 (00:59:10)
INFO: Backup job finished successfuly

Here is what resulted (as before, save was done to an NFS share on the same openfiler):

-rw-rw-rw-+ 1 root 96 46G Feb 8 12:08 vzdump-qemu-102-2011_02_08-11_09_22.tar

Yes, 46GB. Doing a 'tar tvf' on the tarball yields:

-rw-r--r-- root/root 34359738368 2011-02-08 11:09 vm-disk-virtio1.raw

Looking at the config, the file in question is here:

virtio1: kvm-storage2:vm-102-disk-1

so:

proxmox:/etc/qemu-server# ls -lh /dev/mapper/kvm--storage2-vm--102--disk--1
brw-rw---- 1 root 6 254, 3 Feb 8 11:07 /dev/mapper/kvm--storage2-vm--102--disk--1

i.e. it is a device, not an actual file. I assume this is why this storage backing mode only allows raw as opposed to qcow2 or whatever - unlike local storage, which has an actual pathname (including the raw vs qcow2 suffix), this type is backed by a "device". We've got to be doing something different here, no?

I just had a thought: I dumped out several blocks from the device (/dev/vdb) that the guest sees. Random data. If that's the cause, I think the mystery is solved (on my end anyway): vzdump doesn't know or care about filesystems; it does a block-level "is this block zero" check, which will be false for almost every block on the physical device - unless someone has written zeroes over the entire physical device on the iSCSI target (the openfiler appliance) beforehand, no? What I don't understand is why you didn't see this :(
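The theory is easy to check with plain files standing in for the LV (a sketch, not vzdump itself):

```shell
# Two 32 MiB "devices": one carrying stale random data left on the
# target, one zero-filled. The same zero-block check (approximated here
# by dd's conv=sparse) skips nothing on the dirty one.
dd if=/dev/urandom of=stale.img bs=1M count=32 2>/dev/null
dd if=/dev/zero    of=clean.img bs=1M count=32 2>/dev/null
dd if=stale.img of=stale.out bs=1M conv=sparse 2>/dev/null
dd if=clean.img of=clean.out bs=1M conv=sparse 2>/dev/null
du -k stale.out clean.out   # stale copy allocates ~32 MiB, clean copy ~0
```

If this is what vzdump is doing, the backup size tracks the stale contents of the underlying device, not the guest's filesystem usage - which would match the 46GB tarball above.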
 
Same problem here:

pveversion -v

pve-manager: 1.7-11 (pve-manager/1.7/5470)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.7-30
pve-kernel-2.6.32-4-pve: 2.6.32-30
qemu-server: 1.1-28
pve-firmware: 1.0-10
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-10
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.13.0-3
ksm-control-daemon: 1.0-4


/etc/qemu-server/101.conf

name: Windows2003R2
bootdisk: virtio0
ostype: w2k3
memory: 1024
onboot: 1
sockets: 1
cores: 4
boot: c
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
vlan0: virtio=
virtio0: vol1:vm-101-disk-1,cache=none


The VM is using a 100GB RAW hard disk.
The hard disk resides on a 1 TB LVM over iSCSI.

Resolved:
Proxmox was installed on an 80 GB SATA hard disk, and the VM hard disk needed 100 GB.
Reinstalled using a 640 GB SATA disk, and the backups started to work.
 
You should not store a raw image on a LVM/iSCSI. A raw image can only stored on a filesystem, similar to qcow2. if you use LVM/iSCSI you should use directly the block device. So is this just a misunderstanding or did you really put a file system on the LVM/iSCSI volume (which makes no sense) and then you store raw image files?

I don't understand why he can't use a raw image for a VM on shared LVM storage over iSCSI.
I am doing it that way and everything seems OK.
Please explain it to me.
 
I am doing exactly what the proxmox wiki says to do - if that makes no sense, the docs should be fixed :(
 
Well, as I said, by the time I posted the config, I had removed the problematic device. As for "and now you talk about LVM": I was clear (I thought) from the very beginning that I had a qcow2 main disk and an iSCSI-backed LVM disk (again, exactly as described in the wiki entry).
 
Guys, I'm a little confused (and frustrated). I feel like I was extremely clear about what I was doing, but I got scolded for doing something unsupported and silly, despite the fact that I was going straight off the Proxmox storage model wiki entry. If that concept (storing virtual disks on an iSCSI-backed LVM group) is not a good idea, I'd appreciate an acknowledgment that the wiki is wrong, rather than being scolded as an idiot :(
 
No one called anyone an idiot. But just writing "I configured the system as described in the wiki" is not specific enough. It is much easier if you ask clear questions - ideally one question per thread - and use simple sentences and words, as there are a lot of non-native speakers here. It is also never a good idea to tell us that we produce wrong documentation, or to bet against the proven functionality of our software. Everyone here tries to help you in their free time; no one is paid for helping you.

If you find an error in the documentation, fix it. It's just a wiki.

If you find a bug in the software, describe how to reproduce it.
 
I assume most of the difficulty here has been language-related. I do appreciate the Proxmox people trying to help. The scenario I used is from here:

http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing

I did exactly as it says: create an iSCSI target on the NAS; add that iSCSI target in the Storage menu in Proxmox; then add an LVM group in the Storage menu, using the iSCSI target just created. We now have some storage. Create a disk for a KVM guest and, for its backing storage, specify the LVM group we just created. It will only allow a raw disk (for reasons I now understand). Even if the newly created disk is completely pristine, backing up the KVM will use a huge amount of storage for the LVM/iSCSI disk, because the underlying physical disk has random data on it. Does this make more sense?
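If that explanation is right, one workaround would be zeroing the new volume once before handing it to the guest, so the zero-block check can do its job. A sketch, with a small file standing in for the LV (the device path is the one from this thread):

```shell
# A 16 MiB file stands in for the freshly created LV, which inherits
# whatever stale data sits on the iSCSI target.
dd if=/dev/urandom of=lv.img bs=1M count=16 2>/dev/null
# Zero it once before first use. On the host this would be something
# like: dd if=/dev/zero of=/dev/kvm-storage2/vm-102-disk-1 bs=1M
dd if=/dev/zero of=lv.img bs=1M count=16 conv=notrunc 2>/dev/null
# A zero-skipping copy (vzdump's check, approximated by conv=sparse)
# now allocates next to nothing:
dd if=lv.img of=backup.img bs=1M conv=sparse 2>/dev/null
du -k backup.img
```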
 
