vzdump mode failure - unable to detect lvm volume group

mstng_67

New Member
Feb 9, 2011
Greetings!

I am having a problem doing "live" snapshot-based backups using vzdump. When I attempt to do so, I receive the message listed in the title of this post. Subsequently, vzdump tries "suspend" mode instead. This causes unacceptable downtime.
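For reference, the backup job boils down to something like this (the VMID and target directory here are just examples of what I run):

Code:
vzdump --snapshot --dumpdir /mnt/pve/backups_share 114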

Here's some further information:

Code:
proxmox3:/var/log/vzdump# lvdisplay
  --- Logical volume ---
  LV Name                /dev/pve/swap
  VG Name                pve
  LV UUID                TobDm3-wUQA-D0Hz-mgEm-YILs-U0Iw-xeChRK
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                31.00 GB
  Current LE             7936
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/pve/root
  VG Name                pve
  LV UUID                3cPwBq-70fF-OaW4-FTtu-zBkW-1vjK-zLCjH7
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                96.00 GB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Name                /dev/pve/data
  VG Name                pve
  LV UUID                sI8ZR8-EOs3-HXXG-leJE-X4Hg-uIi0-sQuz2h
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                550.38 GB
  Current LE             140896
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2
Code:
proxmox3:/var/log/vzdump# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   95G  904M   89G   1% /
tmpfs                  16G     0   16G   0% /lib/init/rw
udev                   10M  588K  9.5M   6% /dev
tmpfs                  16G     0   16G   0% /dev/shm
/dev/mapper/pve-data  542G  265G  277G  49% /var/lib/vz
/dev/sda1             504M   31M  448M   7% /boot
192.168.255.194:/var/lib/vz/nfs_vm
                      542G  265G  277G  49% /mnt/pve/nfs_vm
192.168.255.129:/vm_backups
                      479G   92G  363G  21% /mnt/pve/backups_share
192.168.255.129:/vm_isos
                      479G   92G  363G  21% /mnt/pve/vm_isos
I have read several forum posts from folks with the same problem, all of which mention some kind of parsing bug in VZDump.pm. However, I have been unsuccessful in correcting the issue using the instructions and patches found in those threads. You should know that I have restored VZDump.pm back to the original version that ships with PM 1.7.

Does anyone have suggestions?
 
What do you get here:

Code:
pvdisplay
 
Do you store your VM disks on an NFS share? In that case LVM snapshots are not possible.
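Snapshot mode depends on being able to do roughly the following against the volume group that holds the guest's disk (a simplified sketch; the snapshot name, size and mount point are just examples) - an NFS mount has no volume group to snapshot:

Code:
lvcreate --size 1024M --snapshot --name vzsnap0 /dev/pve/data
mkdir -p /mnt/vzsnap0
mount /dev/pve/vzsnap0 /mnt/vzsnap0   # back up the guest files from the snapshot
umount /mnt/vzsnap0
lvremove -f /dev/pve/vzsnap0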
 
tom,

Thanks for the response. I was afraid of that. Yes, all of the VMs are on an NFS share so we can use the online migration feature; the idea is to reduce downtime. The trouble is that using online migration to reduce downtime means I cannot use the online backup (snapshot) capability, so the cost of increasing availability with one technology is decreasing it with another. Are there any other options for getting both? Yeah, I know... I want the washing machine AND the dryer!

Thanks so much in advance.
 
tom,

OK, I've been trying to get this working with iSCSI/network backing. I have followed the directions at http://pve.proxmox.com/wiki/Storage_Model. Here is a quick overview of my setup:

PM4 host: Proxmox 1.7 and iscsi-scst (also my iscsi target) (Node)
PM3 host: Proxmox 1.7 (Master)
PM2 host: Proxmox 1.7 (Node)
PM1 host: Proxmox 1.7 (Node)

I have been able to successfully shrink pve-data to make room for another logical volume, and I created a new LV to use as my "raw" iSCSI device (roughly the commands shown below). Once that was done, I accessed the web interface on PM3 (my Master) and followed the instructions from the URL above. Everything worked without a hitch. However, I have a new problem: creating a virtual machine on the LVM storage backed by the iSCSI store only allows the raw disk type, and we use vmdk. Also, I do not understand how to "move" VMs to this new storage, since it is not mounted according to the mount command and does not show up in df. How do I move my vmdk files to this new storage?
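For reference, the shrink and create steps were roughly the following (the sizes and the new LV name are from memory, so treat them as approximate):

Code:
umount /var/lib/vz
e2fsck -f /dev/pve/data
resize2fs /dev/pve/data 390G        # shrink the filesystem below the target first
lvreduce -L 400G /dev/pve/data      # then reduce the LV to the target size
resize2fs /dev/pve/data             # grow the filesystem back to fill the LV
mount /dev/pve/data /var/lib/vz
lvcreate -L 150G -n iscsi_disk pve  # new LV used as the raw iSCSI device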

There is one other problem that has surfaced: accessing the web interface for PM4 and clicking on the storage link causes the web interface to time out. It also causes the filesystem synchronization status to show "no sync" for quite a while. Any ideas?

Thanks again in advance.
 
Creating a virtual machine on the LVM storage backed by the iSCSI store only allows the raw disk type, and we use vmdk.
If you use block devices (LVM) there is no virtual disk format, as the guest uses the block device directly with nothing in between - this is also the fastest way. (We just display 'raw', which may be a bit misleading.)
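On LVM storage the guest config simply points at the logical volume, e.g. something like this (the storage and volume names are just an example):

Code:
ide0: iscsi_vm:vm-135-disk-1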

Also, I do not understand how to "move" VMs to this new storage, since it is not mounted according to the mount command and does not show up in df. How do I move my vmdk files to this new storage?

Do a backup with vzdump and a restore with qmrestore using the --storage option (see man qm).
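For example, something along these lines (the VMID, archive name and storage name are placeholders you would adapt):

Code:
vzdump --dumpdir /backups 114
qmrestore /backups/vzdump-qemu-114-<timestamp>.tgz 135 --storage iscsi_vm --unique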

There is one other problem that has surfaced: accessing the web interface for PM4 and clicking on the storage link causes the web interface to time out. It also causes the filesystem synchronization status to show "no sync" for quite a while. Any ideas?

All defined storages must be accessible from all cluster nodes - if not, you will get timeouts in the GUI.
 
Do a backup with vzdump and a restore with qmrestore using the --storage option (see man qm).
Here is the output when that is attempted:

Code:
proxmox3:~# qmrestore --unique --storage iscsi_vm /backups/vzdump-qemu-114-2011_04_07-13_49_31.tgz 135
INFO: restore QemuServer backup '/backups/vzdump-qemu-114-2011_04_07-13_49_31.tgz' using ID 135
INFO: extracting 'qemu-server.conf' from archive
INFO: extracting 'vm-disk-ide0.vmdk' from archive
INFO: unable to restore 'vm-disk-ide0.vmdk' to storage 'iscsi_vm'
INFO: storage type 'lvm' does not support format 'vmdk
INFO: tar: vm-disk-ide0.vmdk: Cannot write: Broken pipe
INFO: tar: 29767: Child returned status 255
INFO: tar: Error exit delayed from previous errors
INFO: starting cleanup
ERROR: restore QemuServer backup '/backups/vzdump-qemu-114-2011_04_07-13_49_31.tgz' failed - command 'tar xf '/backups/vzdump-qemu-114-2011_04_07-13_49_31.tgz' '--to-command=/usr/sbin/qmrestore --unique --storage iscsi_vm /backups/vzdump-qemu-114-2011_04_07-13_49_31.tgz --extract 135'' failed with exit code 2
Am I doing something wrong here?

Is there a "howto" for iSCSI with network backing on Proxmox other than the link you've already given me? I feel that I am in need of some additional reading material.

Is it possible to put an ext3 filesystem on the LVM volume and share out that directory by editing /etc/pve/storage.cfg directly?
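Something along these lines is what I have in mind (the device, mount point and storage name are only examples, and I am guessing at the storage.cfg syntax from the existing entries):

Code:
mkfs.ext3 /dev/pve/iscsi_disk
mkdir -p /var/lib/vz-iscsi
mount /dev/pve/iscsi_disk /var/lib/vz-iscsi
followed by a new directory entry in /etc/pve/storage.cfg:

Code:
dir: iscsi_dir
        path /var/lib/vz-iscsi
        content images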

All of your help is greatly appreciated!