Restore from backup

nick

Hi All,

The hard disk of one slave in my PVE cluster has just crashed, so the data on it can't be recovered.

I have backups of some machines on the master (I use rsync to copy the backup files). Can I restore them on the master? And how can I restore a qemu machine if I manage to recover its qcow2 file and conf file?

Thank you!
 

You do not use vzdump for backups? You should change that.

Manual recovery:

  • create a new KVM VM
  • adapt /etc/qemu-server/VMID.conf with the values from the backup
  • move the disk image file to the right location (see the sketch below)
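Roughly something like this (just an example - VMID 120, the file names and the /var/lib/vz/images path are only placeholders, adapt them to your setup):

Code:
# 1. create an empty VM 120 in the web interface, then replace its config
#    with the one saved from the dead node
cp /backup/109.conf /etc/qemu-server/120.conf
# 2. edit 120.conf so the disk entry points at the new image location
# 3. move the recovered disk image into place and make both files readable
mkdir -p /var/lib/vz/images/120
cp /backup/vm-109-disk.qcow2 /var/lib/vz/images/120/
chmod 644 /etc/qemu-server/120.conf /var/lib/vz/images/120/vm-109-disk.qcow2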
 
I have two situations: 3 machines have backup files created with vzdump (the backup utility) and another 2 have no backup, but I'll try to recover their files (qcow2 and conf) from the crashed disk!

For the 3 that have backup files created last Sunday, can I restore them on the master or on the other slave?
 
Hi All,

I'm trying to recover some data by mounting the disk in another Linux machine (openSUSE), but I can't mount the LVM partition.

Does anyone know how I can mount the disk so I can try reading the data from it?

The disk is recognized as SCSI...
/dev/md126
/dev/md127

Please... I need some help here.

-----------

I'm trying another distribution now; I can mount the 512 MB ext3 partition, but for the second partition, flagged as LVM, I don't know what filesystem it is...
Every time I try to mount the partition the system asks me to specify the filesystem...

What do I need to do here?
 

You are accessing an HDD that has Proxmox VE installed on it from another Linux system?
If yes, you need LVM2 installed to access the data.
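For example, on the rescue system (the package manager depends on the distribution; 'pve' is the default VG name created by the Proxmox VE installer):

Code:
# install the LVM2 tools (zypper on openSUSE, yum on Fedora, apt-get on Debian)
zypper install lvm2
# scan for physical volumes and volume groups on the attached disk
pvscan
vgscan
# activate the volume group so its logical volumes show up under /dev/pve/
vgchange -a y pve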
 
I see now...

I'll try to install PVE on the new hard disk, and after that I will mount the old broken hard disk and see what I can recover.

Anyway, I'm now trying to restore some machines from backup, but I don't understand why it isn't working. I'm trying to restore machine 109 into a new machine, number 120. I give the command:

vzdump --restore vzdump-109.tgz 120

and the system returns
Code:
Unknown option: restore
usage: /usr/sbin/vzdump OPTIONS [--all | VMID]

        --exclude VMID          exclude VMID (assumes --all)
        --exclude-path REGEX    exclude certain files/directories
        --stdexcludes           exclude temorary files and logs

        --compress              compress dump file (gzip)
        --dumpdir DIR           store resulting files in DIR
        --maxfiles N            maximal number of backup files per VM
        --script FILENAME       execute hook script
        --storage STORAGE_ID    store resulting files to STORAGE_ID (PVE only)
        --tmpdir DIR            store temporary files in DIR

        --mailto EMAIL          send notification mail to EMAIL.
        --quiet                 be quiet.
        --stop                  stop/start VM if running
        --suspend               suspend/resume VM when running
        --snapshot              use LVM snapshot when running
        --size MB               LVM snapshot size

        --node CID              only run on pve cluster node CID
        --lockwait MINUTES      maximal time to wait for the global lock
        --stopwait MINUTES      maximal time to wait until a VM is stopped
        --bwlimit KBPS          limit I/O bandwidth; KBytes per second

What am I doing wrong?
 
Damn... I see a lot has changed... the correct command is "qmrestore", because it is a qemu machine!
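For the record, the call should then look something like this (check qmrestore --help for the exact options):

Code:
qmrestore vzdump-109.tgz 120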

Anyway, I opened the backup menu and I see some errors! The backup jobs defined for one slave have disappeared (see attachment), and if I try to create a new backup job I receive an error:

Error: no backup storage defined - please create a backup storage first
 

Attachment: backup1.JPG
Anyway, I opened the backup menu and I see some errors! The backup jobs defined for one slave have disappeared (see attachment)

Can you please post the content of '/etc/cron.d/vzdump'?

and if I try to create a new backup job I receive an error:

Error: no backup storage defined - please create a backup storage first

Isn't that a good hint? Go to the Storage menu and create a storage for backups (content 'VZDump backups').
 
Can you please post the content of

'/etc/cron.d/vzdump'

It has disappeared from there as well... strange! I don't remember deleting these backup jobs! I'll recreate them...

Isn't that a good hint? Go to the Storage menu and create a storage for backups (content 'VZDump backups').

I have an extra disk mounted on the master at /backup. How do I create local storage on the slaves? I use an rsync script to move the files to the master's backup disk afterwards...
Or do I keep the option

Code:
root vzdump --quiet --node 3 --snapshot --compress --dumpdir /backup/SLAVE2/PROXY

and not use the --storage option (keeping --dumpdir instead)?

I want to keep a backup on the local server first and after that on the master's extra disk!
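For example, something like this (assuming I create a storage called 'local-backup' in the Storage menu - the name is just an example):

Code:
root vzdump --quiet --node 3 --snapshot --compress --storage local-backup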
 
You are accessing an HDD that has Proxmox VE installed on it from another Linux system?
If yes, you need LVM2 installed to access the data.

I put a new hard disk into my server and reinstalled PVE 1.4. Now the server is connected to my cluster and everything is OK.
My next step is to try to recover data from the old disk. So I put the disk into a USB enclosure and connected it to the server. I listed the disks with fdisk -l and found it as:

Code:
Disk /dev/sdb: 164.6 GB, 164696555520 bytes
255 heads, 63 sectors/track, 20023 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdfc337a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          66      524288   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              66       20023   160310427   8e  Linux LVM

Now I create 2 directories:

mkdir /mnt/sdb1 and mkdir /mnt/sdb2
and try to mount the partitions. For sdb1 it works, but for sdb2 I receive this message:

Code:
pve_slave_2:~# mount /dev/sdb2 /mnt/sdb2/
mount: unknown filesystem type 'lvm2pv'

What am I doing wrong? I'm connected to the same server, now running PVE 1.4...
 
Code:
pve_slave_2:~# mount /dev/sdb2 /mnt/sdb2/
mount: unknown filesystem type 'lvm2pv'

What am I doing wrong? I'm connected to the same server, now running PVE 1.4...

I suggest that you first read the LVM HOWTO - you need some basic knowledge about LVM before you can do such things. /dev/sdb2 does not contain a filesystem - instead it is an LVM PV. See

http://tldp.org/HOWTO/LVM-HOWTO/

The problem is that you now have a name conflict - the new installation uses the VG name 'pve' and so did the old one - that will not work.
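If you really need to access the old VG from the running system, one possibility (just a sketch - 'pve_old' is an arbitrary new name, and the UUID is whatever vgs reports for the group on the old disk) is to rename it by UUID first:

Code:
# show both volume groups with their UUIDs (both are called "pve")
vgs -o vg_name,vg_uuid
# rename the VG that lives on the old disk, addressing it by its UUID
vgrename <uuid-of-old-vg> pve_old
# now it can be activated and mounted without clashing with the running system
vgchange -a y pve_old
mount /dev/pve_old/data /mnt/data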
 
OK... I solved it and recovered all the machines! I was lucky... the broken disk allowed me to recover all the data. This is what I did:

1. I connected the broken disk to the reinstalled PVE 1.4 and found the name of the LVM volume group - it is "pve".
2. I took the disk out, put it into an external USB enclosure and downloaded a Fedora live CD.
3. After the Fedora live CD boots, open a terminal and switch to the root account with "su".
4. This is the key point: run the command
Code:
vgchange -a y pve
The answer needs to be:
Code:
 3 logical volume(s) in volume group "pve" now active
Then we are on the right track.
5. Now we list the volumes:

Code:
[root@localhost liveuser]# lvdisplay
 --- Logical volume ---
 LV Name                /dev/pve/swap
 VG Name                pve
 LV UUID                oPtebu-DzQG-ac51-jc9E-0TIC-CAYn-j4yVKN
 LV Write Access        read/write
 LV Status              available
 # open                 0
 LV Size                4.00 GB
 Current LE             1024
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:2

 --- Logical volume ---
 LV Name                /dev/pve/root
 VG Name                pve
 LV UUID                hVC33d-nNEM-dgGL-u5zw-1sPQ-MDV6-f1wQHi
 LV Write Access        read/write
 LV Status              available
 # open                 0
 LV Size                38.25 GB
 Current LE             9792
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:3

 --- Logical volume ---
 LV Name                /dev/pve/data
 VG Name                pve
 LV UUID                3A1ZqC-PZ9b-iALZ-aRRw-azv3-5l2A-RGCzBb
 LV Write Access        read/write
 LV Status              available
 # open                 0
 LV Size                106.64 GB
 Current LE             27299
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:4

6. Now make 2 directories where the volumes will be mounted:

Code:
mkdir /mnt/data
mkdir /mnt/root

7. Mount the volumes:

Code:
mount /dev/pve/data /mnt/data/
mount /dev/pve/root /mnt/root/

8. Now browse these folders and copy all the necessary files to another disk or a shared folder.

9. After this, go to the newly installed PVE and put the configs into /etc/qemu-server/ and the qcow2 files into /var/lib/qemu-server/vz/images (I hope I remember the path correctly :D)

After that give these files permission 644 (chmod 644 filename.qcow2 or .conf).
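A sketch of step 9 as commands (the file names and the exact source locations under /mnt/root and /mnt/data are only examples; depending on your storage setup the image directory may be /var/lib/vz/images/<VMID>/ instead - check yours):

Code:
# copy the recovered config and disk image to the new PVE host
cp /mnt/root/etc/qemu-server/109.conf /etc/qemu-server/
mkdir -p /var/lib/vz/images/109
cp /mnt/data/images/109/vm-109-disk-1.qcow2 /var/lib/vz/images/109/
# make the files readable
chmod 644 /etc/qemu-server/109.conf /var/lib/vz/images/109/vm-109-disk-1.qcow2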

Go into the PVE web UI and... voila... all machines are ready for action!

PS: I think this could become an article about how to recover data when something happens to the disk (it won't boot or has some other damage)!
 