Restore container from snapshot

AxelTwin

Hi everybody,
I am using the pve-zsync tool to back up my VMs/containers. Since the OVH datacenter burnt down during the night, all of my VMs/containers are gone and I only have access to my backup server.
Snapshots have been made using:
pve-zsync create --source 10.2.2.42:105 --name imap-daily --maxsnap 7 --dest tank/pve-zsync/Daily
The original dataset + snapshots of each VM/container are stored on that server.
Is there a quicker method than
zfs send <pool>/[<path>/]vm-<VMID>-disk-<number>@<last_snapshot> | zfs receive <pool>/<path>/vm-<VMID>-disk-<number>
to get those VMs/containers back up and running, as I have a bunch of them to recover?
Any chance I can import the VM/container config files to /etc/pve/nodes/...etc... and do a zfs rollback from the latest snapshot?
Thanks for your help
 
Do you want to start the VMs on your zsync target server? Then you just need to copy the config in place, make sure you can see the disks with pvesm, and potentially adapt the storage paths in the config. If you want to recover on another server, then you first need to transfer the datasets (with zfs send) and the config (e.g., with scp) to that server, as described in https://pve.proxmox.com/wiki/PVE-zsync#Recovering_an_VM
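
Roughly, for the target-server case, it would look something like this (a sketch only; the storage name, node name, and the path where your config backups live are placeholders you need to adjust):

Code:
# check that the received dataset shows up on a configured storage
pvesm list <storage>

# copy the guest config into place (container configs go to .../lxc/ instead of .../qemu-server/)
cp /path/to/config-backup/<VMID>.conf /etc/pve/nodes/<node>/qemu-server/<VMID>.conf

# then edit the config so the disk lines point at the storage that actually
# holds the dataset, e.g. <storage>:vm-<VMID>-disk-0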
 
Thanks for your reply, I will start them on the backup server as an emergency measure.
It looks like I cannot write to the config file location. Is it because the cluster is broken?
Should I destroy it?

Code:
root@proxmox-3:/# cp /rpool/pve-config/proxmox-1/pve/qemu-server/212.conf /etc/pve/nodes/proxmox-3/qemu-server/
cp: cannot create regular file '/etc/pve/nodes/proxmox-3/qemu-server/212.conf': Permission denied
root@proxmox-3:/# cp /rpool/pve-config/proxmox-1/pve/qemu-server/212.conf /etc/pve/qemu-server/
cp: cannot create regular file '/etc/pve/qemu-server/212.conf': Permission denied

Also, creating a VM/container is not permitted. Not sure how to stop quorum properly.
 
pvecm status should give you an idea whether the cluster is quorate.
 
The cluster is down for sure, as the other servers are impacted by the OVH outage.
I am not sure how to properly stop the cluster and gain access to all features on the remaining server.
I don't want to destroy the cluster, as I don't know yet if the other servers are definitely gone.

Code:
root@proxmox-3:/# pvecm status
Cluster information
-------------------
Name:             overlaps
Config Version:   25
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Mar 10 12:20:49 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000003
Ring ID:          3.e312
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 xxx.xxx.xxx.xxx (local)

Is it safe to set the expected votes to 1 and set it back to 3 when the other servers are back online?
 
Only if you can ensure that the other servers are down and don't come back up automatically; otherwise you risk a split brain. If you can ensure that, you can then start one of the others (it will sync up with the already online cluster), and once those first two nodes are synced, boot the third (which will then sync up to the existing quorate cluster partition).

If the other two come online but can't reach the third node, both partitions will think they are the quorate majority and are authorized to change things (the two-node partition because it is the majority, the third node because you told it that it only needs its own vote to be quorate).
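
If you can guarantee that, temporarily lowering the expected votes on the surviving node would look roughly like this (a sketch; double-check with man pvecm before running it):

Code:
# on the surviving node, only while the other two nodes are guaranteed offline
pvecm expected 1    # one vote is now enough for quorum, /etc/pve becomes writable again

# later, once the other nodes are back online and synced
pvecm expected 3    # restore the original expected vote count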
 
Thanks for your help Fabian, I just learnt that 1 of the 3 nodes has burnt, so I will go for the first solution.
 
Just to make sure before making a mistake, as I have no backup of the backup:

On my backup server I have, for example, these snapshots of LXC 127:

Code:
root@proxmox-3:~# zfs list | grep 127
rpool/pve-zsync/subvol-127-disk-0  1.30T  3.32T     1.26T  /rpool/pve-zsync/subvol-127-disk-0

root@proxmox-3:~# zfs list -t snapshot | grep 127
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-01-25_19:00:01   804M      -     1.23T  -
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-01-26_19:01:04   512M      -     1.23T  -
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-01-27_19:01:21   520M      -     1.23T  -
...
more daily backups
...
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-03-05_19:01:01   662M      -     1.26T  -
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-03-08_19:00:01   731M      -     1.26T  -
rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-03-09_19:02:50  1.03M      -     1.26T  -

If I import the config file 127.conf from the dead server to this backup server and boot it, will the container start from the last backup state (2021-03-09_19:02:50)?
 
Yes, unless you modified the dataset on the target node after the last sync.
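
If the dataset was changed after the last sync, rolling it back to the latest snapshot first should work (a sketch; verify the snapshot name against the zfs list output above):

Code:
# discard any local changes made after the last sync and return to the latest snapshot
zfs rollback rpool/pve-zsync/subvol-127-disk-0@rep_PveZsyncSnap_2021-03-09_19:02:50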
 