dataset "rpool/ROOT/pve-1" wiederherstellen: wie?

arnaud056

Hello everyone,

due to a mistake on my part, I broke my Proxmox. :(
Since, based on this post https://forum.proxmox.com/threads/best-practice-for-proxmox-self-backup.38382/#post-189673, I take snapshots of the VM datasets as well as of "rpool/ROOT/pve-1", I thought I could restore the server quickly.
Restoring the VM datasets is no problem (see below: VM114 recovered).
The tricky part is Proxmox itself, because the dataset "rpool/ROOT/pve-1" is mounted on "/".
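For reference, the backups are taken along the lines of the linked post, roughly like this (a sketch only; the NAS hostname is a placeholder, the target dataset is the one visible in the listing below):
# zfs snapshot -r rpool@backup-complet
# zfs send -Rpv rpool@backup-complet | ssh root@nas.example.tld zfs recv -vF pool/rpool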

The server's setup:
- 2x SSD in ZFS RAID1, 240 GB each. No other disks.
Just freshly reinstalled.
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     11.7G   203G   104K  /rpool
rpool/ROOT                1.22G   203G    96K  /rpool/ROOT
rpool/ROOT/pve-1          1.22G   203G  1.22G  /
rpool/data                1.99G   203G    96K  /rpool/data
rpool/data/vm-114-disk-0  1.99G   203G  1.99G  -
rpool/swap                8.50G   212G    56K  -


Available snapshots on the NAS:
# zfs list -t snap | grep 'backup-complet'
pool/rpool@backup-complet
pool/rpool/ROOT@backup-complet
pool/rpool/ROOT/pve-1@backup-complet
pool/rpool/data@backup-complet
pool/rpool/data/vm-111-disk-0@backup-complet
pool/rpool/data/vm-113-disk-0@backup-complet
pool/rpool/data/vm-114-disk-0@backup-complet
pool/rpool/swap@backup-complet


ZFS is still fairly new to me and I am not yet very familiar with it, so please tell me whether the following
classic "strategy" I have in mind is on the right track (a rough command sketch follows the list):
- boot the machine with a live system that supports ZFS (which one? According to its documentation, SystemRescueCD cannot do it)
- import the "rpool"
- establish an SSH connection to the NAS
- from the NAS: "zfs send pool/rpool/ROOT/pve-1@backup-complet | ssh root@livesystem zfs recv -R rpool/ROOT/pve-1"
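Spelled out, the plan above would look roughly like this (untested; "livesystem" stands for whatever address the booted live environment gets, and I am not sure the receive onto the freshly installed "/" dataset works this directly):
## on the live system:
# zpool import -f rpool
## from the NAS:
# zfs send -pv pool/rpool/ROOT/pve-1@backup-complet | ssh root@livesystem zfs recv -F rpool/ROOT/pve-1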

=> is the approach above correct?
If not, what would be the right way?

Thanks in advance & regards,
Arnaud
 
Hello Achim,
thanks for the link: it is worth its weight in gold! I think I will manage with it and will report back as soon as everything is done.

Yes (of course!), it is my production Proxmox...
Luckily I had backed up the VMs and taken snapshots (belt and braces) as soon as I noticed that things might go wrong, while the machine was still running.
I then loaded the VMs onto the test Proxmox and used it in production for the transition period.

Actually, I had wanted to practise this on the test machine first...

Regards, Arnaud
 
Hello Achim,
thanks to your link everything worked out perfectly: pve-1 is back, Proxmox is running, nothing was lost, etc. :)

Here are my notes on the procedure (in English, because they are copied from my wiki and because they might also be useful for non-German speakers):

Configuration:
Proxmox 5, running on only 2 mirrored SSDs of 240 GB each (ZFS RAID1 pool).
In this case Proxmox runs on the dataset rpool/ROOT/pve-1 and the VMs are on datasets under rpool/data.
Restoring Proxmox means restoring “pve-1” and restoring the VMs.

Preparing tasks:
  • stop the VMs
  • if you still can, take a recursive snapshot of the entire pool “rpool” to capture the current state of the OS and of the VMs, and send it e.g. to a FreeNAS if a USB disk does not offer enough storage capacity. From the Proxmox:
    # zfs snapshot -r rpool@complete
    # zfs send -Rpv rpool@complete | ssh root@FreeNAS.domain.tld zfs recv -vF pool/backup/Proxmox
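  • to double-check on the NAS that everything arrived (a quick sanity check, using the target dataset from the command above):
    root@FreeNAS $ zfs list -t snapshot -r pool/backup/Proxmox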
Restore:

Starting point:
  • A freshly installed new Proxmox on ZFS RAID1
  • The Proxmox installation USB stick at hand.
The Proxmox OS and the VMs can be restored independently of each other.
As the restore is done from a USB device containing the snapshots, I think the easiest way is to restore only the OS first and the VMs afterwards, once the system is running again. In that case a simple USB stick is sufficient.

Step 1:
Getting the snapshot of rpool/ROOT/pve-1 onto the USB stick:
  • plug the stick into the FreeNAS, create a “restore” pool on it and send the snapshot to it:
  • root@FreeNAS $ zfs send -pv pool/backup/Proxmox/rpool/ROOT/pve-1@complete | zfs recv restore/pve-1
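  • For completeness: creating the “restore” pool on the stick (done before the send above) is a plain zpool create. A sketch, where /dev/da1 is only an example device name; check which device the stick actually shows up as on your FreeNAS:
    root@FreeNAS $ zpool create restore /dev/da1 ### example device name, verify before running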
Step 2:
  • Plug in the restore USB stick.
  • Then start a normal installation of Proxmox with the installation stick.
  • When the screen with the terms of use is displayed, press Ctrl-Alt-F1 to switch to the shell and press Ctrl-C to stop the installer.
    From this state, there is enough of a live OS to manage ZFS. This trick is magic, isn't it?
Important: the keyboard has the US layout!
  • $ zpool import
    shows the pool recreated during the new install and the restore pool from the USB stick.

  • $ zpool import -f rpool

    $ zfs list

    shows the datasets created during the fresh installation of Proxmox, present on the RAID1 pool.
Step 3:
  • The next step is to free up the dataset name “rpool/ROOT/pve-1” and the mountpoint “/” for the data to be restored:
    $ zfs rename rpool/ROOT/pve-1 rpool/ROOT/pve-2
    $ zfs get mountpoint rpool/ROOT/pve-2
    NAME              PROPERTY    VALUE  SOURCE
    rpool/ROOT/pve-2  mountpoint  /      local
    ### this confirms that rpool/ROOT/pve-2 is still mounted on "/"
    $ zfs set mountpoint=/rpool/ROOT/pve-2 rpool/ROOT/pve-2 ### or any other mountpoint you want
    $ zfs get mountpoint rpool/ROOT/pve-2
    NAME              PROPERTY    VALUE              SOURCE
    rpool/ROOT/pve-2  mountpoint  /rpool/ROOT/pve-2  local
    ### => OK
  • import the pool “restore”:
    $ zpool import restore
  • Have a look at the datasets and check that the snapshot for the restore is present:
    $ zfs list
    $ zfs list -t snap
  • Now we copy the data from the “restore” pool into a newly created rpool/ROOT/pve-1 and set its mountpoint to “/”:
    $ zfs send -pv restore/pve-1@complete | zfs recv -dvF rpool/ROOT ### -d drops the "restore" pool name, so the stream lands in rpool/ROOT/pve-1


    The transfer of data should be visible.

  • When this is over:
    $ zfs set mountpoint=/ rpool/ROOT/pve-1 ### it is possible that "/" is already set because Proxmox has already done the mounting automatically
    $ zfs get mountpoint rpool/ROOT/pve-1 ### will confirm it
  • Export the restore pool and remove the stick:
    $ zpool export restore
  • Have a look and reboot:
    $ zfs list
    $ exit
Step 4:
I had some minor issues at the reboot:
  • device (= the “old” dataset for pve-1) not found, but the boot process did not stop there. In case of problems, use the rescue boot function of the installation USB stick.
  • zfs: the first boot stops because the import of the pool has to be forced by hand (“-f”) the first time, since it was last in use on another system (= the temporary OS used for the restore); see the sketch after this list.
  • nfs: NFS was not working and there were some error messages during the boot. Another reboot solved it.
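    A minimal sketch of that forced import, assuming the failed boot drops you into an initramfs shell where the ZFS tools are available (details may differ):
    # zpool import -N -f rpool ### -N: import without mounting, the boot scripts handle the mounts
    # exit ### leave the shell so the boot can continue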
Once the OS is running:
# update-grub2
and reboot to get rid of the error messages at boot.

Restoring the VMs
Restore the disks of the VMs:
From the FreeNAS:

# zfs send -pv pool/backup/Proxmox/rpool/data/vm-100-disk-0@complete | ssh root@proxmox.domain.tld zfs recv rpool/data/vm-100-disk-0
and so on…
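If several VM disks have to be restored, a small Bourne-shell loop on the FreeNAS saves typing (a sketch only: the disk names below are the ones from my snapshot list, adapt them and the snapshot name to yours; run sh first if your FreeNAS root shell is csh):
for ds in vm-111-disk-0 vm-113-disk-0 vm-114-disk-0; do
  zfs send -pv pool/backup/Proxmox/rpool/data/${ds}@complete | \
    ssh root@proxmox.domain.tld zfs recv rpool/data/${ds}
done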

Regards, Arnaud
 
