Instant mount / start of VM from PBS backup

Lapointemar

Member
Oct 18, 2020
I can successfully mount a backup from PBS with the qemu-nbd -s command (I found that in another thread on this forum), then I can map the device with
ln -s /dev/nbd0 rpool/data/vm-100-disk-1
After that I can restore the config file and edit the config to point to the new drive.

I have booted a Windows VM successfully, but the question I have is: "do I risk breaking the backup by mounting it like that?"

And how can we speed up those NBD reads? Because it works a bit slowly (around 10 minutes to boot a Windows domain controller from backup).

It's cool because we can instantly boot multi-terabyte VMs to test backups, recover data, and do disaster recovery (e.g. ransomware), and then, I suppose, migrate the VM back to the original server without too much downtime for the customer.


Any ideas, all?

Anyway, very good product; it's still beta and I can't wait for the stable product :)
 
Just a quick and dirty note if someone needs the complete procedure to mount a drive from a backup (it's just information I've found in various places):

Code:
export PBS_REPOSITORY='username@pbs@x.x.x.x:datastoreName'
export PBS_PASSWORD='PasswordUsername'
export PBS_FINGERPRINT='Fingerprint'
qemu-nbd --connect=/dev/nbd0 -f raw -s --cache=writeback pbs:repository=$PBS_REPOSITORY,snapshot=vm_snapshot,archive=drive-scsi0.img.fidx
mkdir /var/lib/vz/images/VMID
ln -s /dev/nbd0 /var/lib/vz/images/VMID/vm-VMID-disk-0.raw
_________________________
Copy the configuration from the backup and edit it to point to the new drive (see the config-restore sketch below)
Start the VM from the CLI or web GUI
____________________________
Stop the VM & delete the config file
Delete the drive symlink and disconnect:
rm /var/lib/vz/images/VMID/vm-VMID-disk-0.raw
qemu-nbd -d /dev/nbd0
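
For the "copy the configuration" step, the VM config can be pulled straight out of the same snapshot with proxmox-backup-client. A sketch, assuming the config blob in the snapshot is named qemu-server.conf (check the snapshot's file list) and reusing the placeholders from above:

Code:
# restore the VM config from the backup snapshot (archive name assumed)
proxmox-backup-client restore vm_snapshot qemu-server.conf /etc/pve/qemu-server/VMID.conf
# then edit the disk line so it points at the symlinked device, e.g.:
#   scsi0: local:VMID/vm-VMID-disk-0.raw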


I hope someone from the dev team can let us know whether they think this could be added as an option in PBS, to start a VM from a backup :)
 
Just for completeness I'll put it here too: since proxmox-backup-client 0.9.1 you can map backups more easily with the 'map' and 'unmap' subcommands. Mapping directly into VMs will be discussed further on the bugtracker: https://bugzilla.proxmox.com/show_bug.cgi?id=3080
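
A minimal sketch of that mapping (snapshot path and archive name are illustrative; the PBS_* environment variables from the note above are assumed to be exported):

Code:
# map a disk archive from a snapshot to a local block device (prints e.g. /dev/loop0)
proxmox-backup-client map vm/100/2020-10-18T10:00:00Z drive-scsi0.img
# ...use the device, then release it:
proxmox-backup-client unmap /dev/loop0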
 
Thanks Stefan, I'm going to test the map/unmap commands then. The post is solved for me; I'll leave it open for some time in case someone needs to add something.

Regards,
 
This is entirely separate from storage backends; the data is directly streamed from PBS... not sure where Ceph would come into play here.
 
I was having a bit of a brainfart; the disk format is of course the same whatever storage backend was used in the first place, once backed up with PBS.

What packages are needed for this to work, btw? qemu-utils for qemu-nbd, but what package provides the pbs format for qemu-nbd?
 
I was having a bit of a brainfart; the disk format is of course the same whatever storage backend was used in the first place, once backed up with PBS.
The disk format for PBS backups is always the raw data, i.e. you get the data exactly as it was previously seen from within the guest (e.g. if you back up a VM on qcow2, what gets backed up is the data within the qcow2 image, not including any qcow2 metadata).

What packages are needed for this to work, btw? qemu-utils for qemu-nbd, but what package provides the pbs format for qemu-nbd?
PBS support is not upstream (and most likely never will be), so you need the PVE version of QEMU. You can get it from the PVE APT repository; just install the pve-qemu-kvm package (that should be possible without installing any other PVE component, so you can install this on other systems as well).
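
For anyone on plain Debian, a sketch of getting pve-qemu-kvm installed (Buster / PVE 6.x era; repository and key URLs as per the Proxmox docs, double-check for your release):

Code:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt install pve-qemu-kvm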
 
This is nice; it would be great to add something to the roadmap, I'm not sure what, lol.
To make it so that PBS can be used for disaster recovery in an easy way.
I think Veeam and Acronis have this kind of feature, where you can run up the backup (possibly read-only) as a remote site.
Sorry, I'm sort of thinking aloud here, and I'm sure you guys may already have a future plan.
 
After many tests I've finally written a small script around proxmox-backup-client and qemu-nbd that starts a VM from a backup. But regarding performance: other than adding a cache and special device, does someone have advice to boost read IOPS on the backup (ZFS datastore)?

For the record:
I've gone from around 13-16 minutes to start a DC/Exchange server (2016) down to 2-5 minutes with a special device, and the cache device boosts long runs / disk migrations after the VM has started (see the sketch below).
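
For reference, adding those to an existing ZFS pool looks roughly like this (hypothetical pool and device names; note a special vdev should be mirrored, since losing it loses the pool):

Code:
# metadata/small-block special vdev on mirrored SSDs
zpool add backuppool special mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2
# L2ARC read cache device
zpool add backuppool cache /dev/disk/by-id/ssd3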
 
just install the pve-qemu-kvm package (that should be possible without installing any other PVE component, so you can install this on other systems as well).
Adding the PVE repository also wants to replace these packages:
Code:
dmeventd/stable 2:1.02.155-pve4 amd64 [upgradable from: 2:1.02.155-3]
dmsetup/stable 2:1.02.155-pve4 amd64 [upgradable from: 2:1.02.155-3]
libdevmapper-event1.02.1/stable 2:1.02.155-pve4 amd64 [upgradable from: 2:1.02.155-3]
libdevmapper1.02.1/stable 2:1.02.155-pve4 amd64 [upgradable from: 2:1.02.155-3]
liblvm2cmd2.03/stable 2.03.02-pve4 amd64 [upgradable from: 2.03.02-3]
lvm2/stable 2.03.02-pve4 amd64 [upgradable from: 2.03.02-3]

And when trying to install this:
Code:
~# apt install pve-qemu-kvm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-image-4.19.0-5-amd64
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  ceph-common dbus-user-session dconf-gsettings-backend dconf-service fontconfig fontconfig-config
  fonts-dejavu-core glib-networking glib-networking-common glib-networking-services
  gsettings-desktop-schemas gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good
  gstreamer1.0-x i965-va-driver ibverbs-providers intel-media-va-driver libaa1 libaacs0 libacl1-dev
  libaom0 libasound2 libasound2-data libass9 libasyncns0 libattr1-dev libavc1394-0 libavcodec58
  libavfilter7 libavformat58 libavutil56 libbabeltrace1 libbdplus0 libbluray2 libboost-atomic1.67.0
  libboost-iostreams1.67.0 libboost-program-options1.67.0 libboost-regex1.67.0 libboost-system1.67.0
  libboost-thread1.67.0 libbs2b0 libc-dev-bin libc6-dev libcaca0 libcairo-gobject2 libcairo2
  libcdparanoia0 libcephfs2 libchromaprint1 libcodec2-0.8.1 libcroco3 libcrystalhd3 libdatrie1
  libdconf1 libdrm-amdgpu1 libdrm-common libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libdrm2 libdv4
  libdw1 libfftw3-double3 libflac8 libflite1 libfontconfig1 libfribidi0 libgdk-pixbuf2.0-0
  libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0
  libgl1 libgl1-mesa-dri libglapi-mesa libglusterfs-dev libglusterfs0 libglvnd0 libglx-mesa0 libglx0
  libgme0 libgomp1 libgoogle-perftools4 libgraphite2-3 libgsm1 libgstreamer-plugins-base1.0-0
  libgstreamer1.0-0 libgudev-1.0-0 libharfbuzz0b libibverbs1 libice6 libiec61883-0 libigdgmm5 libiscsi7
  libjack-jackd2-0 libjbig0 libjemalloc2 libjpeg62-turbo liblilv-0-0 libllvm7 libmp3lame0 libmpg123-0
  libmysofa0 libnl-3-200 libnl-route-3-200 libnorm1 libnspr4 libnss3 libnuma1 libogg0 libopenjp2-7
  libopenmpt0 libopus0 liborc-0.4-0 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libpciaccess0
  libpgm-5.2-0 libpixman-1-0 libpostproc55 libproxmox-backup-qemu0 libproxy1v5 libpulse0 libpython2.7
  librados2 libradosstriper1 libraw1394-11 librbd1 librdmacm1 librsvg2-2 librsvg2-common librubberband2
  libsamplerate0 libsdl1.2debian libsensors-config libsensors5 libserd-0-0 libshine3 libshout3 libsm6
  libsnappy1v5 libsndfile1 libsodium23 libsord-0-0 libsoup2.4-1 libsoxr0 libspeex1 libspice-server1
  libsratom-0-0 libssh-gcrypt-4 libswresample3 libswscale5 libtag1v5 libtag1v5-vanilla
  libtcmalloc-minimal4 libthai-data libthai0 libtheora0 libtiff5 libtirpc-common libtirpc3 libtwolame0
  libunwind8 libusbredirparser1 libv4l-0 libv4lconvert0 libva-drm2 libva-x11-2 libva2 libvdpau-va-gl1
  libvdpau1 libvidstab1.1 libvisual-0.4-0 libvorbis0a libvorbisenc2 libvorbisfile3 libvpx5 libwavpack1
  libwebp6 libwebpmux3 libx11-xcb1 libx264-155 libx265-165 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0
  libxcb-present0 libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxdamage1 libxfixes3 libxi6
  libxrender1 libxshmfence1 libxtst6 libxv1 libxvidcore4 libxxf86vm1 libzmq5 libzvbi-common libzvbi0
  linux-libc-dev manpages-dev mesa-va-drivers mesa-vdpau-drivers numactl python-asn1crypto
  python-cephfs python-certifi python-cffi-backend python-chardet python-cryptography python-enum34
  python-idna python-ipaddress python-openssl python-pkg-resources python-prettytable python-rados
  python-rbd python-requests python-six python-urllib3 python3-prettytable va-driver-all
  vdpau-driver-all x11-common
Suggested packages:
  ceph ceph-mds gvfs i965-va-driver-shaders libasound2-plugins alsa-utils libbluray-bdj glibc-doc
  firmware-crystalhd libdv-bin oss-compat libfftw3-bin libfftw3-dev libvisual-0.4-plugins
  gstreamer1.0-tools jackd2 opus-tools pulseaudio libraw1394-doc librsvg2-bin lm-sensors serdi sordi
  speex gstreamer1.0-plugins-ugly python-cryptography-doc python-cryptography-vectors python-enum34-doc
  python-openssl-doc python-openssl-dbg python-setuptools python-socks python-ntlm nvidia-vdpau-driver
  nvidia-legacy-340xx-vdpau-driver nvidia-legacy-304xx-vdpau-driver
The following packages will be REMOVED:
  qemu-utils
The following NEW packages will be installed:
  ceph-common dbus-user-session dconf-gsettings-backend dconf-service fontconfig fontconfig-config
  fonts-dejavu-core glib-networking glib-networking-common glib-networking-services
  gsettings-desktop-schemas gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good
  gstreamer1.0-x i965-va-driver ibverbs-providers intel-media-va-driver libaa1 libaacs0 libacl1-dev
  libaom0 libasound2 libasound2-data libass9 libasyncns0 libattr1-dev libavc1394-0 libavcodec58
  libavfilter7 libavformat58 libavutil56 libbabeltrace1 libbdplus0 libbluray2 libboost-atomic1.67.0
  libboost-iostreams1.67.0 libboost-program-options1.67.0 libboost-regex1.67.0 libboost-system1.67.0
  libboost-thread1.67.0 libbs2b0 libc-dev-bin libc6-dev libcaca0 libcairo-gobject2 libcairo2
  libcdparanoia0 libcephfs2 libchromaprint1 libcodec2-0.8.1 libcroco3 libcrystalhd3 libdatrie1
  libdconf1 libdrm-amdgpu1 libdrm-common libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libdrm2 libdv4
  libdw1 libfftw3-double3 libflac8 libflite1 libfontconfig1 libfribidi0 libgdk-pixbuf2.0-0
  libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0
  libgl1 libgl1-mesa-dri libglapi-mesa libglusterfs-dev libglusterfs0 libglvnd0 libglx-mesa0 libglx0
  libgme0 libgomp1 libgoogle-perftools4 libgraphite2-3 libgsm1 libgstreamer-plugins-base1.0-0
  libgstreamer1.0-0 libgudev-1.0-0 libharfbuzz0b libibverbs1 libice6 libiec61883-0 libigdgmm5 libiscsi7
  libjack-jackd2-0 libjbig0 libjemalloc2 libjpeg62-turbo liblilv-0-0 libllvm7 libmp3lame0 libmpg123-0
  libmysofa0 libnl-3-200 libnl-route-3-200 libnorm1 libnspr4 libnss3 libnuma1 libogg0 libopenjp2-7
  libopenmpt0 libopus0 liborc-0.4-0 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libpciaccess0
  libpgm-5.2-0 libpixman-1-0 libpostproc55 libproxmox-backup-qemu0 libproxy1v5 libpulse0 libpython2.7
  librados2 libradosstriper1 libraw1394-11 librbd1 librdmacm1 librsvg2-2 librsvg2-common librubberband2
  libsamplerate0 libsdl1.2debian libsensors-config libsensors5 libserd-0-0 libshine3 libshout3 libsm6
  libsnappy1v5 libsndfile1 libsodium23 libsord-0-0 libsoup2.4-1 libsoxr0 libspeex1 libspice-server1
  libsratom-0-0 libssh-gcrypt-4 libswresample3 libswscale5 libtag1v5 libtag1v5-vanilla
  libtcmalloc-minimal4 libthai-data libthai0 libtheora0 libtiff5 libtirpc-common libtirpc3 libtwolame0
  libunwind8 libusbredirparser1 libv4l-0 libv4lconvert0 libva-drm2 libva-x11-2 libva2 libvdpau-va-gl1
  libvdpau1 libvidstab1.1 libvisual-0.4-0 libvorbis0a libvorbisenc2 libvorbisfile3 libvpx5 libwavpack1
  libwebp6 libwebpmux3 libx11-xcb1 libx264-155 libx265-165 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0
  libxcb-present0 libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxdamage1 libxfixes3 libxi6
  libxrender1 libxshmfence1 libxtst6 libxv1 libxvidcore4 libxxf86vm1 libzmq5 libzvbi-common libzvbi0
  linux-libc-dev manpages-dev mesa-va-drivers mesa-vdpau-drivers numactl pve-qemu-kvm python-asn1crypto
  python-cephfs python-certifi python-cffi-backend python-chardet python-cryptography python-enum34
  python-idna python-ipaddress python-openssl python-pkg-resources python-prettytable python-rados
  python-rbd python-requests python-six python-urllib3 python3-prettytable va-driver-all
  vdpau-driver-all x11-common
0 upgraded, 235 newly installed, 1 to remove and 6 not upgraded.
Need to get 152 MB of archives.
After this operation, 835 MB of additional disk space will be used.

Would it be possible for you to release the NBD driver as a standalone package that can also be included in the PBS repo?
 
@Lapointemar, would you be so kind as to share your script? I'm starting to explore PBS as a potential component for a failover scenario.
Yes, but take into consideration that this script can wipe things and needs some love to shine. I'm going to remove the sensitive information and attach my draft here. It's a Python script; are you good with this language? Do you need some comments? Because right now it doesn't have any.
 
Adding the PVE repository also wants to replace these packages:
You don't have to if you don't want. They're not necessary for QEMU.
And when trying to install this:
Yes, those are all the libraries we link against, as well as some tools. I get the same output (bar the x11-common, for some reason) on a fresh Debian Buster. Your old qemu-utils will be uninstalled and replaced by the PVE-patched version.
Would it be possible for you to release the nbd driver as a standalone package that can also be included in the pbs repo?
We don't have a statically linked version of our QEMU package. So while yes, we could put it up there, it wouldn't change anything about the dependencies that need installing for it to work.

The "NBD driver" itself is a modification to QEMU source code, not a loadable library, so we can't publish just that, it has to come packaged with the full QEMU. You can of course always build yourself a more minimal QEMU binary with our patches applied, git is always open :)
 
@JamesT Hi JamesT,
This is the working Python script.
First, my environment: PBS is installed on a PVE server that is dedicated to backups. I mount my backups on the server itself, but it should work on any host that has PVE installed and can reach the backup server.

1: The script uses SSH to push commands to the local or remote server where you want to start the backup; this was the easiest way for me to get it working.
2: Don't use it in a production environment. It's a draft I'm giving you to help you play around with PBS and instant restore, and it doesn't have any of the safety checks you'd expect from a backup tool (e.g. if you type the restore_ID of an existing VM, it overwrites that VM and deletes it during the script's shutdown).
3: I know it's a poor-quality script, but you know what, it works for me. Feel free to update it and give back any upgrades to this script :)

I recommend (from memory) installing it in a venv:
"apt-get install -y python3-venv"
Make a folder for your project: "mkdir -p projectfoldervenv/project"
Move to the project folder: "cd projectfoldervenv"
Create the venv: "python3 -m venv ."
Load the venv: "source bin/activate"
nano project/main.py << put the script there and change the variables with _ in them to your settings.
EX: "HOST_USERNAME@pam@BACKUP_HOST:BACKUP_DATASTORE"
Move to the exec directory: "cd project"
Make the file executable: "chmod +x main.py"
Launch the script: "python main.py"

And for the moment it only requires the paramiko lib ("pip install paramiko" in the venv).
I hope this helps you a bit; just take care when using it not to wipe a production VM/datastore.
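
For context, the SSH part boils down to something like this minimal paramiko sketch (hypothetical host and credentials, not the actual attached script):

Code:
import paramiko

# connect to the PVE host that will run the restore commands
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("BACKUP_HOST", username="root", password="PASSWORD_HOST")

# push one shell command and read its output
stdin, stdout, stderr = client.exec_command("modprobe nbd")
print(stdout.read().decode(), stderr.read().decode())

client.close()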

Edit:_______
Oh, and when you instant-restore an LVM VM, you need to run lvchange -a n on the NBD LVM volume after recovery but before the script ends.
If you don't, /dev/nbdX stays locked, and the only way I've found to unlock it is to reboot the backup server. This happened only with LVM drives, at least in my experience (see the sketch below).
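
In other words, deactivate the guest's logical volumes on the NBD device before disconnecting it (the VG/LV names are whatever "lvs" shows for the mapped disk; hypothetical here):

Code:
lvchange -a n GUEST_VG/GUEST_LV   # deactivate first
qemu-nbd -d /dev/nbd0             # then disconnect cleanly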

Also, for VMs with many disks, sometimes you need to change the boot order in the Proxmox GUI, because /dev/nbd0 may map vm-xxx-disk-1 in place of disk 0.
Edit 2:
I forgot to say that you need to execute "modprobe nbd" once before starting the script (each time the host restarts).
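
To avoid doing that manually after every reboot, the module can be loaded at boot via the standard modules-load mechanism:

Code:
echo nbd > /etc/modules-load.d/nbd.conf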
 

Attachments

  • Script.txt
    6.1 KB
Thank you for the script, though I have not gotten it to work. Still running some tests.


Though I'm wondering if something like this could be implemented as a built-in feature in the client?
Sort of like "Veeam Instant Recovery".
I think this would be a "killer" feature that would bring even more users to the Proxmox platform.
 
Thank you for the script, though I have not gotten it to work. Still running some tests.


Though I'm wondering if something like this could be implemented as a built-in feature in the client?
Sort of like "Veeam Instant Recovery".
I think this would be a "killer" feature that would bring even more users to the Proxmox platform.
I have not used it since last December; maybe some API calls have changed (I know there have been a lot of API/client call changes). You can try executing the command lines one by one; probably one of them is not working (the commands sent over SSH).

Look for something with proxmox-backup-client as the command; note that you need to initialize the datastore variables first:

cmd_base = "export PBS_REPOSITORY='HOST_USERNAME@pam@BACKUP_HOST:BACKUP_DATASTORE';export PBS_PASSWORD='PASSWORD_HOST';export PBS_FINGERPRINT='BACKUP_HOST_FINGERPRINT';"

If I get some time, I'm going to try it and resend an updated version; it's probably only a syntax change on the command line :) Glad of your interest!
 
I have gotten somewhat further, but something weird is happening somewhere; I have tried running the commands etc. manually to rule things out.

I have basically found that I may need to create my own custom scripting for our use case, but the main thing I'm stuck on is this:


Code:
qemu-nbd --connect=/dev/nbd0 -f raw -s --cache=writeback pbs:repository=root@pam@mybackupserver:mydatastore,snapshot=vm/210/2020-10-31T23:01:23Z,archive=drive-virtio0.img.fidx
qemu-nbd: Failed to set NBD socket
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read

I'm getting this error when using the qemu-nbd command. I've tried finding some documentation for this command, specifically relating to the pbs:repository "module" or whatever to call it.

But I have not found anything useful. @Lapointemar, do you or any others have any hints here, either about the error or about where I can find some docs?
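
One thing worth checking for that "Failed to set NBD socket" error (just a guess, but it's also mentioned earlier in this thread): /dev/nbd0 only exists once the nbd kernel module is loaded:

Code:
lsmod | grep -w nbd || modprobe nbd
ls -l /dev/nbd0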
 
Just chiming in to say that if your goal is to access a block-device backup locally, you can now simply use the 'proxmox-backup-client map' command. See the man page for more details; this will map a block device as /dev/loopN for you. We're working on single-file restore and on mapping backups into VMs; on the devel list there is also a patch series to restore a VM while it is already booting from the backup image itself.
 
