Transferring VM using Backup via CLI. Looking for Assistance.

Windows7ge

Active Member
Sep 12, 2019
Quick background: I want to set up some UNIX-based virtualized tools at my work. I asked for permission and got it under the condition that the box not be connected to the company network/internet. That's fine.

What I'm looking to do is install Proxmox on a small spare PC in the office. Setting up a UNIX VM will require internet access, so I have an idea but could use some help filling in the blanks.

At home I have Proxmox servers and a CIFS/SMB backup share. I can back up VMs and access those backup files via SMB, so I can create the VM at home, make a backup, and put the backup on a thumb drive or something similar.
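For reference, the backup itself I'd make from the CLI at home with something along these lines (the VMID and storage name are just placeholders for my setup):
Code:
# back up VM 117 to the SMB-backed backup storage, compressed with zstd
vzdump 117 --storage backup-share --compress zstd --mode snapshot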

Here's where I need help. When I bring the backup file to the office, how do I access it once I plug the drive into the server? If the VM backup file fits on a thumb drive, how can I temporarily mount the drive? Will it mount itself? Where?
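My guess is a bare Proxmox host won't auto-mount a USB drive and I'd have to do it by hand, something like this (the device name is a guess, I'd check lsblk first):
Code:
lsblk                         # identify the thumb drive, e.g. /dev/sdb1
mkdir -p /mnt/usb             # temporary mount point
mount /dev/sdb1 /mnt/usb      # mount the drive
cp /mnt/usb/*.vma.zst /root/  # copy the backup onto the host
umount /mnt/usb               # unmount when done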

Additionally, I found this in the Proxmox manual:
Code:
USAGE: qmrestore <archive> <vmid> [OPTIONS]
  <archive>  <string>

             The backup file. You can pass '-' to read from standard input.

  <vmid>     <integer> (1 - N)

             The (unique) ID of the VM.

  -bwlimit   <number> (0 - N)

             Override i/o bandwidth limit (in KiB/s).

  -force     <boolean>

             Allow to overwrite existing VM.

  -live-restore <boolean>

             Start the VM immediately from the backup and restore in
             background. PBS only.

  -pool      <string>

             Add the VM to the specified pool.

  -storage   <string>

             Default storage.

  -unique    <boolean>

             Assign a unique random ethernet address.

Using this, will it be as simple as:
Code:
qmrestore name-of-backup-file.vma.zst 100
or does it need to be
Code:
qmrestore archive name-of-backup-file.vma.zst vmid 100
or
Code:
qmrestore -archive name-of-backup-file.vma.zst -vmid 100
etc, etc...

Just some gaps in my understanding if anybody can help fill them in. Thanks. :D
 
USAGE: qmrestore <archive> <vmid> [OPTIONS]
  <archive>  <string>   The backup file. You can pass '-' to read from standard input.
The backup file is the file name, not a keyword or option, so
Code:
qmrestore name-of-backup-file.vma.zst 100
This ^ would be the right approach.

I am surprised that your company is skittish about you having internet access from this box but is OK with you bringing in images/VMs of unknown provenance/cleanliness.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Maybe you know the answer to my new problem. Proxmox will not let me restore the VMs onto the new server:
Code:
root@pve:~# qmrestore vzdump-qemu-117-2022_10_22-16_26_51.vma.zst 100 --storage rpool
restore vma archive: zstd -q -d -c /root/vzdump-qemu-117-2022_10_22-16_26_51.vma.zst | vma extract -v -r /var/tmp/vzdumptmp68843.fifo - /var/tmp/vzdumptmp68843
CFG: size: 632 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 34359738368 devname: drive-scsi0
CTIME: Sat Oct 22 16:26:52 2022
error before or during data restore, some or all disks were not completely restored. VM 100 state is NOT cleaned up.
command 'set -o pipefail && zstd -q -d -c /root/vzdump-qemu-117-2022_10_22-16_26_51.vma.zst | vma extract -v -r /var/tmp/vzdumptmp68843.fifo - /var/tmp/vzdumptmp68843' failed: storage 'rpool' does not exist
On the home server the VM lived on a pool named "flash", but on the work server I'm just using rpool. No matter what I try ("--storage", "--pool"), it keeps saying rpool doesn't exist. I don't understand why, and Google's not immediately being helpful.
 
What does "pvesm status" show?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Code:
root@pve:~# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active      1885862016         9028864      1876833152    0.48%
local-zfs     zfspool     active      1876833272              96      1876833176    0.00%
Oh, do the --storage or --pool arguments not accept the friendly name? Googling around, I did see "local-zfs" mentioned in some posts, but the main ZFS pool here shows up as "rpool" in "zpool status".
 
You are using a PVE command, "qmrestore", that interacts with the PVE API and acts on PVE configuration objects. The name of your storage, as far as qmrestore is concerned, is "local-zfs". It's the job of the storage sub-interface internal to PVE to translate that to the actual ZFS infrastructure.
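So the restore should point at that storage ID, something like (reusing your filename from above):
Code:
qmrestore vzdump-qemu-117-2022_10_22-16_26_51.vma.zst 100 --storage local-zfs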


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
This looks to have worked for one of the two VMs. The other is having a hissy fit about...something...
Code:
progress 32% (read 43980685312 bytes, duration 93 sec)
_22-16_40_10.vma.zst : Decoding error (36) : Corrupted block detected
vma: restore failed - short vma extent (3167888 < 3801600)
/bin/bash: line 1: 235544 Exit 1                  zstd -q -d -c /root/vzdump-qemu-118-2022_10_22-16_40_10.vma.zst
     235545 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp235542.fifo - /var/tmp/vzdumptmp235542
temporary volume 'local-zfs:vm-101-disk-2' sucessfuly removed
temporary volume 'local-zfs:vm-101-disk-3' sucessfuly removed
temporary volume 'local-zfs:vm-101-disk-0' sucessfuly removed
temporary volume 'local-zfs:vm-101-disk-1' sucessfuly removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 101 state is NOT cleaned up.
command 'set -o pipefail && zstd -q -d -c /root/vzdump-qemu-118-2022_10_22-16_40_10.vma.zst | vma extract -v -r /var/tmp/vzdumptmp235542.fifo - /var/tmp/vzdumptmp235542' failed: exit code 133
I have a suspicion of what the problem is, but I'm running out of time today to work on this. The exported file from my home server was around 11.5 GiB, but after moving it to the work server it shows up as ~6.5 GiB, which tells me the entirety of the backup file didn't copy over.
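If that's the case, comparing the archives on both ends should confirm it; something like this on the work box (with the same checksum taken on the home server):
Code:
zstd -t vzdump-qemu-118-2022_10_22-16_40_10.vma.zst       # test the compressed archive for corruption
ls -l vzdump-qemu-118-2022_10_22-16_40_10.vma.zst         # compare the size against the original
sha256sum vzdump-qemu-118-2022_10_22-16_40_10.vma.zst     # hashes must match on both machines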
 
