Restoring VM error

Blisk

I am now trying to restore the VM because migrating it didn't work, and the restore fails for the same reason.
I really don't know what to do to transfer this VM to another Proxmox server.
I can't have identically named disks on both servers just for migration. So what can I do about a disk that doesn't exist under the same name on the second server?

root@pve:~# qmrestore /mnt/pve/mybackup/@Recently-Snapshot/GMT+01_2024-03-21_0100/dump/vzdump-qemu-100-2024_03_20-14_24_22.vma.zst 108
restore vma archive: zstd -q -d -c /mnt/pve/mybackup/@Recently-Snapshot/GMT+01_2024-03-21_0100/dump/vzdump-qemu-100-2024_03_20-14_24_22.vma.zst | vma extract -v -r /var/tmp/vzdumptmp493410.fifo - /var/tmp/vzdumptmp493410
CFG: size: 488 name: qemu-server.conf
DEV: dev_id=1 size: 128849018880 devname: drive-scsi0
DEV: dev_id=2 size: 2147483648000 devname: drive-scsi1
CTIME: Wed Mar 20 14:24:23 2024
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 108 state is NOT cleaned up.
command 'set -o pipefail && zstd -q -d -c /mnt/pve/mybackup/@Recently-Snapshot/GMT+01_2024-03-21_0100/dump/vzdump-qemu-100-2024_03_20-14_24_22.vma.zst | vma extract -v -r /var/tmp/vzdumptmp493410.fifo - /var/tmp/vzdumptmp493410' failed: no such volume group 'disk2tba'
 
Hi,
it looks like this storage is only available on one of the nodes. You can select a different storage when restoring.

If you have spare disks, you can also create a corresponding LVM storage on the other node, i.e. create a volume group disk2tba. Otherwise, you should tell Proxmox VE that the storage is not actually available on both nodes, by going to Datacenter > Storage > disk2tba > Edit and restricting the Nodes option.
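
As a rough sketch from the shell (untested here; /dev/sdX is a placeholder for an unused disk on the other node, and <node> for the node the storage actually lives on):

# create a matching volume group on the other node, using a spare disk
pvcreate /dev/sdX
vgcreate disk2tba /dev/sdX

# or restrict the existing storage entry to the node that really has it
pvesm set disk2tba --nodes <node>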
 
Thank you for your answer. I actually tried all of that, but the result was the same.
I edited disk2tba and set it to all nodes. I also selected the first node to limit it to that node only and then tried to add the disk2tba storage on the second node, but it refused because it already exists. There is nothing on the second node yet, because I want to restore all 3 VMs to it. Every time I try to restore a VM on the second node I get the same error.
At first it was limited to the first node (the second node is PVE); then I changed it to all nodes.
 
You can select the storage you want to restore to.

You would need to create the volume group on the node where it doesn't exist yet. Otherwise, restrict the storage to the node it's actually available on, so you don't accidentally select it and run into such issues.

You should probably uncheck Shared, unless the LVM is actually already shared across nodes (but it's not available on one of the nodes, so that seems unlikely). The flag does not automagically make a local storage shared; it just tells Proxmox VE about storages that are already shared.
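
From the shell it would look roughly like this (a sketch; the archive path and target storage ID are placeholders for whatever exists on that node):

# restore to an explicitly chosen target storage
qmrestore <backup-archive.vma.zst> 108 --storage <storage-on-this-node>

# clear the Shared flag on the LVM storage (this should match the GUI checkbox)
pvesm set disk2tba --shared 0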
 
I have restricted disk2tba to the node where it was originally set up (yourtop). I also unchecked Shared everywhere. Second, I tried to create the volume group on the second node, PVE, but I can't. I am really frustrated with this.
 
If you don't have any disks available, you can't create a new storage on the node. But you don't need to do it. You can just select a storage you already have as the target storage in the restore dialog.
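
To check what is there, the following can be run on that node; it lists the storages the node can currently use (the dropdown in the restore dialog should show the same ones):

pvesm status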
 
I would like to restore onto my SAS3TB disk. In the GUI I don't have any option to restore onto PVE, so I am trying it from the shell, but I still get the same error.
qmrestore --storage SAS3TB /mnt/pve/mybackup/@Recently-Snapshot/GMT+01_2024-03-22_0100/dump/vzdump-qemu-100-2024_03_20-14_24_22.vma.zst 117
[screenshots]
 
The storage ID needs to be one that's defined in the storage configuration /etc/pve/storage.cfg and available on the node (e.g. disk2tb). From the screenshot, it doesn't seem like you have defined a storage with the ID SAS3TB.
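
So, as a sketch (assuming disk2tb is available on that node), your command with an existing storage ID would look something like:

qmrestore --storage disk2tb /mnt/pve/mybackup/@Recently-Snapshot/GMT+01_2024-03-22_0100/dump/vzdump-qemu-100-2024_03_20-14_24_22.vma.zst 117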
 
Thank you. I really don't understand what else I need to define and where. When I try to add anything under the PVE node under Disks, I get the same error: no unused disks.
 
What do you see when you go to the Disks tab in the UI? If the disk already contains a filesystem or similar, it will not be allowed to be selected for creating a volume group. If that's the case and you're really sure you don't need the data that's currently on the disk, you can wipe it. Then you should be able to select it for creating a volume group.
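
If (and only if) the disk really holds nothing you need, a wipe from the shell could look like this rough sketch (/dev/sdX is a placeholder; this is destructive, so double-check with lsblk and zpool status first which disk it is):

# DESTRUCTIVE: removes all filesystem/RAID/partition-table signatures from the disk
wipefs --all /dev/sdX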
 
This is what I have under Disks. I really don't know why there are 4 disks each of 10TB, 4TB and 3TB; physically there are only 2 of each.
I could delete those 3TB, 4TB and 10TB disks, because there is nothing on them. When I configured this Proxmox server, I read that it is better to use ZFS, because then replication and some other options are possible. So what do you suggest I do?
[screenshot of the Disks view]
 
Note that some of the entries shown are partitions, the disks are just the top-level entries in the device tree.
Please do not delete any disks! Please check the output of zpool status -v first; that will show you which disks belong to which pools. Please also share/check /etc/pve/storage.cfg to see which storage ID is associated with which pool.
 
"Note that some of the entries shown are partitions, the disks are just the top-level entries in the device tree."
Yes, you are right, I know that; I just missed that one and counted wrong.
zpool status -v
  pool: SAS3TB
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                        STATE     READ WRITE CKSUM
        SAS3TB                                      ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d80ef050895d54e  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d818f9b08a3e356  ONLINE       0     0     0

errors: No known data errors

  pool: SAS4TB
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                        STATE     READ WRITE CKSUM
        SAS4TB                                      ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d81b69c089c8088  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d81b92f08b645f2  ONLINE       0     0     0

errors: No known data errors

  pool: arhiv10tb
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                        STATE     READ WRITE CKSUM
        arhiv10tb                                   ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d80f42d0c20512c  ONLINE       0     0     0
            scsi-36b8ca3a0eb96a6002d80f61608b7b67b  ONLINE       0     0     0

errors: No known data errors

/etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

dir: disk2tb
        path /disk2tb
        content rootdir,images
        prune-backups keep-all=1
        shared 0

dir: disk4tb
        path /disk4tb
        content images
        prune-backups keep-all=1
        shared 0

cifs: mybackup
        path /mnt/pve/mybackup
        server 192.168.0.234
        share backup
        content images,backup
        domain NAS165CEE
        nodes pve
        prune-backups keep-all=1
        username backup

lvm: disk2tba
        vgname disk2tba
        content images,rootdir
        nodes yourtop
        shared 0

lvm: disk4tba
        vgname disk4tba
        content rootdir,images
        nodes yourtop
        shared 0

lvm: disk3tba
        vgname pve
        content images,rootdir
        shared 0

lvm: disk3tb
        vgname pve
        content images,rootdir
        shared 0
 
Okay, so those ZFS pools exist on the host, but are not yet registered in the storage configuration. You should be able to add them by going to Datacenter > Storage > Add > ZFS.
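
From the shell it would be roughly like this (a sketch; the storage IDs are just chosen to match the pool names, and the archive/VMID are placeholders):

# register the existing pools as Proxmox VE storages, restricted to the node that has them
pvesm add zfspool SAS3TB --pool SAS3TB --content images,rootdir --nodes pve
pvesm add zfspool SAS4TB --pool SAS4TB --content images,rootdir --nodes pve
pvesm add zfspool arhiv10tb --pool arhiv10tb --content images,rootdir --nodes pve

# afterwards they can be used as restore targets, e.g.
qmrestore <backup-archive.vma.zst> <vmid> --storage SAS3TB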
 
Thank you, this is it. I didn't know I needed to add the disks twice; now I have added the 3TB, 4TB and 10TB pools, and the VM is restoring onto the 3TB disk.
At first, when I tried to add ZFS, there were no pools to add, but the node was wrong; when I select the PVE node I see all 3 pools to add.
 
