How to add existing VMs to a new installation

mov45

New Member
Mar 29, 2022
Hi, I have a new installation of Proxmox and an old disk with my VMs. I don't see an option to transfer these machines to the new disk (M.2). The disk is mounted and it is LVM2, but I don't know how to copy the machines to the new disk and also add them to my node.
 

Attachments

  • prox.png (147.7 KB)
Does this old disk include the PVE install or just the VMs? If the latter, you need to manually reconstruct all guest configurations, which is not fun, yet it can be done. Containers are easier than VMs, and I don't think there is documentation for this; normally you would back up your guests and restore them in the new environment.

If you're lucky and the previous install is still present, just plug the disk into another computer, boot up PVE and back up your guests.
 
This is the old disk where the VMs are stored. The Proxmox installation is new. I did backups before I reinstalled and everything was 100% OK; now I try to restore from backup and everything is corrupted, I don't know why... so I need to recover the VMs from the HDD but don't know how...
 
"Maybe one can salvage the configuration": can you expand on that subject and explain how to do it...
 
Hi, so it looks like this for all images (more than 10); when I created them, everything was 100% success:
Code:
restore vma archive: zstd -q -d -c /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1313329.fifo - /var/tmp/vzdumptmp1313329
CFG: size: 411 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-scsi0
CTIME: Sat Sep 2 18:06:41 2023
Logical volume "vm-101-disk-0" created.
new volume ID is 'local-lvm:vm-101-disk-0'
map 'drive-scsi0' to '/dev/pve/vm-101-disk-0' (write zeros = 0)
progress 1% (read 343605248 bytes, duration 1 sec)
progress 2% (read 687210496 bytes, duration 4 sec)
[progress lines 3% to 55% trimmed]
progress 56% (read 19241500672 bytes, duration 102 sec)
_02-18_06_40.vma.zst : Decoding error (36) : Data corruption detected
vma: restore failed - short vma extent (1275348 < 3756544)
/bin/bash: line 1: 1313340 Exit 1    zstd -q -d -c /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst
1313341 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp1313329.fifo - /var/tmp/vzdumptmp1313329
Logical volume "vm-101-disk-0" successfully removed.
temporary volume 'local-lvm:vm-101-disk-0' sucessfuly removed
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 101 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp1313329.fifo - /var/tmp/vzdumptmp1313329' failed: exit code 133
 
Hi, so it looks like this for all images (more than 10); when I created them, everything was 100% success:
Okay, thank you. That it has been written with 100% success does not mean it can be restored. Therefore you NEED to do restore tests; without them, this is not a backup. Most people don't know this and think that a green check mark being shown means the data can actually be restored... always check everything. This is totally independent of PVE and applies to EVERY backup software out there. That is why a special "integrity test" is built into Proxmox Backup Server that does this for you.
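One cheap way to catch this early, independent of PVE: keep checksums next to your vzdump archives and re-check them before you rely on them. A minimal sketch only; the demo below runs against a dummy file in a temp dir (the real dump path, e.g. /mnt/pve/QNAP/dump, is up to you), and PBS verify jobs do this properly.

```shell
# Minimal do-it-yourself integrity check for plain vzdump archives.
# A dummy file stands in for a real .vma.zst so this can be dry-run
# anywhere; on a real node point the commands at your dump directory.
set -u
demo_dir=$(mktemp -d)
printf 'fake backup payload' > "$demo_dir/vzdump-qemu-124.vma.zst"

# at backup time: record a checksum next to each archive
( cd "$demo_dir" && sha256sum ./*.vma.zst > SHA256SUMS )

# later, BEFORE you need the backup: re-verify it
if ( cd "$demo_dir" && sha256sum -c --quiet SHA256SUMS ); then
  echo "backup checksum OK"
else
  echo "backup checksum FAILED"
fi

# zstd archives also carry an internal checksum you can test without
# extracting:  zstd -t /path/to/dump/*.vma.zst
```

This only proves the file has not rotted since it was written; an actual restore test is still the only proof the guest comes back.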

Yet back to the problem at hand: you can extract the VM configuration; it has already been read, as shown in the output:

Code:
zstd -dc /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst | vma config -

With that, you have your configuration that you can use to build your VM.
 
this only shows the config of the machine:
zstd -dc /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst | vma config -

Code:
root@mila:~# zstd -dc /mnt/pve/QNAP/dump/vzdump-qemu-124-2023_09_02-18_06_40.vma.zst | vma config -
boot: order=scsi0;net0
cores: 1
memory: 2048
meta: creation-qemu=7.1.0,ctime=1686246385
name: DNS-14-full-work
net0: virtio=B6:04:37:0A:84:C1,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: drive2:vm-124-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=cfa56a33-d418-4521-9ef2-1e0af8ef7c5a
sockets: 1
vmgenid: 8753dffd-b51f-4645-acdd-e5ae2db32e98
#qmdump#map:scsi0:drive-scsi0:drive2:raw:
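Note the trailing #qmdump#map line in that output: it records, for each drive in the archive, the bus name, the device name inside the archive, the storage it came from, and the image format. A quick awk split (field layout inferred from this very output) makes it readable:

```shell
# Split the map line from `vma config` into its fields.
# Inferred layout: #qmdump#map:<bus>:<archive-dev>:<storage>:<format>:
line='#qmdump#map:scsi0:drive-scsi0:drive2:raw:'
echo "$line" | awk -F: '{print "bus="$2, "archive-dev="$3, "storage="$4, "format="$5}'
# -> bus=scsi0 archive-dev=drive-scsi0 storage=drive2 format=raw
```

So this VM's disk lived on a storage called drive2 as a raw image, which matches the drive2:vm-124-disk-0 reference in the config above.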
 
I also have an old disk with machines but do not know how to copy them to the new Proxmox.
 

Attachments

  • Bez tytułu.png (253.6 KB)
Ok, lucky for me I still have the old Proxmox system, so I did the backups again without compression, connected the new disk, restored the backups and it works. I am still not 100% happy, because I don't know how to move machines from the old system to a new one if the system disk collapses.
 
You can save the lxc/qemu configs under /etc/pve/; they are easy to read and understand.
 
https://pve.proxmox.com/wiki/Manual:_pct.conf

https://pve.proxmox.com/wiki/Manual:_qm.conf

Code:
agent: 1
balloon: 512
bios: ovmf
bootdisk: scsi0
cores: 1
cpu: host
efidisk0: rpool-store:100/vm-100-disk-1.qcow2,size=128K
ide2: none,media=cdrom
machine: q35
memory: 2048
name: ubuntu
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
scsi0: rpool-store:100/vm-100-disk-0.qcow2,size=51G
scsihw: virtio-scsi-pci
smbios1: uuid=11f826dd-583e-42e0-b55c-e74783a6ddcc
sockets: 1
vga: qxl
vmgenid: dec00000-9b8a-481f-a2be-8008f111e0fh
vmstatestorage: rpool-store
 
OK, my problem is: I connected the LVM disk with the machines, but I do not know where in the system I can find the machine images (a direct path).
LVM is new to me; that is the basic problem.
 
So storage.cfg is just a file; how do I get to this disk through it?
 

Attachments

  • Bez tytułu.png (51.6 KB)
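For reference, storage definitions live in /etc/pve/storage.cfg. To make the VM images on the old disk's volume group visible to the new install, you would add (or let the GUI's Datacenter -> Storage -> Add -> LVM create) an entry along these lines; the storage id and VG name below are placeholders, check `vgs` for the volume group actually on your old disk:

```
lvm: drive2
        vgname old-pve-vg
        content images
```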
Thank you. So you can just create the VM configuration (make sure VM 124 is NOT already present):

Code:
cat > /etc/pve/qemu-server/124.conf <<EOF
boot: order=scsi0;net0
cores: 1
memory: 2048
meta: creation-qemu=7.1.0,ctime=1686246385
name: DNS-14-full-work
net0: virtio=B6:04:37:0A:84:C1,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: drive2:vm-124-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=cfa56a33-d418-4521-9ef2-1e0af8ef7c5a
sockets: 1
vmgenid: 8753dffd-b51f-4645-acdd-e5ae2db32e98
EOF

and back it up as a test (don't start it). If everything works AFTER the backup, restore the backup to a new VM. If that works too, just move the disk to its final destination and everything should be fine. Repeat for each VM.
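The same steps as a small script, sketched so it can be dry-run: it writes the recovered config into a stand-in directory (on the real node the target would be /etc/pve/qemu-server/124.conf, and VM 124 must not already exist); the backup/restore cycle is left as comments since it only makes sense on the node itself.

```shell
set -eu
# Stand-in for /etc/pve/qemu-server so this sketch can run anywhere.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"

# Recreate the config recovered with `vma config` (shortened here).
cat > "$CONF_DIR/124.conf" <<'EOF'
boot: order=scsi0;net0
cores: 1
memory: 2048
name: DNS-14-full-work
ostype: l26
scsi0: drive2:vm-124-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
EOF
echo "wrote $CONF_DIR/124.conf"

# On the real node, the test cycle would then be:
#   vzdump 124 --storage <backup-storage>   # backup as a test, do NOT start the VM
#   qmrestore <dump-file> <new-vmid>        # restore to a fresh vmid and boot that
```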
 
Ok, some basics on LVM:

LVM has 3 tiers:

1. Physical Volumes (disks or partitions); display with pvs or pvdisplay

2. Volume Groups; a VG can span one or more partitions/disks; display with vgs or vgdisplay

3. Logical Volumes; display with lvs or lvdisplay

On your system the logical volumes are device files under /dev/mapper/*.

They are block devices and can be used like partitions or disks.

In the case of your Proxmox system, a logical volume is used as the disk image for your VM.
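One detail that trips people up when hunting for those device files: the /dev/mapper name is "<vg>-<lv>" with every hyphen inside either name doubled, so an LV like vm-101-disk-0 in VG pve shows up under a slightly odd path. A tiny sketch of the mangling (pure bash; the names are just examples):

```shell
# Build the /dev/mapper path for a given VG and LV: hyphens inside
# either name are escaped by doubling them in the mapper name.
mapper_path() {
  local vg=${1//-/--} lv=${2//-/--}
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_path pve vm-101-disk-0
# -> /dev/mapper/pve-vm--101--disk--0
```

The friendlier /dev/<vg>/<lv> symlink (here /dev/pve/vm-101-disk-0) points at the same block device, which is the path you saw in the restore log.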
 
