[SOLVED] Restoring from old Proxmox server

PXMX1001

New Member
Nov 23, 2024
Hi all,

I built a new Proxmox server on completely new hardware. I moved the old 1 TB SSD into the new server alongside the new 2 TB SSD, which has a fresh Proxmox VE install on it. During the installation, Proxmox asked to rename the old 1 TB SSD's volume group (and with it the old VM volumes) to pve-OLD-####, and I let it do that. Now I need to transfer all the old virtual machines onto the new server. Please help me do this. Here is the output of 'lsblk':


Code:
nvme0n1                                     259:0    0   1.9T  0 disk
├─nvme0n1p1                                 259:1    0  1007K  0 part
├─nvme0n1p2                                 259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                                 259:3    0   1.9T  0 part
  ├─pve-swap                                252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                          252:2    0  15.9G  0 lvm 
  │ └─pve-data                              252:8    0   1.7T  0 lvm 
  └─pve-data_tdata                          252:3    0   1.7T  0 lvm 
    └─pve-data                              252:8    0   1.7T  0 lvm 
nvme1n1                                     259:4    0 931.5G  0 disk
├─nvme1n1p1                                 259:5    0  1007K  0 part
├─nvme1n1p2                                 259:6    0     1G  0 part
└─nvme1n1p3                                 259:7    0 930.5G  0 part
  ├─pve--OLD--60EDF728-swap                 252:4    0     8G  0 lvm 
  ├─pve--OLD--60EDF728-root                 252:5    0    96G  0 lvm 
  ├─pve--OLD--60EDF728-data_tmeta           252:6    0   8.1G  0 lvm 
  │ └─pve--OLD--60EDF728-data-tpool         252:9    0 794.3G  0 lvm 
  │   ├─pve--OLD--60EDF728-data             252:10   0 794.3G  1 lvm 
  │   ├─pve--OLD--60EDF728-vm--101--disk--0 252:11   0     4M  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--101--disk--1 252:12   0   300G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--201--disk--3 252:13   0 390.1G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--201--disk--0 252:14   0     4M  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--100--disk--0 252:15   0     4M  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--100--disk--1 252:16   0    32G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--105--disk--0 252:17   0    20G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--106--disk--0 252:18   0     4M  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--106--disk--1 252:19   0    20G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--102--disk--0 252:20   0     4M  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--102--disk--1 252:21   0    50G  0 lvm 
  │   ├─pve--OLD--60EDF728-vm--104--disk--0 252:22   0     4M  0 lvm 
  │   └─pve--OLD--60EDF728-vm--104--disk--1 252:23   0    32G  0 lvm 
  └─pve--OLD--60EDF728-data_tdata           252:7    0 794.3G  0 lvm 
    └─pve--OLD--60EDF728-data-tpool         252:9    0 794.3G  0 lvm 
      ├─pve--OLD--60EDF728-data             252:10   0 794.3G  1 lvm 
      ├─pve--OLD--60EDF728-vm--101--disk--0 252:11   0     4M  0 lvm 
      ├─pve--OLD--60EDF728-vm--101--disk--1 252:12   0   300G  0 lvm 
      ├─pve--OLD--60EDF728-vm--201--disk--3 252:13   0 390.1G  0 lvm 
      ├─pve--OLD--60EDF728-vm--201--disk--0 252:14   0     4M  0 lvm 
      ├─pve--OLD--60EDF728-vm--100--disk--0 252:15   0     4M  0 lvm 
      ├─pve--OLD--60EDF728-vm--100--disk--1 252:16   0    32G  0 lvm 
      ├─pve--OLD--60EDF728-vm--105--disk--0 252:17   0    20G  0 lvm 
      ├─pve--OLD--60EDF728-vm--106--disk--0 252:18   0     4M  0 lvm 
      ├─pve--OLD--60EDF728-vm--106--disk--1 252:19   0    20G  0 lvm 
      ├─pve--OLD--60EDF728-vm--102--disk--0 252:20   0     4M  0 lvm 
      ├─pve--OLD--60EDF728-vm--102--disk--1 252:21   0    50G  0 lvm 
      ├─pve--OLD--60EDF728-vm--104--disk--0 252:22   0     4M  0 lvm 
      └─pve--OLD--60EDF728-vm--104--disk--1 252:23   0    32G  0 lvm
 
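The renamed volume group from the old disk shows up above as pve-OLD-60EDF728 (lsblk doubles the hyphens in device-mapper names). As a quick sanity check before migrating anything, a minimal sketch using the standard LVM tools that ship with Proxmox VE:

Code:
# List all volume groups; the old disk should appear as pve-OLD-60EDF728
# next to the new "pve" group on the 2 TB SSD.
vgs

# List the logical volumes inside the old group to confirm the
# vm-<ID>-disk-<N> volumes from the listing above are all present.
lvs pve-OLD-60EDF728
 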
This was the working command:

Code:
qm remote-migrate 101 101 'apitoken=PVEAPIToken=root@pam!root=<APITOKEN>,host=192.168.1.197,fingerprint=<FINGERPRINT>' --target-bridge vmbr0 --target-storage local-lvm --online 1
 
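For reference, a minimal sketch of running the same migration for every VM ID visible in the lsblk output (100, 101, 102, 104, 105, 106 and 201). The token, host, bridge and storage placeholders are the same as above and are assumptions to adjust; --online 1 assumes each guest is running at migration time, so drop it for stopped VMs:

Code:
# Hypothetical loop over the VM IDs from the old pool; adjust the IDs, token,
# host, bridge and storage to your own setup before running.
for VMID in 100 101 102 104 105 106 201; do
    qm remote-migrate "$VMID" "$VMID" \
        'apitoken=PVEAPIToken=root@pam!root=<APITOKEN>,host=192.168.1.197,fingerprint=<FINGERPRINT>' \
        --target-bridge vmbr0 --target-storage local-lvm --online 1
done
 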
Also, here is how to obtain the API token and fingerprint:

1) API token
- Create the API token on the target server under Datacenter --> Permissions --> API Tokens.
- Grant it administrator permissions under Datacenter --> Permissions --> Add.
- Put the generated secret into the command above as <APITOKEN> (a CLI alternative is sketched below this list).

2) Fingerprint
- Obtain the fingerprint by first running the command without the fingerprint option (another way to read it straight from the target's certificate is sketched below the list). Example:

Code:
qm remote-migrate 101 101 'apitoken=PVEAPIToken=root@pam!root=<APITOKEN>,host=192.168.1.197' --target-bridge vmbr0 --target-storage local-lvm --online 1

- The error output will include the target's certificate fingerprint; copy and paste it into the full command above.
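For completeness, the API token can also be created from the shell on the target node. A minimal sketch, assuming a token named "root" under root@pam with privilege separation disabled so it inherits root's own permissions (the GUI steps in 1) achieve the same thing):

Code:
# Create an API token named "root" for root@pam; --privsep 0 lets the token
# use the user's own permissions, so no separate ACL entry is needed.
# The secret is printed once - that is the <APITOKEN> value for the command above.
pveum user token add root@pam root --privsep 0

# List existing tokens for root@pam (the secret itself is not shown again).
pveum user token list root@pam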
 
Did you get your new hardware working? If it's just a single node, the easiest way is to back up all VMs, install the OS on your new SSD, and then restore each VM from your backups (a sketch of that workflow is below).
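A minimal sketch of the backup-and-restore route; the storage name "backup" and the archive path are assumptions, use whatever backup-capable storage is configured on your nodes:

Code:
# On the old server: back up VM 101 to a hypothetical storage named "backup".
vzdump 101 --storage backup --mode snapshot --compress zstd

# On the new server: restore that archive as VM 101 onto local-lvm.
qmrestore /mnt/pve/backup/dump/vzdump-qemu-101-<timestamp>.vma.zst 101 --storage local-lvm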
 