Move proxmox to new disk

robb01

Member
Oct 1, 2022
Thanks for Proxmox.
Bit of a novice question here.
I recently installed PVE onto a 120 GB SSD on a NUC7 and set up 3 containers.
Now I want to move that setup onto a 1.8 TB SSD. I have tried to replicate the partition structure on the new drive, with a view to using dd to write a complete copy of the Proxmox instance onto the bigger drive.
Proxmox reports lsblk as follows:
```
lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 111.8G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 111.3G  0 part
  ├─pve-swap                 253:0    0     1G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  27.8G  0 lvm  /
  ├─pve-data_tmeta           253:2    0     1G  0 lvm
  │ └─pve-data-tpool         253:4    0  66.7G  0 lvm
  │   ├─pve-data             253:5    0  66.7G  1 lvm
  │   ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm
  │   ├─pve-vm--100--disk--1 253:7    0    32G  0 lvm
  │   ├─pve-vm--101--disk--0 253:8    0     8G  0 lvm
  │   └─pve-vm--103--disk--0 253:9    0     8G  0 lvm
  └─pve-data_tdata           253:3    0  66.7G  0 lvm
    └─pve-data-tpool         253:4    0  66.7G  0 lvm
      ├─pve-data             253:5    0  66.7G  1 lvm
      ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm
      ├─pve-vm--100--disk--1 253:7    0    32G  0 lvm
      ├─pve-vm--101--disk--0 253:8    0     8G  0 lvm
      └─pve-vm--103--disk--0 253:9    0     8G  0 lvm
```
but blkid reports /dev/sda3 as Type lvm2_member. Gparted only offers Type lvm2_pv.
I am not familiar with the intricacies of LVM, so would it be safe/workable to dd /dev/sda3 to the new drive, with the target partition formatted as ext4 or lvm2_pv?
I have existing data on the 1.8 TB drive so did not want to just install a new Proxmox instance and risk losing the data.
TIA
 
Or could I take a backup and restore it to a partition on the other (1.8 TB) drive with some process?
 
What should work is copying the whole disk on block level.
But then you would need a fair amount of CLI work to make use of the bigger size: editing the partition table, expanding the third partition, then expanding the PV, the VG, the LVs and the thin pool.
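Those steps could be sketched roughly as below. This assumes the default Proxmox volume names (pve/root, pve/data) and that the new disk shows up as /dev/sdb; the sizes are purely illustrative, and every command here is destructive, so double-check device names first.

```shell
sgdisk -e /dev/sdb                  # move the backup GPT header to the end of the new, larger disk
parted /dev/sdb resizepart 3 100%   # grow partition 3 to fill the disk
pvresize /dev/sdb3                  # let the LVM PV span the enlarged partition
lvextend -r -L +20G /dev/pve/root   # grow the root LV and resize its filesystem (-r)
lvextend -L +500G /dev/pve/data     # grow the thin pool (no filesystem of its own)
```

The VG grows automatically once the PV is resized, so only the LVs and the thin pool need explicit lvextend calls.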

Might be faster to just back up those LXCs, install a fresh PVE and restore the LXCs. At least if you didn't customise your PVE installation that much.
 
Thanks @Dunuin. My plan was to leave the existing data in its separate partition and replicate the partition sizes for the boot and PVE partitions, i.e. 512 MB and 111.3 GB, as separate partitions. So I would have run dd if=/dev/sda2 of=/dev/sdb2 with both drives USB-connected on my Linux machine, and similarly dd if=/dev/sda3 of=/dev/sdb3 for the OS partition.
But is that what you mean by block copy?
 
By "copying the whole disk on block level" I meant more like a dd if=/dev/sda of=/dev/sdb.
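A whole-disk copy like that might look as follows, assuming sda is the old 120 GB SSD and sdb the target (device names are assumptions, and the command overwrites everything on sdb, including any existing data). Neither disk should be mounted, so run it from a live system:

```shell
dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync
```

bs=4M speeds the copy up considerably over the default 512-byte blocks, and status=progress shows how far along it is.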
 
Yes, I figured so. I don't want to do that, as I would lose my data.
I have done the dd if=/dev/sda2 of=/dev/sdb2 for the boot partition and it looks OK, so I might attempt the same for the PVE OS partition. If it boots, it boots, I guess.
The other option would be to follow this https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster and install within a Debian OS and then restore backups of the containers.
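For that backup-and-restore route, the container side could look roughly like this. The CT IDs (100, 101, 103) come from the lsblk output above; the dump directory and storage name are assumptions:

```shell
# On the old install: back up each container to a file.
vzdump 100 101 103 --mode stop --compress zstd --dumpdir /mnt/backup

# On the new install: restore each archive to a storage of your choice.
pct restore 100 /mnt/backup/vzdump-lxc-100-*.tar.zst --storage local-lvm
```

--mode stop gives a consistent backup at the cost of downtime; --mode snapshot would keep the containers running if the storage supports it.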
 
Why don't you just back up the data that's already on the bigger SSD and move it back later? You should have a backup of all data anyway, especially when modifying partitions, because one typo with dd or the partitioning tool of your choice and all data is gone.
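Copying the existing data off the 1.8 TB drive first could be as simple as the following; the mount points are assumptions:

```shell
# Preserve permissions, hardlinks, ACLs and xattrs while copying to another disk.
rsync -aHAX --info=progress2 /mnt/bigssd/ /mnt/external-backup/
```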
 
After doing the dd write it was apparent that the data and the structure had copied across OK. The problem, as anticipated, was satisfying GRUB. I tried a number of things but either got "Failed to get canonical path of /cow" or it couldn't find /boot/efi.
At this point I changed course and followed the above installation steps from a Debian Buster starting point including restoring the backups from each container.
This has worked OK, except I had to adjust the networking configuration for device eno1 instead of eth0. I also found my /etc/hostname file got overwritten at one stage, which messed things up. If things are stable after a few days I will stick with what I have; otherwise I will fall back to @Dunuin's recommendation, which would be more robust.
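For reference, the eno1 adjustment lands in /etc/network/interfaces; a typical PVE bridge setup looks something like the sketch below, with the addresses being examples only:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

The restored config referenced eth0 in bridge-ports, which no longer matched the NIC name on the new install.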
Thanks for your support.
 
> The other option would be to follow this https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster and install within a Debian OS

> At this point I changed course and followed the above installation steps from a Debian Buster starting point

FYI: Debian Buster is PVE 6 and both are EOL.
The recent ones are Debian Bullseye and PVE 7:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

[0] https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
 