How to clone a disk with PVE installed

Hi *.*,

For performance reasons I would like to replace the 2-TB SATA-disk with a 2-TB NVME-SSD.

The idea was cloning the SATA-disk to the NVME-SSD with CloneZilla.

This didn't work because the NVME-SSD is about 100 GByte smaller than the SATA-disk.
The lsblk output below shows the difference between the two: sda versus nvme0n1.
It also shows there is plenty of room for some shrinking.

But I'm not sure if (and to what extent) this will affect the PVE install.

So the question is: can I shrink the 1.7-TByte partition with gparted?
Or is there a better approach to cloning the disk, as this seems to be an LVM volume?

This assumes that once shrunk by - say - 150 GBytes, CloneZilla is able to do its magic.

Thank you for any feedback and/or suggestions.


With warm regards - Will

====

Output of lsblk:
Code:
sda                            8:0    0  1.9T  0 disk

├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part
└─sda3                         8:3    0  1.9T  0 part
  ├─pve-swap                 253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool         253:4    0  1.7T  0 lvm
  │   ├─pve-data             253:5    0  1.7T  1 lvm
  │   ├─pve-vm--101--disk--0 253:6    0  150G  0 lvm
  │   ├─pve-vm--103--disk--0 253:7    0  120G  0 lvm
  │   ├─pve-vm--103--disk--1 253:8    0    4M  0 lvm
  │   ├─pve-vm--103--disk--2 253:9    0  120G  0 lvm
  │   ├─pve-vm--100--disk--0 253:10   0   16G  0 lvm
  │   ├─pve-vm--100--disk--1 253:11   0   80G  0 lvm
  │   ├─pve-vm--102--disk--0 253:12   0    4M  0 lvm
  │   ├─pve-vm--102--disk--1 253:13   0  100G  0 lvm
  │   ├─pve-vm--104--disk--0 253:14   0  150G  0 lvm
  │   ├─pve-vm--106--disk--0 253:15   0   50G  0 lvm
  │   ├─pve-vm--107--disk--0 253:16   0   50G  0 lvm
  │   └─pve-vm--105--disk--0 253:17   0   20G  0 lvm
  └─pve-data_tdata           253:3    0  1.7T  0 lvm
    └─pve-data-tpool         253:4    0  1.7T  0 lvm
      ├─pve-data             253:5    0  1.7T  1 lvm
      ├─pve-vm--101--disk--0 253:6    0  150G  0 lvm
      ├─pve-vm--103--disk--0 253:7    0  120G  0 lvm
      ├─pve-vm--103--disk--1 253:8    0    4M  0 lvm
      ├─pve-vm--103--disk--2 253:9    0  120G  0 lvm
      ├─pve-vm--100--disk--0 253:10   0   16G  0 lvm
      ├─pve-vm--100--disk--1 253:11   0   80G  0 lvm
      ├─pve-vm--102--disk--0 253:12   0    4M  0 lvm
      ├─pve-vm--102--disk--1 253:13   0  100G  0 lvm
      ├─pve-vm--104--disk--0 253:14   0  150G  0 lvm
      ├─pve-vm--106--disk--0 253:15   0   50G  0 lvm
      ├─pve-vm--107--disk--0 253:16   0   50G  0 lvm
      └─pve-vm--105--disk--0 253:17   0   20G  0 lvm

nvme0n1                      259:0    0  1.8T  0 disk

=====
 
So the question is: can I shrink the 1.7-TByte partition with gparted?
That is one way to go. You should also be able to shrink your LVM physical volume in the live system, if there is nothing in the way:

Code:
root@proxmox ~ > vgs pve
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   1   0 wz--n- <32,00g 16,00g
 
root@proxmox ~ > pvresize --setphysicalvolumesize 24G /dev/sdb
/dev/sdb: Requested size 24,00 GiB is less than real size 32,00 GiB. Proceed?  [y/n]: y
  WARNING: /dev/sdb: Pretending size is 50331648 not 67108864 sectors.
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
 
root@proxmox ~ > vgs pve
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   1   0 wz--n- <24,00g 8,00g

If something is in the way, you'll get an error message like this:

Code:
root@proxmox ~ > pvresize --setphysicalvolumesize 8G /dev/sdb
/dev/sdb: Requested size 8,00 GiB is less than real size 32,00 GiB. Proceed?  [y/n]: y
  WARNING: /dev/sdb: Pretending size is 16777216 not 67108864 sectors.
  /dev/sdb: cannot resize to 2047 extents as 4095 are allocated.
  0 physical volume(s) resized or updated / 1 physical volume(s) not resized

root@proxmox ~ > vgs pve
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   1   0 wz--n- <24,00g 8,00g

So the idea is to resize your physical volume to, for example, total space minus 110 GB (more than needed), then resize your partition to the size of the new disk, and then resize the physical volume back to the size of the partition.
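Applied to your setup, a rough sketch could look like this (the size in step 1 is only an example, and the shrink will only succeed if no allocated extents lie beyond the new size):

Code:
# 1) Shrink the PV well below the future partition size (example: roughly total minus 110G)
pvresize --setphysicalvolumesize 1790G /dev/sda3

# 2) Shrink the sda3 partition itself to (at most) the size of the new NVMe disk,
#    e.g. with gparted from a live medium

# 3) Grow the PV back so that it exactly fills the now smaller partition
pvresize /dev/sda3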
 

Thank you for the quick response.

Just to make sure we are on the same page here:
The examples are based on a physical disk - not one of the logical volumes on the disk.
Would that make any difference to the approach?

FYI: the pv-volume is sda3 and 1.86-TB (according to pvs).
According to vgs, the pve-volume (partition?) has the same size => 1.86-TB.

The underlying logical volume called data is 1.71-TB.
It looks like this is the one that could be used for downsizing? Or?


With warm regards - Will
 
Just to make sure we are on the same page here:
The examples are based on a physical disk - not one of the logical volumes on the disk.
It's LVM, so a physical volume in a volume group, but thick-provisioned, with some space left.
Without space left, you cannot resize.

Please post the output from pvs.
 
Below is the output of pvs:
Code:
  PV         VG  Fmt  Attr PSize PFree
  /dev/sda3  pve lvm2 a--  1.86t <16.38g

What does this mean?

If I take a look with pvdisplay, it shows:
Code:
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               1.86 TiB / not usable <1.32 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              488250
  Free PE               4192
  Allocated PE          484058
  PV UUID               tl3TW2-wqwo-j9Gw-9G6g-sQUD-hfP3-raIMQD

Based on this I would say that almost everything is allocated due to thick provisioning?
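Doing the rough math on those PE numbers myself (PE size is 4 MiB):

Code:
Total PE      488250 x 4 MiB = ~1907 GiB = ~1.86 TiB   (matches PSize in pvs)
Free PE         4192 x 4 MiB = ~16.4 GiB               (matches PFree in pvs)
Allocated PE  484058 x 4 MiB = ~1891 GiB = ~1.85 TiB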
 
The free space should be in your thin-provisioned data volume. You have to resize that first before resizing the rest. Please also post your logical volumes.

I assume you mean the output of lvs? Like below?
Code:
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--   1.71t             13.11  0.77
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-100-disk-0 pve Vwi-a-tz--  16.00g data        19.94
  vm-100-disk-1 pve Vwi-a-tz--  80.00g data        35.51
  vm-101-disk-0 pve Vwi-aotz-- 150.00g data        40.17
  vm-102-disk-0 pve Vwi-aotz--   4.00m data        0.00
  vm-102-disk-1 pve Vwi-aotz-- 100.00g data        43.10
  vm-103-disk-0 pve Vwi-aotz-- 120.00g data        8.15
  vm-103-disk-1 pve Vwi-aotz--   4.00m data        1.56
  vm-103-disk-2 pve Vwi-aotz-- 120.00g data        50.30
  vm-104-disk-0 pve Vwi-aotz-- 150.00g data        9.73
  vm-105-disk-0 pve Vwi-aotz--  20.00g data        8.90
  vm-106-disk-0 pve Vwi-a-tz--  50.00g data        6.11
  vm-107-disk-0 pve Vwi-aotz--  50.00g data        11.12
 
Another idea that crossed my mind: I have a few small SATA-SSD's lying around - 128-GB and 256-GB.

Would it make sense to use one of these as boot-disk for Proxmox by cloning the boot partitions to one of them?
And clone the 1.7-TByte data-partition to the NVME-drive? After which I could increase its size to use the remaining space?

Would/could something like that work? How would I do that?
Assuming that all the VM's and settings are preserved.
 
Would/could something like that work?
In Linux, there are only the limitations of the person in front of it :p

You also have to consider that if you can afford a longer downtime, you can just back up your VMs, reinstall on the new disks and restore from the backup. This is the easiest migration path possible; everything else will take more time, but could be done (to some extent) asynchronously.
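As a minimal sketch of that path (VMID, storage names and the dump file name below are only placeholders, adapt them to your setup):

Code:
# On the old installation: stop-mode backup of a VM to some backup storage
vzdump 100 --storage backup --mode stop --compress zstd

# On the fresh installation on the new disks: restore the dump onto the new VM storage
qmrestore /mnt/backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage nvme-thin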
 
While I agree with the re-install part: how about the config of the host itself?
Knowing that the boot-part should be on a dedicated SATA-SSD and the VM's on the nvme-disk?

I'm asking this because the backup only covers the VMs - not the boot part of Proxmox itself.
Unless there is a kind of boot/config file that I could copy and restore after the new install.

Please note that I have all 3 disks connected:
- the existing slow SATA disk with the OS-boot and the VM's
- the old/new SATA-SSD for booting the OS
- the new NVME-SSD for the VM's
 
While I agree with the re-install part: how about the config of the host itself?
That depends on what you did. In general, using the Proxmox Backup Server will solve all those problems. If you don't want to have another server running, you have to do your external backups yourself. The bare minimum to back up is /etc/pve, but in general /etc is always good to back up externally. Most configuration lives there. If you have multiple disks, you need to have multiple backups and need to know what to restore where and when, which can be hard.
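For example, a bare-bones version of such an external config backup could be as simple as (the target path is just a placeholder):

Code:
# Copy the host configuration (including /etc/pve) to an external location
tar czf /mnt/external-backup/pve-etc-$(hostname)-$(date +%F).tar.gz /etc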

Therefore, for single servers, I always recommend using ZFS with ONE zpool and doing regular send/receive of your whole pool to an external disk, so that you have a working backup in case anything goes south.
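A rough sketch of that routine, assuming a root pool called rpool and an external pool called backup (pool and snapshot names are assumptions):

Code:
# Recursive snapshot of the whole pool ...
zfs snapshot -r rpool@replica-2021-05-01
# ... and full replication of it to the pool on the external disk
zfs send -R rpool@replica-2021-05-01 | zfs receive -Fdu backup
# later runs can send only the delta between two snapshots: zfs send -R -I <old> <new>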

In both setups, you still have to fix your boot partition (normally UEFI) after recovery, because UEFI was never intended for software-RAID setups like ZFS, or for single non-RAID disk setups.
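On recent PVE versions (6.4 and later) that usually means re-initialising the EFI system partition with proxmox-boot-tool; a sketch, assuming the ESP is the second partition of the boot disk:

Code:
# Re-format and register the ESP on the (new) boot disk, then verify
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool status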

With any setup that includes data (so all computer systems), I also recommend using multiple disks for redundancy. The only parts that fail constantly on an ordinary computer system are the local disks (spinners or SSDs), so the best option is to be prepared with redundancy, or with regular, working backups and a known recovery strategy, emphasis on recovery strategy. Backups alone don't help much if you don't know what to do with them.
 

I tend to disagree - a backup server by itself doesn't solve this. I still wouldn't have an idea of what a practical backup-and-restore routine could look like, regardless of whether it matches my current setup/situation. The manuals of this (Proxmox?) backup "solution" (assuming this is what you had in mind) don't offer something like best practices, just numerous chapters with options and features.

The preferred setup for the boot-sata-SSD is:
sda1 = bios-boot (created by the proxmox installer)
sda2 = EFI-system (created by the proxmox installer)
sda3 = Proxmox swap
sda4 = Proxmox root on the remaining part of the SATA-SSD
nvme = the LVM with the VM-disks/volumes known as pve-data_tmeta and pve-data_tdata.

Alternatively make a swapfile on the root partition.

However, I don't know how to make this happen - other than starting with installing Debian or Ubuntu and installing Proxmox on top of that. This is because the Proxmox installer doesn't support such a customized setup.

Any other suggestion?
 
This is because the Proxmox installer doesn't support such a customized setup.
You don't need swap on a partition, just use a volume for that. PVE-LVM is a good start, just reconfigure your system from there: delete the actual data volume for storing the VMs and create swap. Create another PV/VG on the NVMe and store everything there.
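A rough sketch of that reshuffle (VG, pool and storage names as well as sizes are assumptions, and it presumes the old data pool is already empty or backed up):

Code:
# Drop the thin pool on the boot SSD and use part of the freed space for a swap LV
lvremove pve/data                              # also remove/adjust the matching storage entry in storage.cfg
lvcreate -L 8G -n swap pve
mkswap /dev/pve/swap && swapon /dev/pve/swap   # plus an entry in /etc/fstab

# Put all VM storage on the NVMe disk instead
pvcreate /dev/nvme0n1
vgcreate nvme /dev/nvme0n1
lvcreate -l 95%FREE -T nvme/data               # new thin pool for the VM disks
pvesm add lvmthin nvme-thin --vgname nvme --thinpool data --content images,rootdir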
 
Thank you for the tips.

I took a different path:
- Made backups of the configs and exported the data of the apps within the current VM's
- Installed Debian and Proxmox on a SATA-SSD (see the sketch below)
- Started from scratch on the new nvme by creating the VM's
- Installed the apps, restored the configs and imported the data.
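For anyone following the same route: the Debian + Proxmox part roughly followed the official "Install Proxmox VE on Debian Buster" wiki article, along these lines (repository and key names are for the PVE 6.x generation):

Code:
# Add the Proxmox VE no-subscription repository and its signing key
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# Install the Proxmox VE stack on top of the plain Debian installation
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi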

We needed to do this anyhow as part of a compliance check on disaster recovery,
which has now proven to be successful.
 
