Move Proxmox host to new disk - running out of space

Zac8236

Apr 27, 2018
Hello,

At the moment I have three HDDs on a hardware RAID card in RAID 5. This gives me about 260 GB of storage available to Proxmox.

60 GB of this is used for ISOs etc.; the rest is for VM/LXC storage. I am running out of room.

The Proxmox host has some customisation so I can pass a GPU through and get logs into Grafana in a container.

I'm looking at getting a 2 TB RAID 1 set up to replace the RAID 5.

How can I move the Proxmox host, all VMs and containers to the new virtual hard drive?

If possible, I don't want to have to reinstall Proxmox, as I would have to configure the GPU passthrough and logging again; hence I'd like to copy the whole system.

I hope you understand what I'm after, I can't word it very well!

Thanks,
Zac.
 
I'm looking at getting a 2 TB RAID 1 set up to replace the RAID 5.
Are you going to have both RAID sets up at the same time? If so, it's just a matter of booting a PING or Clonezilla live CD and cloning the volume from one to the other.

Alternatively, if you're keeping the old RAID set, you don't need to move anything; simply add the new volume to your existing LVM.
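For the second option, a minimal sketch, assuming the new RAID volume shows up as /dev/sdb and you are on the default pve volume group (adjust the device name to your setup):

Code:
# pvcreate /dev/sdb
# vgextend pve /dev/sdb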
 
Are you going to have both RAID sets up at the same time? If so, it's just a matter of booting a PING or Clonezilla live CD and cloning the volume from one to the other.

Alternatively, if you're keeping the old RAID set, you don't need to move anything; simply add the new volume to your existing LVM.


Thanks for your reply. I could temporarily have both RAIDs running, yes, but ultimately I only want one running, as I have other disks passed into VMs and don't have enough physical room for them all in the one server.

Will Clonezilla cope with the difference in size between the two RAIDs?

How do I expand the local-pve storage for VMs once I boot from the new RAID set?

Thanks,
Zac
 
Will Clonezilla cope with the difference in size between the two RAIDs?
Generally yes. If you're using ZFS you may be better off using dd instead; e.g. boot from a live CD and run:

Code:
# dd if=/dev/sda of=/dev/sdb bs=4096
where sda is the block device containing your original and sdb is the new, empty device. This ensures all partitions are copied, and a resilver would add all of the new (previously "unused") space.
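If the pool doesn't pick up the extra space on its own, a minimal sketch, assuming a pool named rpool sitting on the new device /dev/sdb (names are examples):

Code:
# zpool set autoexpand=on rpool
# zpool online -e rpool /dev/sdb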

If you're using LVM, reboot into the system once you finish cloning, then run the following:

Code:
 # pvresize /dev/sda
This will add the new (previously "unused") space to your volume group, and it will be available to use for virtual disk space. If you want to extend any existing logical volumes, you'll be able to do so as well.
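For example, to hand all of the freed space to the root filesystem, a minimal sketch assuming the default pve/root logical volume on ext4 (adjust names to your layout):

Code:
# lvextend -l +100%FREE /dev/pve/root
# resize2fs /dev/mapper/pve-root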
 
Generally yes. If you're using ZFS you may be better off using dd instead; e.g. boot from a live CD and run:

Code:
# dd if=/dev/sda of=/dev/sdb bs=4096
where sda is the block device containing your original and sdb is the new, empty device. This ensures all partitions are copied, and a resilver would add all of the new (previously "unused") space.

If you're using LVM, reboot into the system once you finish cloning, then run the following:

Code:
# pvresize /dev/sda
This will add the new (previously "unused") space to your volume group, and it will be available to use for virtual disk space. If you want to extend any existing logical volumes, you'll be able to do so as well.

Thanks again!

Pretty sure I'm not using ZFS as my physical RAID card is managing it all.

I think
Code:
 # pvresize /dev/sda
is the command I need to get it all to run smoothly!

I'll report back when I get the new drives and set them up.

Cheers
 
Just take care when you clone a disk using Clonezilla or dd: if LVM is used (which is the case with Proxmox), you'll end up with a duplicate LVM volume group, which can be a source of issues...
I'd rather disconnect/mask the "old" RAID before booting on the cloned one.

The same advice applies to OSes (not the case here, but still...) that use UUIDs to mount partitions; the UUIDs are duplicated during the clone.
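If you do end up booting with both copies attached at once, you can spot and fix the duplicates on the clone; a rough sketch with example device names:

Code:
# blkid                         # look for duplicate filesystem/LVM UUIDs
# vgimportclone /dev/sdb3       # give the cloned PV/VG fresh UUIDs and a new VG name (example device)
# tune2fs -U random /dev/sdXn   # new UUID for a cloned ext4 partition, if any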
 
Just take care when you clone a disk using Clonezilla or dd: if LVM is used (which is the case with Proxmox), you'll end up with a duplicate LVM volume group, which can be a source of issues...
I'd rather disconnect/mask the "old" RAID before booting on the cloned one.

The same advice applies to OSes (not the case here, but still...) that use UUIDs to mount partitions; the UUIDs are duplicated during the clone.


That makes sense as the volumes are logical.

As soon as I've cloned the RAID I'll unmount the old one before booting. Do you think that it'll be okay then?

Thanks
 
I followed these steps to move my PVE to a larger disk, but I got lost because this was my situation on a 500 GB NVMe disk:
Code:
root@proxmox:~# fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: CT500P1SSD8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 47CB948D-70F3-4083-AE5C-6EC515517B10

Device           Start       End   Sectors  Size Type
/dev/nvme1n1p1      34      2047      2014 1007K BIOS boot
/dev/nvme1n1p2    2048   1050623   1048576  512M EFI System
/dev/nvme1n1p3 1050624 209715200 208664577 99.5G Linux LVM

Not all of the space was allocated, so I had to expand the partition:
Code:
root@proxmox:~# parted /dev/nvme1n1
GNU Parted 3.4
Using /dev/nvme1n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: CT500P1SSD8 (nvme)
Disk /dev/nvme1n1: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  538MB   537MB   fat32              boot, esp
 3      538MB   107GB   107GB                      lvm

(parted) resizepart 3 100%
(parted) quit
Information: You may need to update /etc/fstab.

root@proxmox:~# fdisk -l /dev/nvme1n1 | grep ^/dev
/dev/nvme1n1p1      34      2047      2014  1007K BIOS boot
/dev/nvme1n1p2    2048   1050623   1048576   512M EFI System
/dev/nvme1n1p3 1050624 976773134 975722511 465.3G Linux LVM
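If vgs does not show the extra space after growing the partition, the physical volume still needs resizing too, as mentioned earlier in the thread (here the LVM partition is /dev/nvme1n1p3):

Code:
root@proxmox:~# pvresize /dev/nvme1n1p3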
Then remove the (no longer needed) data logical volume:
Code:
root@proxmox:~# lvremove /dev/pve/data
Do you really want to remove active logical volume pve/data? [y/n]: y
  Logical volume "data" successfully removed
And finally expand the root logical volume into the available space:

Code:
root@proxmox:~# lvresize -l +100%FREE /dev/pve/root
  Size of logical volume pve/root changed from <91.50 GiB (23423 extents) to <457.26 GiB (117058 extents).
  Logical volume pve/root successfully resized.
root@proxmox:~# resize2fs /dev/mapper/pve-root
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/mapper/pve-root is mounted on /; on-line resizing required
old_desc_blocks = 12, new_desc_blocks = 58
The filesystem on /dev/mapper/pve-root is now 119867392 (4k) blocks long.
I hope it will help someone; it took me a while.
 
Just to add to the possibilities, this is my approach:
I installed Proxmox on an SSD, with an NVMe drive in the machine as well.
I want Proxmox to run on the SSD, store ISOs on the SSD, and run VMs from the NVMe (just like the topic starter).

(If you want to test this, please make sure you have backups of your data and/or use an empty system.)
Further, I am no Proxmox expert and found this method by trial and error; if something does not make sense or should be avoided, please reply and pass on your best advice.

After the initial installation, the situation (Datacenter/node/Disks) looks like this (my node is called pve3):

[screenshot: Datacenter/pve3/Disks after initial installation]

I begin by removing the storage for the VM disk images, which can be done through Datacenter/Storage:
[screenshot: Datacenter/Storage list showing local-lvm]
Make sure to select the local-lvm and press Remove.

Open a Shell on your node to remove the Logical Volume data (which contained local-lvm):
Code:
root@pve3:~# lvremove /dev/pve/data
Do you really want to remove active logical volume pve/data? [y/n]: y
  Logical volume "data" successfully removed
root@pve3:~#

To maximize the space available for ISOs, we can now extend the logical volume root:
Code:
root@pve3:~# lvresize -l +100%FREE /dev/pve/root
  Size of logical volume pve/root changed from 58.00 GiB (14848 extents) to 690.14 GiB (176676 extents).
  Logical volume pve/root successfully resized.
root@pve3:~#

After which you have to run the filesystem resize command:
Code:
root@pve3:~# resize2fs /dev/mapper/pve-root
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mapper/pve-root is mounted on /; on-line resizing required
old_desc_blocks = 8, new_desc_blocks = 87
The filesystem on /dev/mapper/pve-root is now 180916224 (4k) blocks long.

root@pve3:~#

I then add the NVMe to the (default) volume group pve:
Code:
root@pve3:~# vgextend pve /dev/nvme0n1
  Physical volume "/dev/nvme0n1" successfully created.
  Volume group "pve" successfully extended
root@pve3:~#
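Optionally, you can verify that the volume group now spans both devices (output will differ per system):

Code:
root@pve3:~# pvs -o pv_name,vg_name,pv_size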

After which you can create a new thin pool data in the volume group pve:
Code:
root@pve3:~# lvcreate -l 100%FREE --thinpool data pve
  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "data" created.
root@pve3:~#

Finally, go back to the GUI and, under Datacenter/Storage, add a new LVM-Thin storage for your VM disk images:
[screenshot: Datacenter/Storage, Add: LVM-Thin dialog]
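If you prefer the shell for this last step, the same storage can likely be added with pvesm as well; a sketch assuming you name the new storage local-nvme:

Code:
root@pve3:~# pvesm add lvmthin local-nvme --vgname pve --thinpool data --content rootdir,images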
 