change Proxmox boot disk from single disk to mirrored zfs / lvm

rakurtz
Hello everyone,

What is the best way to replace the disk on which Proxmox is currently installed?

  1. Currently installed on a single 1 TB SSD, but only using a 150 GB partition
  2. Want to move to two mirrored 400 GB SSDs
  3. Don't care whether the new setup is ZFS or LVM

The current setup:
A couple of weeks ago I set up a cluster of two Proxmox hosts (two physical Dell servers). I store all VMs on a ZFS raid-z2 pool on /dev/sdb - /dev/sdi. The OS is installed on a 1 TB SSD at /dev/sda:

Code:
root@one:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 894.3G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
├─sda3               8:3    0 149.5G  0 part
│ ├─pve-root       253:0    0  37.3G  0 lvm  /
│ ├─pve-data_tmeta 253:1    0     1G  0 lvm 
│ │ └─pve-data     253:3    0  94.3G  0 lvm 
│ └─pve-data_tdata 253:2    0  94.3G  0 lvm 
│   └─pve-data     253:3    0  94.3G  0 lvm 
├─sda4               8:4    0    64G  0 part
└─sda5               8:5    0    64G  0 part
...
...

(sda4 and sda5 are not used; I can easily delete them)

When we got these two physical servers, we were still waiting for the shipment of the 400 GB SSDs on which we planned to install the OS (two of them as a mirror for each host). Installing the OS on this 1 TB SSD /dev/sda was only meant as a temporary solution.


Possible ways to migrate

1. Install Proxmox fresh and clean from scratch

-> Will it easily find the existing ZFS pools on /dev/sdb - /dev/sdi, e.g. with zpool import? (See the sketch below.)
-> Am I correct in assuming that I only have to set up networking, Linux users, and certificates, and everything else will come from joining the host to the cluster with the other node?
-> Is it easier to first back up /etc/pve and then copy it onto the freshly installed host? Will this interfere with the cluster afterwards?
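For option 1, a rough sketch of the post-install steps (pool and node names are placeholders; re-joining a node that was previously a cluster member usually requires removing the stale node entry with pvecm delnode first, so check the cluster documentation before running this):

Code:
# list pools found on /dev/sdb - /dev/sdi after the fresh install
zpool import
# import the data pool by name; add -f if it was not exported cleanly
zpool import <poolname>
# join the reinstalled node to the cluster, pointing at the remaining member
pvecm add <IP-of-other-node>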

2. Use dd or Clonezilla to copy the old /dev/sda to a new disk

-> Problem: the new disks are smaller (a quick sanity check is sketched below)
-> How would I get two of the new disks running as a mirror afterwards?
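Before attempting any clone, it is worth checking that the space actually in use fits on a 400 GB disk; a minimal check, nothing more:

Code:
# partition layout and sizes of the current boot disk
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda
# space actually used on the root file system
df -h /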


Has anyone done something similar yet? Thank you for your advice!
 
Do you have a hardware RAID controller available to form a mirror for the system disks? If not, you should use ZFS to have a mirrored system disk. That means installing from scratch.
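For reference, after installing with ZFS RAID1 on the two 400 GB SSDs, the boot pool typically ends up looking roughly like this (rpool is the installer's default pool name; the device names will differ on your hardware):

Code:
root@one:~# zpool status rpool
  pool: rpool
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0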

What about the other ZFS pool? Are the guests replicated between the two nodes? Can you temporarily move all guests to only one node?

For quorum (majority of votes) in the cluster it would be good if you could temporarily add a third node, so you still have quorum while one node is removed from the cluster and reinstalled. Once you have your 2-node cluster the way you want it, you can remove the third node and add a QDevice for a third vote. The problem is that when cluster membership changes, you want to remove the QDevice configuration from the nodes first, so it does not help you during the re-setup itself.
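If you do go the QDevice route later, the setup is roughly this (assuming an external host outside the cluster is reachable at the given IP; treat it as a sketch and check the documentation for details):

Code:
# on the external QDevice host
apt install corosync-qnetd
# on the cluster nodes
apt install corosync-qdevice
# on one cluster node, register the QDevice
pvecm qdevice setup <QDEVICE-IP>
# remove it again before changing cluster membership
pvecm qdevice remove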
 
Thanks for replying, Aaron.
  1. I am going to set up a ZFS mirror for the Proxmox OS.
  2. I could add more nodes to that cluster: if I add two nodes, I would not need any QDevice, am I right? In this case, there would always be at least three nodes available.
So I guess there is no way around installing from scratch? For example, mirroring the existing /dev/sda (which holds Proxmox) with a new disk, then removing the current /dev/sda and adding a third disk as a mirror?
 
  1. I could add more nodes to that cluster: if I add two nodes, I would not need any QDevice, am I right? In this case, there would always be at least three nodes available.
Sure, if you can permanently have 3 nodes in the cluster that would be better. There are situations where you can't and that's where the QDevice mechanism can be quite handy.

So I guess there is no way around installing from scratch? For example, mirroring the existing /dev/sda (which holds Proxmox) with a new disk, then removing the current /dev/sda and adding a third disk as a mirror?
No. From the `lsblk` output it looks like you installed PVE with either ext4 or XFS as the root file system, which makes the installer set up LVM on the root disk. That is not compatible with ZFS.
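You can confirm the root file system type on the current install with something like this (on an LVM-based install the output will be ext4 or xfs):

Code:
findmnt -n -o FSTYPE /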

Oh, and be aware that the storage `local-lvm` will change to `local-zfs` if you install to ZFS.
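For reference, on a ZFS install the default guest storage ends up as an entry roughly like the following in /etc/pve/storage.cfg (a sketch; the exact options can differ):

Code:
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir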
 