[Help] Changing PVE LVM Boot Drive

Altorvo

New Member
Nov 30, 2024
Hi All,

What are my options/best practices for changing my Proxmox VE boot drive? I've read a few posts on this subject, but I'm still stuck. None have covered all of the questions I have, so I'm hoping a couple of people will grace me with their experience.
I'm particularly interested in whether a Proxmox reinstallation will recognise the existing ZFS pool regardless of SATA port, and whether my current LXC backups will still work when moving from LVM to a ZFS boot pool. (More details below.)

Current Set-up:
  • As shown below, I currently have a single 500GB NVMe drive that holds all of the PVE boot data. I've also mounted local-lvm storage into most of my LXCs for easy access to app config data (which is shared over SMB so I can reach it from any local device - I know it's not the best system, but it's what I had to hand).
  • I also have a raidz1-0 ZFS pool (sda,sdb,sdc).
Code:
root@avprox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0  14.6T  0 disk
├─sda1                         8:1    0  14.6T  0 part
└─sda9                         8:9    0     8M  0 part
sdb                            8:16   0  14.6T  0 disk
├─sdb1                         8:17   0  14.6T  0 part
└─sdb9                         8:25   0     8M  0 part
sdc                            8:32   0  14.6T  0 disk
├─sdc1                         8:33   0  14.6T  0 part
└─sdc9                         8:41   0     8M  0 part
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 464.8G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.4G  0 lvm 
  │ └─pve-data-tpool         252:4    0 337.9G  0 lvm 
  │   ├─pve-data             252:5    0 337.9G  1 lvm 
  │   ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm 
  │   ├─pve-vm--100--disk--1 252:7    0   250G  0 lvm 
  │   ├─pve-vm--101--disk--0 252:8    0     9G  0 lvm 
  │   ├─pve-vm--103--disk--0 252:9    0     3G  0 lvm 
  │   ├─pve-vm--201--disk--0 252:10   0     9G  0 lvm 
  │   ├─pve-vm--300--disk--0 252:11   0     9G  0 lvm 
  │   ├─pve-vm--102--disk--0 252:12   0     5G  0 lvm 
  │   ├─pve-vm--200--disk--1 252:13   0    20G  0 lvm 
  │   ├─pve-vm--200--disk--2 252:14   0  31.1T  0 lvm 
  │   ├─pve-vm--200--disk--3 252:15   0   250G  0 lvm 
  │   ├─pve-vm--104--disk--0 252:16   0     3G  0 lvm 
  │   ├─pve-vm--301--disk--0 252:17   0     7G  0 lvm 
  │   └─pve-vm--105--disk--0 252:18   0     5G  0 lvm 
  └─pve-data_tdata           252:3    0 337.9G  0 lvm 
    └─pve-data-tpool         252:4    0 337.9G  0 lvm 
      ├─pve-data             252:5    0 337.9G  1 lvm 
      ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm 
      ├─pve-vm--100--disk--1 252:7    0   250G  0 lvm 
      ├─pve-vm--101--disk--0 252:8    0     9G  0 lvm 
      ├─pve-vm--103--disk--0 252:9    0     3G  0 lvm 
      ├─pve-vm--201--disk--0 252:10   0     9G  0 lvm 
      ├─pve-vm--300--disk--0 252:11   0     9G  0 lvm 
      ├─pve-vm--102--disk--0 252:12   0     5G  0 lvm 
      ├─pve-vm--200--disk--1 252:13   0    20G  0 lvm 
      ├─pve-vm--200--disk--2 252:14   0  31.1T  0 lvm 
      ├─pve-vm--200--disk--3 252:15   0   250G  0 lvm 
      ├─pve-vm--104--disk--0 252:16   0     3G  0 lvm 
      ├─pve-vm--301--disk--0 252:17   0     7G  0 lvm 
      └─pve-vm--105--disk--0 252:18   0     5G  0 lvm


Initial thoughts + Questions:
I assumed, since I installed Proxmox on LVM, that the best way to change my boot drive would be a fresh install onto a mirrored ZFS pool. I guess I could instead copy the data onto a new drive, but that would carry over the LVM boot 'issue', which I doubt is best practice going forward.
  1. Is this best practice (to use mirrored ZFS for boot)?
  2. What happens to data stored in my current ZFS pool upon reinstallation?
    • Will I have to create the ZFS pool again (thus wiping all the data)? Or will Proxmox recognise the ZFS pool and reinstate it in the new installation? (See the sketch after this list.)
  3. Will my current LXC backups work to restore LXC data from the previous LVM storage to the new ZFS pool?
    • I assume I would, at a minimum, have to change mount points when restoring these backups?
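
From what I've read, my hope for question 2 is that a reinstall onto the NVMe drive leaves the HDD pool untouched and I can simply re-import it afterwards - something like the sketch below, assuming my pool is called "tank" (placeholder name) and needs -f because it was never exported. Please correct me if this is wrong.
Code:
# list any pools ZFS can see on the attached disks (non-destructive)
zpool import
# import the existing raidz1 pool; -f may be needed since it was never exported
zpool import -f tank
# register it with Proxmox as a storage entry again
pvesm add zfspool tank -pool tank -content images,rootdir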

Hardware + Questions:
  • Current Hardware:
    • 1x Gen3 M.2 NVMe SSD (failing)
    • 3x 16TB HDDs (6Gbps, connected via motherboard SATA headers)
    • Motherboard (Intel 12400 CPU, so limited bifurcation options):
      • 1x M.2 PCIe 3.0 x4 slot
      • 1x PCIe 4.0 x16 slot
      • 4x 6Gbps SATA connectors
  • Possible Upgrade:
    1. An M.2-to-5x-SATA expansion card + 4 new SATA SSDs in two mirrors (one mirror for the boot drive, the other for app config data). I'm pretty sure the HDDs would have to move onto these M.2 SATA ports rather than the boot SSDs, since I believe the card needs a driver before the system can read it, which wouldn't be available at the UEFI/BIOS stage - though please correct me if I am wrong.
    2. Or a PCIe-to-4x-M.2 self-bifurcating adapter + 4 new NVMe drives. This is a more expensive approach and uses up my only PCIe slot, and I'm not sure whether it runs into the same issue as above, i.e. unusable for booting?
      • If it does, I may need to look at a PCIe-to-SATA/HBA-style card and then swap the HDD and SSD SATA ports around, to ensure the boot SSDs are connected directly to the motherboard.
      • I am assuming Proxmox would still be able to identify the existing HDD ZFS pool even if the SATA connections change, i.e. from direct motherboard ports to an adapter card (PCIe or M.2)? (See the sketch after this list.)
    3. Or a new motherboard with 2x M.2 slots for the boot ZFS mirror. That makes the second mirror for my self-hosted apps a bit more difficult - I could move to a single drive as ZFS RAID0, but I'm pretty sure that couldn't be expanded with redundancy in the future, i.e. RAID1?
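
My understanding (please correct me) is that ZFS tracks its member disks by the labels/GUIDs written on the disks themselves rather than by controller port, so a check along these lines should come back the same whether the HDDs sit on the motherboard SATA ports or on an adapter card ("tank" is again a placeholder pool name):
Code:
# stable identifiers that survive port/controller changes
ls -l /dev/disk/by-id/
# import explicitly via the stable ids rather than sda/sdb/sdc
zpool import -d /dev/disk/by-id tank
# show the pool's member devices with their full paths
zpool status -P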

Does anyone have experience with the above or ideas for moving forward?
Also, is there a large, noticeable difference in speed for both boot and hosted apps between Gen3 NVMe and 6Gbps SATA? I've always used NVMe for boot, so I have no experience here.

I'm unsure of what my next steps should be. Let me know if there's any other data I can provide to give a better picture of the current situation.

Massively appreciate any support in advance,
Altorvo.
 
A couple of thoughts. If you have a separate storage device (NAS or similar) and can back up your LXCs and VMs to that, then assuming you don't have dozens and dozens of workloads to back up, it's probably easier to back them up and start from scratch with a fresh install. My homelab environment is pretty small, and this is what I have done several times. A fresh install takes like 15-20 minutes if you know what you are doing, and the restore from backups takes me another 5-10 minutes (only 5 VMs total).

As far as your hardware config goes, I have 4 different Proxmox nodes here that I use for various purposes: my main node, which runs my websites; a sandbox node for experimenting; a node for running Kubernetes; and a node for hosting Ansible and acting as a backup destination. I have configured storage 4 different ways just because of the hardware limitations of the equipment: all files on one NVMe, all files on a set of mirrored NVMes, separate boot mirrors and VM storage mirrors, and even NVMes on a PCIe bifurcation card for boot and/or VM storage. I think it is a waste of resources to put the boot drives on NVMe if you don't have to; in my experience, the system doesn't get any performance benefit. So, in my main system, I have a pair of small SATA SSDs (256GB) mirrored for the OS, and I have the VM storage in a separate mirror of NVMe drives. To me, that gives the best performance.
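
If it helps, the boot mirror is just selected as ZFS RAID1 in the installer; the VM-storage mirror gets added afterwards, roughly like the sketch below (device and pool names are placeholders, not my exact commands):
Code:
# create a mirrored pool on the two NVMe drives (device names are placeholders)
zpool create -o ashift=12 vmdata mirror \
  /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B
# add it to Proxmox for VM disks and container root filesystems
pvesm add zfspool vmdata -pool vmdata -content images,rootdir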

Also, I store almost no data on my main Proxmox node. I prefer to keep all my data on a separate NAS machine. My VMs have NFS shares mounted inside of them to store data, or, in the case of the Docker hosts, I use the Docker NFS driver to mount my Docker volumes directly on the NAS. This way, I keep the size of my VMs very small - they are all 32GB or less. Performance is perfectly fine for my needs, but I don't run any media servers. My workloads are mainly self-hosted WordPress websites, NextCloud, Vaultwarden, Paperless-ngx, Home Assistant, and the like. All of my workloads except WordPress and TrueNAS run as Docker containers.
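
For reference, the Docker side is just the built-in local volume driver with NFS options - roughly like this, where the NAS address and export path are placeholders:
Code:
# create a Docker volume that lives on the NAS over NFS (address and path are placeholders)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/mnt/tank/docker/nextcloud \
  nextcloud-data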
 
Thank you for the super-fast reply!

separate storage device (NAS or similar)
I currently have all my backups on a separate server running TrueNAS SCALE (backup NAS), so no issues there. Do you know if there will be any issues when restoring the backups onto the fresh install, particularly regarding the change of mounted storage and its file system type, i.e. from LVM to ZFS?
I'm assuming it'll be fine for retrieving the LXC config, and then I'll just have to edit the XYZ.conf to point at the correct mount. (All my VMs are test envs, so I will likely rebuild these anyway.)
I've tested the backups previously, and they have worked great, so I'm not expecting any issues there, but I have most of my services/configs saved, so it will be fairly quick to spin them back up either way.
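
What I'm assuming I'd do on the fresh install is restore each backup straight onto the new ZFS-backed storage and then re-point the mounts, roughly like the sketch below (the VMID, backup filename, storage names and paths are all placeholders):
Code:
# restore an LXC backup onto the new ZFS storage instead of local-lvm
pct restore 103 /mnt/pve/nas-backups/dump/vzdump-lxc-103-2024_11_30-00_00_00.tar.zst --storage local-zfs
# then re-point any mount points at their new location
pct set 103 -mp0 /tank/appdata/103,mp=/config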

Also, do you know whether Proxmox will identify the HDDs that are currently configured as RAIDZ1 upon reinstall? Or will I need to wipe the disks and reformat them as a new ZFS pool?

NVMEs on a PCI bifurcation
Could you share which PCIe bifurcation card you have, please? I've been looking all over, and it would be great to know what works with Proxmox!
Also, I really appreciate you sharing your experience - good idea just having a mirror of SATA SSDs for boot and mirrored NVMe for app/LXC/VM data.

Following your suggestion, I would likely either get a PCIe bifurcation card for 2x NVMe drives (VM/LXC/app data) plus an M.2-to-SATA adapter to hold the 2x SATA SSDs (OS), OR get a new motherboard (with 2x M.2 slots) and a PCIe-to-SATA card for the OS SSDs.

Thank you again in advance!