Hi All,
What are my options and best practices for changing my Proxmox VE boot drive? I've read a few posts on this subject, but I'm still stuck; none have covered all of the questions I have, so I'm hoping a couple of people will grace me with their experience.
I'm particularly interested in whether a Proxmox reinstallation will recognise the existing ZFS pool regardless of which SATA port the disks are on, and whether my current LXC backups will restore correctly when moving from LVM to a ZFS boot pool. (More details below.)
Current Set-up:
- As shown below, I currently have a single 500GB NVMe drive that holds all the PVE boot data. I've also mounted local-lvm storage into most of my LXCs for easy access to app config data (shared over SMB so I can reach it from any local device - I know it's not the best system, but it's what I had to hand).
- I also have a raidz1-0 ZFS pool (sda,sdb,sdc).
Code:
root@avprox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0  14.6T  0 disk
├─sda1                         8:1    0  14.6T  0 part
└─sda9                         8:9    0     8M  0 part
sdb                            8:16   0  14.6T  0 disk
├─sdb1                         8:17   0  14.6T  0 part
└─sdb9                         8:25   0     8M  0 part
sdc                            8:32   0  14.6T  0 disk
├─sdc1                         8:33   0  14.6T  0 part
└─sdc9                         8:41   0     8M  0 part
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 464.8G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.4G  0 lvm
  │ └─pve-data-tpool         252:4    0 337.9G  0 lvm
  │   ├─pve-data             252:5    0 337.9G  1 lvm
  │   ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm
  │   ├─pve-vm--100--disk--1 252:7    0   250G  0 lvm
  │   ├─pve-vm--101--disk--0 252:8    0     9G  0 lvm
  │   ├─pve-vm--103--disk--0 252:9    0     3G  0 lvm
  │   ├─pve-vm--201--disk--0 252:10   0     9G  0 lvm
  │   ├─pve-vm--300--disk--0 252:11   0     9G  0 lvm
  │   ├─pve-vm--102--disk--0 252:12   0     5G  0 lvm
  │   ├─pve-vm--200--disk--1 252:13   0    20G  0 lvm
  │   ├─pve-vm--200--disk--2 252:14   0  31.1T  0 lvm
  │   ├─pve-vm--200--disk--3 252:15   0   250G  0 lvm
  │   ├─pve-vm--104--disk--0 252:16   0     3G  0 lvm
  │   ├─pve-vm--301--disk--0 252:17   0     7G  0 lvm
  │   └─pve-vm--105--disk--0 252:18   0     5G  0 lvm
  └─pve-data_tdata           252:3    0 337.9G  0 lvm
    └─pve-data-tpool         252:4    0 337.9G  0 lvm
      ├─pve-data             252:5    0 337.9G  1 lvm
      ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm
      ├─pve-vm--100--disk--1 252:7    0   250G  0 lvm
      ├─pve-vm--101--disk--0 252:8    0     9G  0 lvm
      ├─pve-vm--103--disk--0 252:9    0     3G  0 lvm
      ├─pve-vm--201--disk--0 252:10   0     9G  0 lvm
      ├─pve-vm--300--disk--0 252:11   0     9G  0 lvm
      ├─pve-vm--102--disk--0 252:12   0     5G  0 lvm
      ├─pve-vm--200--disk--1 252:13   0    20G  0 lvm
      ├─pve-vm--200--disk--2 252:14   0  31.1T  0 lvm
      ├─pve-vm--200--disk--3 252:15   0   250G  0 lvm
      ├─pve-vm--104--disk--0 252:16   0     3G  0 lvm
      ├─pve-vm--301--disk--0 252:17   0     7G  0 lvm
      └─pve-vm--105--disk--0 252:18   0     5G  0 lvm
Initial thoughts + Questions:
Since I originally installed Proxmox on LVM, I assumed the best way to change my boot drive would be a fresh install onto a mirrored ZFS pool. I suppose I could instead clone the existing data onto a new drive, but that would carry the LVM boot layout over, which I doubt is best practice going forward.
- Is this best practice (to use mirrored ZFS for boot)?
- What happens to the data stored in my current ZFS pool upon reinstallation?
- Will I have to create the ZFS pool again (thus wiping all the data), or will Proxmox recognise the existing pool so it can be reinstated in the new installation? (A rough sketch of what I think the re-import looks like is just below this list.)
- Will my current LXC backups work to restore the LXC data from the previous LVM storage to the new ZFS pool? (Also sketched below.)
- I assume that, at a minimum, I would have to change mount points when restoring these backups?
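My working assumption (please correct me if it's wrong) is that, as long as I don't select the HDDs in the installer, the reinstall won't touch the pool and I can simply re-import it and re-add it as storage afterwards. A rough sketch of what I mean, with 'tank' standing in for my actual pool name:
Code:
# before the reinstall: cleanly export the HDD pool (optional, but tidy)
zpool export tank

# after the reinstall: import it into the new system
# (-f is usually needed because the pool was last used by the old installation)
zpool import -f tank

# tell PVE about it again so it shows up as storage for guests
pvesm add zfspool tank --pool tank --content images,rootdir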
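On the backup side, this is roughly the workflow I have in mind, assuming the dumps live somewhere that survives the reinstall (the HDD pool or an external disk), the new container storage ends up being called 'local-zfs', and CT 101 is just an example ID - the archive path, mount path and size below are placeholders:
Code:
# before wiping the boot drive: fresh backup of each container
vzdump 101 --storage <backup-storage> --mode stop --compress zstd

# after the reinstall: restore onto the new ZFS-backed storage
pct restore 101 /path/to/vzdump-lxc-101-<date>.tar.zst --storage local-zfs

# example: re-create an extra 8G mount point on the ZFS storage at /appdata
# if the old local-lvm volumes no longer exist
pct set 101 -mp0 local-zfs:8,mp=/appdata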
Hardware + Questions:
- Current Hardware:
  - 1x Gen3 M.2 NVMe SSD (failing)
  - 3x 16TB HDDs (6Gbps, connected to the motherboard SATA ports)
  - Motherboard (Intel 12400 CPU, so limited bifurcation options):
    - 1x M.2 PCIe 3.0 x4 slot
    - 1x PCIe 4.0 x16 slot
    - 4x 6Gbps SATA ports
- Possible Upgrades:
  - An M.2-to-5x-SATA expansion card + 4 new SATA SSDs in two mirrors (one mirror for the boot drive, the other for app config data). I'm fairly sure I'd have to move my HDDs onto these M.2 SATA ports rather than boot from them, since the card presumably needs a driver the system can't load at the UEFI/BIOS stage - please correct me if I'm wrong.
  - Or a PCIe-to-4x-M.2 adapter that handles its own bifurcation + 4 new NVMe drives. This is a more expensive approach and uses up my only PCIe slot, and I'm not sure whether it runs into the same issue as above, i.e. being unusable for the boot scenario?
    - If it does, I may need to look at a PCIe SATA/HBA card instead and swap the HDD and SSD SATA ports around, so the boot SSDs stay connected directly to the motherboard.
  - I'm assuming Proxmox would still identify the existing HDD ZFS pool even though the SATA connections change, i.e. moving from direct motherboard ports to an adapter card (PCIe or M.2)? (See the check sketched after this list.)
  - Or a new motherboard with 2x M.2 slots for the boot ZFS mirror - this makes the second mirror for my self-hosted apps a bit more awkward; I could move to a single drive in ZFS raid0, but I'm pretty sure that couldn't be expanded later with redundancy, i.e. raid1?
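For what it's worth, my understanding is that ZFS identifies pool members by the labels/GUIDs written on the disks themselves, not by the port they hang off, so moving the HDDs to an adapter card should only change the device paths. This is roughly how I'd expect to check and re-import after re-cabling ('tank' again standing in for my actual pool name):
Code:
# scan attached disks by their stable ids and list any importable pools
zpool import -d /dev/disk/by-id

# import using those stable ids so /dev/sdX reshuffling doesn't matter
zpool import -d /dev/disk/by-id tank
zpool status tank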
Does anyone have experience with the above or ideas for moving forward?
Also, is there a large, noticeable difference in speed for both boot and hosted apps between Gen3 NVMe and 6Gbps SATA? I've always used NVMe for boot, so I have no experience here.
I'm unsure of what my next steps should be. Let me know if there's any other data I can provide to give a better picture of the current situation.
Massively appreciate any support in advance,
Altorvo.