PVE new setup advice for drives of different sizes + cache

Ender519
Member
Jun 1, 2023
I am trying to compare Proxmox to Unraid on my setup, which is a Dell Precision Tower with 2 x 18TB SATA, 1 x 14TB SATA, 1 x 1TB NVMe, and 1 x 512GB NVMe.

What I had done with Unraid was an array of the 3 SATA drives with the 1TB NVMe as cache, and the 512GB NVMe passed through to a Windows VM. I will be running around 20 VMs or so. With Unraid it was JBOD, more or less. But I've been using Proxmox a lot longer than Unraid and I prefer it, so I'm trying to make this work.
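
In Proxmox, the simplest way I know to give the Windows VM that 512GB NVMe again is to attach it as a raw disk (full PCIe passthrough of the NVMe controller is also an option). Rough sketch only: VMID 100 is a placeholder and the by-id path is whatever /dev/disk/by-id shows for that drive.

Code:
# attach the whole 512GB NVMe (nvme0n1) to the Windows VM as a raw disk
qm set 100 -scsi1 /dev/disk/by-id/nvme-<id-of-the-512GB-disk>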

I can't find a good guide on how to set this up. As near as I can tell, ZFS is the way to go, but a redundant pool expects drives of the same size, so I'd lose 8TB if I made a single ZFS pool of the 3 drives. The other direction I figured out is making two volumes with XFS or EXT4: one LVM volume of 1 x 14TB and another of 36TB (2 x 18TB). But from there it's really not evident how to use the 1TB NVMe as cache for either or both of them. I would really prefer it to cache all the SATA drives.
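
The closest I got on the LVM side was lvmcache, which I think would look something like this (the VG/LV names here are made up, and with two separate data LVs you would carve the NVMe into two cache pools), but I never tested it:

Code:
pvcreate /dev/nvme1n1                          # make the NVMe an LVM physical volume
vgextend vg_hdd /dev/nvme1n1                   # add it to the VG holding the SATA drives
lvcreate --type cache-pool -l 100%FREE -n nvme_cache vg_hdd /dev/nvme1n1
lvconvert --type cache --cachepool vg_hdd/nvme_cache vg_hdd/data   # attach the cache to the data LV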

I don't mind if I end up with one 14TB disk and one 36TB disk if it comes to that, as long as I can get caching working. In a perfect world I would end up with 50TB + cache, but...

Anyway, can someone please help me do this the right way? It's much appreciated!!
 
I've gotten as far as this with tinkering...
1 ZFS pool with 2 x 18TB, sdb and sdc
1 ZFS pool with 1 x 14TB, sda
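
(rpool is what the Proxmox installer created across the two 18TB drives; the 14TB pool was added afterwards with roughly the following, the by-id path being the one that shows up in zpool status below:)

Code:
zpool create -o ashift=12 vmpool /dev/disk/by-id/ata-ST14000NM010G-2RN102_ZL2EPJT4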

All good, except I can only add the 1TB NVMe (nvme1n1) as cache to one of those, with zpool add rpool cache /dev/nvme1n1, and I can't find a guide on how to split the NVMe into two partitions and assign those as cache to the two ZFS pools. If I can do that, I think I'm solved.


Code:
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0  12.7T  0 disk
├─sda1        8:1    0  12.7T  0 part
└─sda9        8:9    0    64M  0 part
sdb           8:16   0  16.4T  0 disk
├─sdb1        8:17   0  1007K  0 part
├─sdb2        8:18   0     1G  0 part
└─sdb3        8:19   0  16.4T  0 part
sdc           8:32   0  16.4T  0 disk
├─sdc1        8:33   0  1007K  0 part
├─sdc2        8:34   0     1G  0 part
└─sdc3        8:35   0  16.4T  0 part
sdd           8:48   1   7.2G  0 disk
├─sdd1        8:49   1   224K  0 part
├─sdd2        8:50   1   2.8M  0 part
├─sdd3        8:51   1     1G  0 part
└─sdd4        8:52   1   300K  0 part
nvme1n1     259:0    0 953.9G  0 disk
nvme0n1     259:1    0 476.9G  0 disk
├─nvme0n1p1 259:2    0   100M  0 part
├─nvme0n1p2 259:3    0    16M  0 part
├─nvme0n1p3 259:4    0 476.2G  0 part
└─nvme0n1p4 259:5    0   593M  0 part

proxmox:~# zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          ata-ST18000NM000J-2TV103_ZR52F41W-part3  ONLINE       0     0     0
          ata-ST18000NM000J-2TV103_ZR50W7F7-part3  ONLINE       0     0     0

errors: No known data errors

  pool: vmpool
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        vmpool                               ONLINE       0     0     0
          ata-ST14000NM010G-2RN102_ZL2EPJT4  ONLINE       0     0     0

errors: No known data errors
 
And I've finally figured it out, after spending a couple of days banging my head against the wall. I'm leaving this here so others may benefit.
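
(One step first: if the whole NVMe is still attached to rpool as cache from the earlier experiment, detach it before repartitioning; cache devices can be removed on the fly:)

Code:
zpool remove rpool nvme1n1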

So, the trick was to split the 1TB NVMe into two partitions using fdisk, as follows:

Code:
Command (m for help): g
Created a new GPT disklabel (GUID: F25774F6-9023-D24D-AE28-B6DC7F88E809).

Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-2000409230, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2000409230, default 2000409230): 1000204615

Created a new partition 1 of type 'Linux filesystem' and of size 476.9 GiB.
Partition #1 contains a xfs signature.

Do you want to remove the signature? [Y]es/[N]o: y

The signature will be removed by a write command.

Command (m for help): n
Partition number (2-128, default 2):
First sector (1000204616-2000409230, default 1000206336):    ### I halved the number of sectors to make partition about 50% size
Last sector, +/-sectors or +/-size{K,M,G,T,P} (1000206336-2000409230, default 2000409230):

Created a new partition 2 of type 'Linux filesystem' and of size 476.9 GiB.
Partition #2 contains a xfs signature.

Do you want to remove the signature? [Y]es/[N]o: y

The signature will be removed by a write command.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
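
(If you'd rather not step through fdisk interactively, sgdisk from the gdisk package should do the same split in one shot. I did it the fdisk way above, so treat this as an untested alternative:)

Code:
sgdisk --zap-all /dev/nvme1n1               # wipe any existing partition table (destroys data on the disk)
sgdisk -n 1:0:+477G -n 2:0:0 /dev/nvme1n1   # partition 1 = ~477 GiB, partition 2 = the rest
partprobe /dev/nvme1n1                      # tell the kernel to re-read the partition table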

Once that was done, lsblk shows the two new partitions of equal size:
Code:
proxmox:~# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
...

nvme1n1     259:0    0 953.9G  0 disk
├─nvme1n1p1 259:8    0 476.9G  0 part
└─nvme1n1p2 259:9    0 476.9G  0 part
...

Now I added one partition to each ZFS pool, and it appears we're successful. So I think this is the best I can do unless someone has better ideas :).

Code:
zpool add rpool cache /dev/nvme1n1p1
zpool add vmpool cache /dev/nvme1n1p2

proxmox:~# zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          ata-ST18000NM000J-2TV103_ZR52F41W-part3  ONLINE       0     0     0
          ata-ST18000NM000J-2TV103_ZR50W7F7-part3  ONLINE       0     0     0
        cache
          nvme1n1p1                                ONLINE       0     0     0

errors: No known data errors

  pool: vmpool
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        vmpool                               ONLINE       0     0     0
          ata-ST14000NM010G-2RN102_ZL2EPJT4  ONLINE       0     0     0
        cache
          nvme1n1p2                          ONLINE       0     0     0

errors: No known data errors
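
One last note: these cache devices are L2ARC, so they only accelerate reads and losing one does not endanger the pool. To check whether they are actually being used once the VMs are running:

Code:
zpool iostat -v 5    # per-device I/O, including the cache devices, refreshed every 5 seconds
arc_summary          # ARC / L2ARC hit-rate statistics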