Advice on disk configuration

DarkCorner
Mar 2, 2024
I have already opened a thread about server migration, but in case that turns out not to be possible I'm opening this one about disk configuration for a fresh installation.

For my home lab I bought an old HP DL380 G8.
It has 25x 300GB SAS HDDs and 3x NVMe drives: two 500GB units on a PCIe card and one 250GB unit connected to the internal USB port.
I have both the internal SD connector and the 4 external USB ports free.
The memory is currently 64GB and there are 6 NICs connected to an HP switch on 1Gb ports.

The server is for lab testing; I want to learn Proxmox in case I need to use it for work. So even though the setup might be overkill for occasional use, I would like it to be able to simulate a production environment.

How do you recommend configuring the disks?

In the current configuration, which I didn't create, the first two disks are mirrored for the Debian and Proxmox partitions, while the boot partition is on an external USB stick.
I was told the USB stick is there because the server is unable to boot from those two disks.

This is what it looks like with the zpool iostat and status commands.
Bash:
# zpool iostat -v
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
hvm1-pool1                                     329G  5.40T      1     17  12.8K  88.2K
  raidz2                                       114G  1.79T      0      5  3.89K  28.9K
    scsi-35000c5006bef66f3                        -      -      0      0    600  4.15K
    scsi-35000c5008e7b44f7                        -      -      0      0    550  4.24K
    scsi-35000c5006ba23383                        -      -      0      0    557  4.20K
    scsi-35000c50071c6028f                        -      -      0      0    566  4.12K
    scsi-35000c5006ba107f3                        -      -      0      0    568  4.08K
    scsi-35000c50067fbd733                        -      -      0      0    559  4.06K
    scsi-35000cca016a855c8                        -      -      0      0    584  4.10K
  raidz2                                       111G  1.80T      0      5  3.63K  28.9K
    scsi-35000c50088edfcbb                        -      -      0      0    548  4.22K
    scsi-35000c5006c13e06b                        -      -      0      0    500  4.20K
    scsi-35000cca0165e17c0                        -      -      0      0    509  4.07K
    scsi-35000c5006c3d029f                        -      -      0      0    532  4.12K
    scsi-35000c5006c3db52b                        -      -      0      0    573  4.02K
    scsi-35000cca0169fb1c4                        -      -      0      0    511  4.08K
    scsi-35000cca016a8703c                        -      -      0      0    545  4.15K
  raidz2                                       104G  1.80T      0      5  4.35K  29.3K
    scsi-35000c5008e246df7                        -      -      0      0    655  4.10K
    scsi-35000c50088f465f3                        -      -      0      0    628  4.17K
    scsi-35000c5007f2d287f                        -      -      0      0    603  4.20K
    scsi-350000395a811c938                        -      -      0      0    669  4.22K
    scsi-35000c50067fbab2b                        -      -      0      0    582  4.25K
    scsi-35000cca03c4198f8                        -      -      0      0    639  4.23K
    scsi-35000cca016a76e70                        -      -      0      0    673  4.16K
logs                                              -      -      -      -      -      -
  mirror                                      2.93M   957M      0      0    986  1.06K
    nvme-Sabrent_9128071317F600103604-part1       -      -      0      0    459    543
    nvme-Sabrent_9128071317F600103615-part1       -      -      0      0    527    543
cache                                             -      -      -      -      -      -
  nvme-Sabrent_9128071317F600103604-part3     84.2G   216G      1      0  27.7K    121
  nvme-Sabrent_9128071317F600103604-part4     84.7G  90.3G      1      0  27.7K    170
  nvme-Sabrent_9128071317F600103615-part3     85.9G   214G      1      0  28.3K    211
  usb-Realtek_RTL9210B_NVME_012345679167-0:0  86.6G   146G      1      0  28.3K    214
--------------------------------------------  -----  -----  -----  -----  -----  -----
hvm1-pool2                                    24.9G   253G      0      0  2.66K  6.35K
  mirror                                      24.9G   253G      0      0  2.13K  5.28K
    scsi-35000cca06e09aae0                        -      -      0      0  1.28K  2.64K
    scsi-35000cca06e09b120                        -      -      0      0    874  2.64K
logs                                              -      -      -      -      -      -
  mirror                                       748K   959M      0      0    544  1.07K
    nvme-Sabrent_9128071317F600103604-part2       -      -      0      0    272    549
    nvme-Sabrent_9128071317F600103615-part2       -      -      0      0    272    549
cache                                             -      -      -      -      -      -
  nvme-Sabrent_9128071317F600103615-part4     16.0G   159G      0      0  5.38K     62
--------------------------------------------  -----  -----  -----  -----  -----  -----
root@hvm1:~# zpool status
  pool: hvm1-pool1
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                          STATE     READ WRITE CKSUM
        hvm1-pool1                                    ONLINE       0     0     0
          raidz2-0                                    ONLINE       0     0     0
            scsi-35000c5006bef66f3                    ONLINE       0     0     0
            scsi-35000c5008e7b44f7                    ONLINE       0     0     0
            scsi-35000c5006ba23383                    ONLINE       0     0     0
            scsi-35000c50071c6028f                    ONLINE       0     0     0
            scsi-35000c5006ba107f3                    ONLINE       0     0     0
            scsi-35000c50067fbd733                    ONLINE       0     0     0
            scsi-35000cca016a855c8                    ONLINE       0     0     0
          raidz2-1                                    ONLINE       0     0     0
            scsi-35000c50088edfcbb                    ONLINE       0     0     0
            scsi-35000c5006c13e06b                    ONLINE       0     0     0
            scsi-35000cca0165e17c0                    ONLINE       0     0     0
            scsi-35000c5006c3d029f                    ONLINE       0     0     0
            scsi-35000c5006c3db52b                    ONLINE       0     0     0
            scsi-35000cca0169fb1c4                    ONLINE       0     0     0
            scsi-35000cca016a8703c                    ONLINE       0     0     0
          raidz2-2                                    ONLINE       0     0     0
            scsi-35000c5008e246df7                    ONLINE       0     0     0
            scsi-35000c50088f465f3                    ONLINE       0     0     0
            scsi-35000c5007f2d287f                    ONLINE       0     0     0
            scsi-350000395a811c938                    ONLINE       0     0     0
            scsi-35000c50067fbab2b                    ONLINE       0     0     0
            scsi-35000cca03c4198f8                    ONLINE       0     0     0
            scsi-35000cca016a76e70                    ONLINE       0     0     0
        logs
          mirror-3                                    ONLINE       0     0     0
            nvme-Sabrent_9128071317F600103604-part1   ONLINE       0     0     0
            nvme-Sabrent_9128071317F600103615-part1   ONLINE       0     0     0
        cache
          nvme-Sabrent_9128071317F600103604-part3     ONLINE       0     0     0
          nvme-Sabrent_9128071317F600103604-part4     ONLINE       0     0     0
          nvme-Sabrent_9128071317F600103615-part3     ONLINE       0     0     0
          usb-Realtek_RTL9210B_NVME_012345679167-0:0  ONLINE       0     0     0

errors: No known data errors

  pool: hvm1-pool2
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME                                         STATE     READ WRITE CKSUM
        hvm1-pool2                                   ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            scsi-35000cca06e09aae0                   ONLINE       0     0     0
            scsi-35000cca06e09b120                   ONLINE       0     0     0
        logs
          mirror-1                                   ONLINE       0     0     0
            nvme-Sabrent_9128071317F600103604-part2  ONLINE       0     0     0
            nvme-Sabrent_9128071317F600103615-part2  ONLINE       0     0     0
        cache
          nvme-Sabrent_9128071317F600103615-part4    ONLINE       0     0     0

errors: No known data errors
 
I have not seen your server migration post, so I will leave my feedback here regarding the storage.
If you truly want to simulate a production-level environment, I would recommend adding at least 2 more nodes and distributing the SAS drives you have among them. On a single-node setup you will miss out on practising/learning cluster-related features such as VM replication, HA, cluster management, etc.

The other 2 nodes do not need to be robust at all. You could also add a Proxmox Backup Server as the 3rd node so you can learn how to leverage PBS in a production-grade environment. Setting up a Proxmox cluster is not hard, as many will agree. It is when things go wrong, and you have to find the root cause and a solution quickly, that your knowledge comes into play.
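
If it helps, forming the cluster itself is only a couple of commands; something along these lines (the cluster name and IP below are placeholders):
Bash:
# on the first node, create the cluster (name is arbitrary)
pvecm create homelab

# on each additional node, join by pointing at an existing member's IP
pvecm add 192.168.1.10

# check quorum and membership afterwards
pvecm status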

As for the storage configuration you currently have, it is adequate for a single node, except for the USB boot. You may want to figure out how to boot from an NVMe so you do not have to rely on an external USB stick; relying on USB is not ideal in an enterprise environment.

Also, without knowing how much data you will put on this single node: with 64GB of RAM you may run into performance issues given all the ZFS vdevs you have configured. The more data you put on it, the more your storage performance will suffer. Again, it depends on how much data and how many resources you are going to allocate to the VM(s).
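
If ZFS's ARC starts competing with the VMs for that 64GB, the usual knob is to cap it; a rough sketch, with 16 GiB as an arbitrary example value (size it for your workload):
Bash:
# limit the ZFS ARC to 16 GiB (value is in bytes); takes effect on next boot
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# or apply the same limit immediately at runtime
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max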

I would focus on booting Proxmox from NVMe and keep a small raidz2 vdev of 7 drives on this node. That leaves you with enough drives to go around for 2 more nodes. Your current situation may not allow adding 2 nodes, but it at least gives you some idea.
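
For reference, a single 7-disk raidz2 vdev is created roughly like this (pool name and device IDs are placeholders; use your real /dev/disk/by-id/ paths):
Bash:
zpool create -o ashift=12 tank raidz2 \
  scsi-DISK1 scsi-DISK2 scsi-DISK3 scsi-DISK4 \
  scsi-DISK5 scsi-DISK6 scsi-DISK7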
 
As @wahmed suggested, it's all dependent on what you intend for it.

On a single host, your configuration has a number of non-sane elements; rather than trying to "fix" it, I'd say use your NVMes as their own pool (either as boot or just as a fast pool) and deploy the rest of the drives as striped mirrors. And for god's sake, DON'T USE USB DRIVES IN YOUR ZPOOL. You don't need cache or SLOGs; they won't benefit you.
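
Roughly what I mean, as a sketch only (pool and device names are placeholders, and you'd keep adding mirror pairs until the SAS disks are used up):
Bash:
# SAS disks as striped mirrors -- keep appending "mirror diskA diskB" pairs
zpool create -o ashift=12 tank \
  mirror scsi-DISK01 scsi-DISK02 \
  mirror scsi-DISK03 scsi-DISK04 \
  mirror scsi-DISK05 scsi-DISK06

# the two PCIe NVMes as their own fast pool
zpool create -o ashift=12 fastpool mirror nvme-DISK-A nvme-DISK-B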
 
Thanks for your reply.
Forgive me, but I'm new to Proxmox; I only know the basics.

To create a cluster I can only use an older server (G7 instead of G8) with fewer disks (8 instead of 25).
I was thinking of using it for Proxmox Backup Server; I think PBS is already installed on it (I haven't turned it on yet since buying it).

I found the use of cache and log devices recommended on many pages related to ZFS; why shouldn't they be useful here?

I wanted to set up ZFS with RAID-Z1/2 for greater safety, since the disks are also refurbished.
Even if I had bought a new server, though, I would still have preferred ZFS over the controller's hardware RAID. Or am I wrong?

From the limited documentation that came with this server, Debian was installed with the /boot partition on the USB stick because booting from the disks never worked.
As far as I understand, at boot the G8 does not recognize the two NVMes mounted on the PCIe card, while the two SAS disks could be partitioned; it was the /boot partition that didn't work.
However, I can try a fresh installation.

The server is for a test environment only; I will create several VMs to simulate various types of networks with VLANs.
Since the VMs won't be in production they can get by with little memory, but I can always buy more.
 
Unfortunately I have to confirm that this server cannot boot from the two disks.
On restart, the "Non-System disk or disk error" message appears.
It is not even possible to create a mirror from the HPE panel.
Even installing PVE on the NVMe connected to the internal USB port gives the same "Non-System disk or disk error".
I also tried using just one HDD, but I can't boot that way either.

Considering that it is a test environment, unfortunately I am forced to boot from a USB stick.
 
I found the use of cache and log devices recommended on many pages related to ZFS; why shouldn't they be useful here?
It would be useful to actually cite what you read. Both of those features exist for specific use cases which aren't likely to apply to you. External SLOG devices are meant to provide a buffer when synchronous write IOPS pressure is greater than what the underlying vdevs can process; in your case, with striped mirrors you'd have up to 12 vdevs available to handle writes, and you aren't likely to produce enough pressure. If you're really wondering, you can start by accounting for what your cluster is meant to provide services for. As for L2ARC devices, this may help you understand: https://www.truenas.com/community/threads/at-what-point-does-l2arc-make-sense.17373/

In my experience, L2ARC doesn't provide any actual benefit for a home lab. Repurposing your NVMes as a fast pool would be infinitely more useful, for example as a place to park your databases.
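
If you want to check for yourself whether those devices are earning their keep, the ARC counters and the pool's own iostat give a rough answer (pool name taken from your paste):
Bash:
# overall ARC hit/miss counters -- a high hit rate means L2ARC has little to add
awk '$1 == "hits" || $1 == "misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# watch the "logs" and "cache" rows: in your paste they see almost no traffic
zpool iostat -v hvm1-pool1 5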

I wanted to mount ZFS with RAID-Z1/2 for greater security since the disks are also refurbished.
Your line of thinking is correct, but it would be just as correct to want fault tolerance for any storage device; they all fail. Parity RAID isn't the only way to get there, however; mirrors have the benefit of providing more vdevs and being much faster to resilver (rebuild).
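
(For context, a resilver is what runs when you swap a disk back in; with a mirror only the surviving partner has to be read, not every disk in a raidz vdev. Pool and device names below are placeholders.)
Bash:
# replace a failed disk and watch the resilver progress
zpool replace tank scsi-FAILED-DISK scsi-NEW-DISK
zpool status tank
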
Unfortunately I have to confirm that this server cannot boot from the two disks.
Why is that? You can either create a RAID1 in SSA, or use the Proxmox installer to do a ZFS mirror at installation; just make sure your server is set to UEFI boot mode.

On restart, the "Non-System disk or disk error" message appears.
Check your boot mode and order in the BIOS. Pressing F9 during boot will show you what's available.
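
From a running system you can also sanity-check how it actually booted:
Bash:
# this directory only exists when the system was booted in UEFI mode
ls -d /sys/firmware/efi

# on an installed PVE host, lists the ESPs/bootloader entries it manages
proxmox-boot-tool status

# inspect the UEFI boot entries and their order
efibootmgr -v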
 
I also use TrueNAS SCALE; I hadn't seen that page, but I've read others on TrueNAS and ZFS in general.

I don't use DBs; I use the NAS for file storage and backups.
On Proxmox there are only VMs; since this is a test environment I don't have any DBs here, and even if there were production VMs (which I don't foresee), the DBs would live inside the VMs.

Maybe I'm wrong (and that's why I'm asking for advice), but I thought L2ARC was meant to free up as much RAM as possible for the VMs, and to help with asynchronous writes.

As for the boot disk, I've tried everything, but there is obviously a problem with the server. Being EOL I cannot access support, and the community solutions have not helped.
So I installed Debian first, putting the 3 partitions expected by PVE on a SAS HDD and only the /boot partition on the USB stick.
This is the only way to start PVE, at least at the moment.
F9 reports the correct configuration.
 
I don't use DBs; I use the NAS for file storage and backups.
I think this may have been the first thing to address altogether: why do you want PVE? What will be the payload?

Maybe I'm wrong (and that's why I'm asking for advice), but I thought L2ARC was meant to free up as much RAM as possible for the VMs, and to help with asynchronous writes.
L2ARC doesn't do that; you end up consuming RAM for less effect, since every block cached on the L2ARC device still needs a header kept in RAM.
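
You can see that overhead directly; the headers for everything cached on the L2ARC live in RAM:
Bash:
# RAM consumed just to index the blocks sitting on the L2ARC devices
grep '^l2_hdr_size' /proc/spl/kstat/zfs/arcstats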

As for the boot disk, I've tried everything
Can't speak to that; I had Proxmox on G8 equipment without issue.
 
