Dell R730 with SSDs and SAS drives

ilium007

I have a Dell R730 with 2 x Enterprise 800GB SSDs and 6 x 1.2TB 12k SAS drives.

I was planning on setting up the 2 x 800GB SSDs as a ZFS mirror, 2 x 1.2TB SAS drives as a ZFS mirror, and 4 x 1.2TB SAS drives as a ZFS RAID10 array. Which mirror should I install Proxmox on - the SSDs or the 1.2TB mirror? Installing on the 1.2TB mirror would leave the SSDs for VM guest disks. I will be running mixed loads - Windows Server, some Debian machines and some LXCs (UniFi and WireGuard).

On the RAID10 array I will have a 500GB virtio-blk disk to pass to Windows Server for file storage. The 1.2TB mirror will hold storage for PBS backups.
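Roughly what I have in mind for that disk, assuming the Windows Server VM ends up with ID 100 and the RAID10-backed storage is called "local-zfs" (both names are placeholders):

Code:
# Allocate a new 500 GB volume on the ZFS-backed storage and attach it to
# VM 100 as a VirtIO block device (VM ID and storage name are examples)
qm set 100 --virtio1 local-zfs:500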
 
First: make sure to have direct access to the drives (HBA mode) - no "RAID mode" at the controller level involved, please.

During installation of PVE: create a ZFS striped-mirror (aka "RAID10") pool of all six (or four) SAS drives. This will create a single "rpool" of three mirrors for the OS and for the VMs.
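Conceptually, the installer builds something like this (hypothetical disk IDs; the real installer also creates boot/EFI partitions, so treat this as a sketch only):

Code:
# Three striped mirrors ("RAID10") of the six SAS drives
zpool create rpool \
  mirror /dev/disk/by-id/scsi-SASDISK1 /dev/disk/by-id/scsi-SASDISK2 \
  mirror /dev/disk/by-id/scsi-SASDISK3 /dev/disk/by-id/scsi-SASDISK4 \
  mirror /dev/disk/by-id/scsi-SASDISK5 /dev/disk/by-id/scsi-SASDISK6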

Later, when the system is up and running, add a "special device"(!) built from those two SSDs, also mirrored. This point is crucial; you will find guidance on how to do that somewhere...
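Something roughly like this - a sketch only, with placeholder /dev/disk/by-id paths (your serials will differ):

Code:
# Add a mirrored special vdev; mirroring matters, because losing a
# non-redundant special vdev means losing the whole pool
zpool add rpool special mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_1 \
  /dev/disk/by-id/ata-EXAMPLE_SSD_2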

This would be my personal approach, ymmv!
 
special_small_blocks=1M
Yes/no, I do this also - but with a lower value.

Keep in mind: a wrong (too high) value will result in all data going into the special device - which is not what you want. I don't have the link available, but that effect is well documented... somewhere.
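As a rough illustration of what I mean (dataset name and value are examples only): if special_small_blocks is equal to or larger than a dataset's recordsize, every data block counts as "small" and lands on the special device; 0 (the default) stores only metadata there.

Code:
# Compare the two properties side by side
zfs get recordsize,special_small_blocks rpool/data
# Keep the threshold well below the recordsize (128K by default) so that
# only genuinely small blocks go to the SSDs
zfs set special_small_blocks=16K rpool/data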
 
Thanks - I've been searching for a few hours now for an easy-to-understand grounding in special devices. I'll go through the Proxmox doco on the topic again.
 
So I have rebuilt PVE using the 6 x 1.2TB SAS drives in a RAID10 array.

I have added the ZFS special device to the rpool (that's the only pool I had):

Code:
root@pve01:~# zpool add rpool special mirror /dev/sdg /dev/sdh
root@pve01:~# zfs set special_small_blocks=4K rpool/data

Is this correct? Should I now create datasets to hold the VM disks, or do I create them on the "local" storage (zfs rpool)?

Are my 800GB SATA SSDs now wasted space?

Code:
root@pve01:~# zpool list -v
NAME                               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                             3.98T  7.67G  3.98T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                        1.09T  2.52G  1.08T        -         -     0%  0.22%      -    ONLINE
    scsi-35000cca02d9c48d0-part3  1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9c4af8-part3  1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-1                        1.09T  2.58G  1.08T        -         -     0%  0.23%      -    ONLINE
    scsi-35000cca02d9c4b0c-part3  1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd3c8-part3  1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-2                        1.09T  2.54G  1.08T        -         -     0%  0.22%      -    ONLINE
    scsi-35000cca02d9cc264-part3  1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd9c4-part3  1.09T      -      -        -         -      -      -      -    ONLINE
special                               -      -      -        -         -      -      -      -         -
  mirror-3                         744G  34.6M   744G        -         -     0%  0.00%      -    ONLINE
    sdg                            745G      -      -        -         -      -      -      -    ONLINE
    sdh                            745G      -      -        -         -      -      -      -    ONLINE
root@pve01:~#
 
Can anyone give me a hand with this please? I'm at a loss as to what to do next. Before I build my VMs I need to get the disk layout correct.
From what I've read, the SSDs can be partitioned so they serve both as a special device mirror and as general ZFS data storage, but I'm not sure what partition layout would work alongside the special device mirror.
 
Before I build my VMs I need to get the disk layout correct.
There is no one-recommendation-fits-all, sorry.

You've added that special device already. You can NOT remove it anymore, afaik. So the first step would be to destroy the complete pool... and recreate it with the same three main mirrors.

Then partition sdg/sdh. When adding those partitions to the pool, do NOT use these names but the corresponding ones from /dev/disk/by-id/ or so. "sdg/sdh" can and will change sooner or later --> creating confusion and hiccups.
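A rough sketch of that step, with made-up sizes and device paths (adjust partition numbers, sizes and type codes to your situation):

Code:
# Create a partition for the special device on each SSD (sgdisk from the
# gdisk package; size and type code are only examples)
sgdisk -n1:0:+50G -t1:bf01 /dev/disk/by-id/ata-EXAMPLE_SSD_1
sgdisk -n1:0:+50G -t1:bf01 /dev/disk/by-id/ata-EXAMPLE_SSD_2
# Then add the partitions - not the whole disks - as the mirrored special vdev
zpool add rpool special mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_1-part1 \
  /dev/disk/by-id/ata-EXAMPLE_SSD_2-part1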

A special device might be as small as 0.3% of the pool, so maybe 12 GB would be enough for your 4 TB. Of course some headroom is recommended, to prepare for future extensions of the main disks. My point being: "it is small", as long as we are talking metadata only.

But: with the optional "special_small_blocks" in play I would opt for a much larger size, some hundreds of GB - without a substantiated reason, just from my gut. Note that you need to tune some parameters to actually use that space, and only newly written data will benefit.

See https://forum.proxmox.com/threads/zfs-metadata-special-device.129031/post-699290 for other hints and a cool trick I learned there just yesterday :-)

You may also add an SLOG as a mirror. This helps only with sync writes, but hey, as long as your SSDs are enterprise class...
For a single 1 GBit/s ingress line its size could be just 1875 MB (3 txg intervals x 5 seconds x 125 MB/s); for 10 GBit/s --> 18.75 GB is the absolute maximum. A larger SLOG does NOT help. Never.
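Adding it would look roughly like this, again with placeholder by-id paths and partition numbers:

Code:
# Small mirrored SLOG; anything beyond ~3 x txg interval x line rate is wasted
zpool add rpool log mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_1-part2 \
  /dev/disk/by-id/ata-EXAMPLE_SSD_2-part2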

The rest of the space is up to you, as is the complete construct ;-)

Just my 2 €¢...
 
Thanks - this helps a lot. Because I had installed PVE on the RAID10 as per the first response and, as you said, added the whole SSD devices as the special device mirror, I will need to rebuild the machine.

Re. this.
In my pools with special device, I also have a dataset named SSD with a special_small_blocks value of 0, so that everything is going to SSD.
Does this mean I could create a new dataset on my spinning-rust RAID10 (the one I allocated the whole 800GB SSDs to as a special device), set special_small_blocks to 0 on the dataset named rpool/SSD, and somehow have all data written to that dataset go to the SSD mirror?

I created a new dataset:
Code:
root@pve01:~# zfs create rpool/SSD

Set the special_small_blocks attribute and checked the result:
Code:
root@pve01:~# zfs get special_small_blocks
NAME                          PROPERTY              VALUE                 SOURCE
rpool                         special_small_blocks  4K                    local
rpool/ROOT                    special_small_blocks  4K                    inherited from rpool
rpool/ROOT/pve-1              special_small_blocks  4K                    inherited from rpool
rpool/SSD                     special_small_blocks  0                     local
rpool/data                    special_small_blocks  4K                    inherited from rpool
rpool/data/subvol-101-disk-0  special_small_blocks  4K                    inherited from rpool
rpool/data/vm-100-disk-0      special_small_blocks  -                     -
rpool/data/vm-100-disk-1      special_small_blocks  -                     -
rpool/data/vm-102-disk-0      special_small_blocks  -                     -
rpool/data/vm-102-disk-1      special_small_blocks  -                     -
rpool/data/vm-102-disk-2      special_small_blocks  -                     -
rpool/data/vm-102-disk-3      special_small_blocks  -                     -
rpool/var-lib-vz              special_small_blocks  4K                    inherited from rpool
root@pve01:~#

I wrote a 1 GB file to /rpool/SSD but saw no difference in space used:

Code:
root@pve01:~# zpool list -v
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                      3.98T   121G  3.87T        -         -     0%     2%  1.00x    ONLINE  -
  mirror-0                                 1.09T  39.8G  1.05T        -         -     0%  3.58%      -    ONLINE
    scsi-35000cca02d9c48d0-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9c4af8-part3           1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-1                                 1.09T  39.9G  1.05T        -         -     0%  3.59%      -    ONLINE
    scsi-35000cca02d9c4b0c-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cc264-part3           1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-2                                 1.09T  40.0G  1.05T        -         -     0%  3.59%      -    ONLINE
    scsi-35000cca02d9cd9c4-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd3c8-part3           1.09T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-3                                  744G   846M   743G        -         -     0%  0.11%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN   745G      -      -        -         -      -      -      -    ONLINE
    sdh                                     745G      -      -        -         -      -      -      -    ONLINE
root@pve01:~#
root@pve01:~# dd if=/dev/random of=/rpool/SSD/test1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.25514 s, 204 MB/s
root@pve01:~# zpool list -v
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                      3.98T   121G  3.87T        -         -     0%     2%  1.00x    ONLINE  -
  mirror-0                                 1.09T  39.9G  1.05T        -         -     0%  3.59%      -    ONLINE
    scsi-35000cca02d9c48d0-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9c4af8-part3           1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-1                                 1.09T  40.1G  1.05T        -         -     0%  3.60%      -    ONLINE
    scsi-35000cca02d9c4b0c-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cc264-part3           1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-2                                 1.09T  40.1G  1.05T        -         -     0%  3.60%      -    ONLINE
    scsi-35000cca02d9cd9c4-part3           1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd3c8-part3           1.09T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-3                                  744G   847M   743G        -         -     0%  0.11%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN   745G      -      -        -         -      -      -      -    ONLINE
    sdh                                     745G      -      -        -         -      -      -      -    ONLINE
root@pve01:~#

From that post you linked I assume @LnxBil has a way to send data to the SSD dataset without it hitting the spinning rust drives.
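My current working assumption (not taken from that post, so happy to be corrected): special_small_blocks=0 means no data blocks land on the special device at all, so to push everything in a dataset onto the SSDs the threshold would have to be at least the dataset's recordsize, something like:

Code:
# Assumption: with the threshold equal to the recordsize, every block of
# this dataset counts as "small" and is written to the special vdev
zfs set recordsize=128K rpool/SSD
zfs set special_small_blocks=128K rpool/SSD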
 
So I have rebuilt the PVE server. During install I created a 128GB ZFS mirror on the SSDs for rpool. Post-install I created the RAID10 array (pool0) on the 6 SAS drives, created 2 additional partitions on the SSDs, and added those partitions as a special device on the pool0 RAID10 array.

Code:
root@pve01:~# zpool list -v
NAME                                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0                                            3.86T  1.02M  3.86T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                       1.09T   280K  1.09T        -         -     0%  0.00%      -    ONLINE
    scsi-35000cca02d9c48d0                       1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9c4af8                       1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-1                                       1.09T   288K  1.09T        -         -     0%  0.00%      -    ONLINE
    scsi-35000cca02d9c4b0c                       1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cc264                       1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-2                                       1.09T   272K  1.09T        -         -     0%  0.00%      -    ONLINE
    scsi-35000cca02d9cd3c8                       1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd9c4                       1.09T      -      -        -         -      -      -      -    ONLINE
special                                              -      -      -        -         -      -      -      -         -
  mirror-3                                        616G   204K   616G        -         -     0%  0.00%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN-part4   617G      -      -        -         -      -      -      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV7231045E800CGN-part4   617G      -      -        -         -      -      -      -    ONLINE
rpool                                             126G  1.42G   125G        -         -     0%     1%  1.00x    ONLINE  -
  mirror-0                                        126G  1.42G   125G        -         -     0%  1.12%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN-part3   127G      -      -        -         -      -      -      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV7231045E800CGN-part3   127G      -      -        -         -      -      -      -    ONLINE
root@pve01:~#

root@pve01:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
pool0              828K  3.16T    96K  /pool0
pool0/SSD           96K  3.16T    96K  /pool0/SSD
rpool             1.42G   121G   104K  /rpool
rpool/ROOT        1.42G   121G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.42G   121G  1.42G  /
rpool/data          96K   121G    96K  /rpool/data
rpool/var-lib-vz    96K   121G    96K  /var/lib/vz
root@pve01:~#

root@pve01:~# zfs set special_small_blocks=4K pool0
root@pve01:~#
root@pve01:~# zfs get special_small_blocks
NAME              PROPERTY              VALUE                 SOURCE
pool0             special_small_blocks  4K                    local
pool0/SSD         special_small_blocks  0                     local
rpool             special_small_blocks  0                     default
rpool/ROOT        special_small_blocks  0                     default
rpool/ROOT/pve-1  special_small_blocks  0                     default
rpool/data        special_small_blocks  0                     default
rpool/var-lib-vz  special_small_blocks  0                     default
root@pve01:~#

I now just need to understand how to force the data I choose (i.e. OS disks for VMs) onto the SSDs using /pool0/SSD and special_small_blocks=0.
 
At this point I'm just talking to myself, but I have ended up with a layout like this:

Code:
root@pve01:~# zpool list -v
NAME                                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0                                             492G   516K   492G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                        492G   516K   492G        -         -     0%  0.00%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN-part2   495G      -      -        -         -      -      -      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV7231045E800CGN-part2   495G      -      -        -         -      -      -      -    ONLINE
rpool                                            3.50T  1.59G  3.50T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                       1.09T   522M  1.09T        -         -     0%  0.04%      -    ONLINE
    scsi-35000cca02d9c48d0-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9c4af8-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-1                                       1.09T   560M  1.09T        -         -     0%  0.04%      -    ONLINE
    scsi-35000cca02d9c4b0c-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cc264-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
  mirror-2                                       1.09T   548M  1.09T        -         -     0%  0.04%      -    ONLINE
    scsi-35000cca02d9cd3c8-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca02d9cd9c4-part3                 1.09T      -      -        -         -      -      -      -    ONLINE
special                                              -      -      -        -         -      -      -      -         -
  mirror-3                                        248G  2.83M   248G        -         -     0%  0.00%      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV72300620800CGN-part1   250G      -      -        -         -      -      -      -    ONLINE
    ata-SSDSC2BB800G7R_PHDV7231045E800CGN-part1   250G      -      -        -         -      -      -      -    ONLINE
root@pve01:~#
root@pve01:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
pool0              516K   477G    96K  /pool0
pool0/data          96K   477G    96K  /pool0/data
rpool             2.52G  3.15T   104K  /rpool
rpool/ROOT        2.52G  3.15T    96K  /rpool/ROOT
rpool/ROOT/pve-1  2.52G  3.15T  2.52G  /
rpool/data          96K  3.15T    96K  /rpool/data
rpool/var-lib-vz    96K  3.15T    96K  /var/lib/vz
root@pve01:~#
root@pve01:~# zfs get special_small_blocks
NAME              PROPERTY              VALUE                 SOURCE
pool0             special_small_blocks  0                     default
pool0/data        special_small_blocks  0                     default
rpool             special_small_blocks  4K                    local
rpool/ROOT        special_small_blocks  4K                    inherited from rpool
rpool/ROOT/pve-1  special_small_blocks  4K                    inherited from rpool
rpool/data        special_small_blocks  4K                    inherited from rpool
rpool/var-lib-vz  special_small_blocks  4K                    inherited from rpool
root@pve01:~#

6 x 1.2TB SAS spinning drives in RAID10 as rpool (with the special device attached)
2 x 800GB SSDs, each partitioned into:
1 x 250GB partition for the mirrored special device
1 x 500GB partition for ZFS pool0 to hold VM OS disks (storage registration sketched below)
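To actually put VM OS disks on pool0 I still need to register it as a Proxmox storage; a minimal sketch, with "ssd-vm" as an arbitrary storage ID:

Code:
# Register the SSD-backed dataset for VM images and container root disks
pvesm add zfspool ssd-vm --pool pool0/data --content images,rootdir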
 