using zfs for pbs datastore

ioo

Renowned Member
Oct 1, 2011
Hi!

Let me first say that Proxmox Backup Server already works quite wonderfully (and not because I use it so well, but rather because you created it so well :) Since I intend to set up a new PVE + PBS pair, I would like to ask for confirmation about using ZFS for the PBS datastore (PBS 2.4 and kernel 6.2). For the datastore I have 9 physical HDDs like this one (the operating system root is on mdadm RAID, i.e. separate):

# smartctl -a /dev/sdd

=== START OF INFORMATION SECTION ===
Model Family: Seagate Enterprise Capacity 3.5 HDD
Device Model: ST6000NM0024-1HT17Z
Serial Number: Z4D4N9YN
LU WWN Device Id: 5 000c50 092f253ba
Firmware Version: SN05
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 (minor revision not indicated)
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Mar 30 11:35:59 2023 EEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

for the zpool, and I intend to create a raidz2 pool like this (I think ashift=12, i.e. a 4096-byte block size, is appropriate):

# zpool create -o ashift=12 zp_pbs raidz2 /dev/disk/by-id/ata-ST6000NM0024-1HT17Z_Z4D4N9YN ...

What I wonder is what else would be useful to configure for ZFS:

1. I already have plenty of room, so I intend not to turn on ZFS compression (and PBS does its own compression anyway).
2. Should I create a dedicated dataset for the PBS datastore? (So that the dataset 'zp_pbs' is configured with mountpoint=none and a dataset, say 'zp_pbs/datastore_1st', is mounted under /srv/pbs/datastore_1st.)
3. Should I use acltype=posixacl and xattr=sa?
4. Should I use relatime?
5. Maybe you could advise me on something in this ZFS configuration department that I don't even know to ask about :)
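To make point 2 concrete, the dataset layout I have in mind would be something like the following (a sketch only; the pool name and mountpoint are just my intended ones):

```shell
# Keep the pool's root dataset unmounted and put the datastore in a child dataset.
zfs set mountpoint=none zp_pbs
zfs create -o mountpoint=/srv/pbs/datastore_1st zp_pbs/datastore_1st
```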

I don't have a need to over-configure ZFS; I would rather keep it simple but still configure what is useful. I looked into https://openzfs.github.io/openzfs-docs/Getting Started/Debian/Debian Bullseye Root on ZFS.html and from there I mostly got an idea of what may be relevant (although I do not intend to have a root-on-ZFS system):

zpool create \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl -O xattr=sa -O dnodesize=auto \
-O compression=lz4 \
-O normalization=formD \
-O relatime=on \
-O canmount=off -O mountpoint=/ -R /mnt \
rpool ${DISK}-part4


I would be thankful if you could make suggestions about these options.

Best regards,


Imre
 
Generally we do not recommend using RAIDZn for the PBS datastore, particularly with HDDs. For optimal performance we recommend using (enterprise-grade) SSDs that are set up as mirrored vdevs (comparable to RAID10).

Since PBS performance depends strongly on good random IO performance, RAIDZn with HDDs is far from an optimal setup. A RAIDZn vdev only delivers the IOPS of a single(!) disk, and HDDs in general have poor random IO performance, so this can lead to serious problems further down the road.

When using HDDs, we strongly recommend using at least a so-called ZFS special device that stores the pool's metadata in order to speed up common operations such as garbage collection. Please note that this special device is a single point of failure for your zpool, so it needs at least the same redundancy as the pool itself.
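Adding a special device to an existing pool could look like the following sketch (the by-id device names are placeholders, not real disks; since RAIDZ2 survives two disk failures, a three-way mirror would match its redundancy):

```shell
# Sketch only: attach a mirrored special vdev for metadata.
# The device paths below are placeholders.
zpool add zp_pbs special mirror \
  /dev/disk/by-id/nvme-EXAMPLE-1 \
  /dev/disk/by-id/nvme-EXAMPLE-2 \
  /dev/disk/by-id/nvme-EXAMPLE-3
```

Keep in mind that, as far as I know, a vdev cannot be removed again from a pool that contains RAIDZ top-level vdevs, so this should be planned before adding it.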

xattr=sa and relatime should improve performance, and you can turn them on. But I would suggest rethinking your disk layout before going into such optimizations. I cannot say anything about acltype=posixacl - maybe someone else can chime in there.
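Those properties can simply be set on the pool's root dataset so that child datasets inherit them, e.g. (a sketch, using the pool name from your post):

```shell
zfs set xattr=sa zp_pbs
zfs set relatime=on zp_pbs
zfs set acltype=posixacl zp_pbs   # only if you actually need POSIX ACLs
```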

You can read about this in our PBS documentation as well [1]

[1] https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements
 
Hi!

Sorry for replying back so late, but I wanted to report from my side how this proceeded. I went with SSD disks as you suggested, and in principle I am really happy with the outcome, although at least for the moment I use a plain raidz2:

Code:
# zpool status -t
  pool: zp_pbs
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
config:

    NAME                                  STATE     READ WRITE CKSUM
    zp_pbs                                DEGRADED     0     0     0
      raidz2-0                            DEGRADED     0     0     0
        ata-CT4000MX500SSD1_2246E686FDFF  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FE07  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FE48  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FE5D  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FE5E  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FF7D  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FF85  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FF8B  FAULTED     10     0     0  too many errors  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FFBD  ONLINE       0     0     0  (trim unsupported)
        ata-CT4000MX500SSD1_2246E686FFC3  ONLINE       8     0     0  (trim unsupported)

errors: No known data errors

Maybe I should have bought more expensive disks; for now I chose these:

Code:
=== START OF INFORMATION SECTION ===
Device Model:     CT4000MX500SSD1
Serial Number:    2246E686FE07
LU WWN Device Id: 5 00a075 1e686fe07
Firmware Version: M3CR045
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed May 10 20:13:15 2023 EEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

In general, in my experience they are good disks, although in this particular case one of them has something going on; I need to look into that. I won't take up many other topics in this thread, but just as a side note: in my case those disks sit behind a CP400i controller working in a non-intrusive mode, i.e. smartctl -a /dev/sdX shows the authentic disk data etc.; the computer is an old Fujitsu RX2540M2R3. The result is that trimming is not possible:

Code:
# hdparm -I /dev/disk/by-id/ata-CT4000MX500SSD1_2246E686FF85 | grep -i trim
       *    Data Set Management TRIM supported (limit 8 blocks)

My understanding is that if the SSD had been reported like this, then trimming would have worked:

Code:
# hdparm -I /dev/sda | grep -i trim
       *    Data Set Management TRIM supported (limit 8 blocks)
       *    Deterministic read ZEROs after TRIM

I also went with ashift=13, relatime and posixacl. I was surprised to see from the 'mount' command output that relatime is the Debian default anyway for ext4, so I was sceptical at first, but have actually used it for some time:

Code:
# mount | grep ext4
/dev/mapper/ubuntu--vg-root_pbs on / type ext4 (rw,relatime,errors=remount-ro)

And the good SSD performance shows itself in my case in that 23 TB of data, in around 100 groups and 2000 snapshots, goes through verify in 6 hours (with a read speed of around 600 MB/s); in my experience verify is the most IO-consuming operation. Backup, restore, sync and garbage collection performance is also good. In general this specific setup is a kind of prototype to convince myself that the technology works well in my use case; maybe I need to switch over to fresher, enterprise-grade metal later.


Best regards,

Imre
 
