Hello!
I have used MDADM for years and would happily continue to do so, but I heard it's not officially supported, and I do like ZFS's features, so I would really like to get ZFS working well.
I have the following system configuration:
- Proxmox VE 8.1.4
- AMD Ryzen 9 7940HS CPU
- 32 GB RAM
- 2x WD_BLACK 1TB SN770 SSDs in ZFS Mirror configuration (OS Disk, VM & LXC storage)
- Sabrent DS-SC4B 4-bay 10-Gbit USB-C (UAS) enclosure (JBOD, not RAID) with 2x WD RED Pro 16TB drives in ZFS Mirror configuration (Data Storage, SMB)
- 2.5 Gbit Ethernet
Code:
~# zpool status
  pool: naspool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        naspool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sun Feb 11 00:24:02 2024
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b4a0de958-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b4a007bc3-part3  ONLINE       0     0     0

errors: No known data errors
~# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda             8:0    0  14.6T  0 disk
├─sda1          8:1    0  14.6T  0 part
└─sda9          8:9    0     8M  0 part
sdb             8:16   0  14.6T  0 disk
├─sdb1          8:17   0  14.6T  0 part
└─sdb9          8:25   0     8M  0 part
nvme1n1       259:0    0 931.5G  0 disk
├─nvme1n1p1   259:1    0  1007K  0 part
├─nvme1n1p2   259:2    0     1G  0 part
└─nvme1n1p3   259:3    0 930.5G  0 part
nvme0n1       259:4    0 931.5G  0 disk
├─nvme0n1p1   259:5    0  1007K  0 part
├─nvme0n1p2   259:6    0     1G  0 part
└─nvme0n1p3   259:7    0 930.5G  0 part
When copying multiple large 5 GB test files to the HDD-based pool over SMB, the copy runs at ~283 MB/s for the first 10-11 GB and then drops to ~160 MB/s for the rest of the test. At the same time, I can see the IO Delay in Proxmox jump from ~5% to ~45%. Copying to the SSD-based pool maintains ~283 MB/s for the whole test.
I also tested MDADM + RAID1 + LVM + EXT4 on those same two hard drives instead of ZFS (it took 24 hours to initialize that array!), and that performed at a constant ~230 MB/s with IO Delay varying between 5% and 8%.
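To take SMB and the network out of the picture, I could also repeat the test locally with fio, roughly along these lines (just a sketch; it assumes the HDD pool is mounted at its default /naspool path):
Code:
# sequential write straight onto the pool, bypassing SMB
# 4 jobs x 5 GB to roughly match the multi-file copy; fsync at the end so
# the result includes the data actually reaching the disks
fio --name=seqwrite --directory=/naspool --rw=write --bs=1M --size=5G \
    --numjobs=4 --ioengine=libaio --iodepth=8 --end_fsync=1 --group_reporting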
I used ashift=12 when creating the HDD-based pool and have not configured a separate ZIL (SLOG) or L2ARC. I searched for a configuration step I might be missing, but could not find anything definitive. My best guess is that ZFS is writing its metadata/log to the HDDs (and maybe reading something as well) while the copy is in progress, which slows things down since they are spinning disks.
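For reference, these are the checks I'm planning to run in case one of these settings is the culprit (naspool is the HDD pool from the zpool status above; I only explicitly set ashift, so the rest should still be at defaults):
Code:
# confirm ashift actually ended up as 12 on the pool
zpool get ashift naspool

# dataset properties that seem most relevant to sequential write speed
zfs get recordsize,compression,sync,atime,xattr naspool

# watch per-disk throughput and latency live while a copy is running (1s interval)
zpool iostat -v -l naspool 1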
I know that RAID over USB enclosures generally doesn't have the best track record, but I have tested this specific one extensively and it has been very stable. It also uses UAS, so raw performance is good. Still, maybe there are additional reasons why this particular setup is not ideal for ZFS:
Code:
~# lsusb
...
Bus 002 Device 006: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Bus 002 Device 005: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
~# lsusb -t
...
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 10000M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/2p, 10000M
        |__ Port 1: Dev 4, If 0, Class=Hub, Driver=hub/4p, 10000M
            |__ Port 1: Dev 5, If 0, Class=Mass Storage, Driver=uas, 10000M
            |__ Port 2: Dev 6, If 0, Class=Mass Storage, Driver=uas, 10000M
I do have those two NVMe SSDs in a mirrored ZFS pool available if a separate ZIL (SLOG) needs to be set up (how would I do that? Would I have to shrink the existing rpool?), but I don't really want to wear them out - they are decent SSDs, but not enterprise-grade, so I'm not sure that's a good idea.
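From what I've read, adding a mirrored SLOG would look something like the following, but I'd first have to carve out small partitions on the NVMe drives, since rpool currently uses all of their space (the partition names below are hypothetical). I've also read that a SLOG only helps synchronous writes, so I'm not sure it would even change much for SMB copies:
Code:
# hypothetical: nvme0n1p4 / nvme1n1p4 would be small new partitions (e.g. ~16 GB)
# created specifically for the log device
zpool add naspool log mirror /dev/nvme0n1p4 /dev/nvme1n1p4

# and to take it out again later if it doesn't help
# (the vdev name, e.g. mirror-1, shows up in zpool status after adding it)
zpool remove naspool mirror-1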
Any help with this would be appreciated!