Hey Guys,
New year, new challenges.
This year I picked filesystems, and after three weeks of reading documentation, tutorials, and best practices I've run into a dead end.
I'm using an existing power-optimized server:
- 16 GB RAM
- 1x 250 GB NVMe SSD (Proxmox + VMs)
- 1x 256 GB SATA SSD (cache/log)
- 2x 2 TB SATA HDD (RAID1: data)
- 3x 3 TB SATA HDD (RAID1 + spare: data)
The whole system isn't designed to be fast; the VMs only run internal DNS, a NAS, and some small services.
For my tests I made setups like ZFS on both the SSD and the 2 TB HDDs as RAID1; the 3 TB HDDs stayed on ext4. I tested compression=off/lz4, dedup=on/off, and several cache/log constellations on the 256 GB SSD, roughly like the sketch below.
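A minimal sketch of that kind of pool, assuming the two 2 TB disks are /dev/sdb and /dev/sdc and the SATA SSD carries two partitions for log and cache (all device names and the pool name "tank" are placeholders, not my real ones):

```
# Mirrored (RAID1-style) data pool on the two 2 TB HDDs
zpool create tank mirror /dev/sdb /dev/sdc

# SLOG (sync write log) and L2ARC (read cache) on SSD partitions
zpool add tank log /dev/sdd1
zpool add tank cache /dev/sdd2

# The property combinations I cycled through
zfs set compression=lz4 tank   # vs. compression=off
zfs set dedup=off tank         # vs. dedup=on
```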
In my second round I made setups with btrfs on the NVMe SSD and LUKS + btrfs on the 2 TB HDDs as RAID1 (see the sketch below).
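Again only a rough sketch with placeholder device names and mount point, not my exact commands:

```
# Encrypt both data disks and open the mappings
cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup open /dev/sdb data1
cryptsetup open /dev/sdc data2

# btrfs RAID1 for data and metadata on top of the LUKS mappings
mkfs.btrfs -d raid1 -m raid1 /dev/mapper/data1 /dev/mapper/data2
mount /dev/mapper/data1 /mnt/data
```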
No matter what I do, the maximum write transfer rate on ZFS is between 40 and 60 MB/s, while with btrfs I get 130-160 MB/s (measured as sketched below). Is ZFS really that much slower? I didn't find any errors, and no tuning tips really worked for me. So the conclusion should be btrfs, especially once production data comes into play.

But one HDD randomly throws I/O errors under btrfs. The disk itself is fine (SMART, fsck, and scrub are clean), the SATA cable has been changed (3 different types), the controller has been changed (PCIe and onboard), and the source data comes from the 3 TB HDDs, i.e. the same setup that runs without any error under ZFS.
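For context, by "transfer rate (write)" I mean a plain sequential write, along these lines (illustrative commands with placeholder paths and devices, not my exact invocations):

```
# Sequential write; conv=fdatasync waits for data to actually hit disk.
# Random input, because with compression=lz4 zeros would compress away.
dd if=/dev/urandom of=/mnt/data/testfile bs=1M count=4096 conv=fdatasync

# The health checks mentioned above for the flaky disk
smartctl -a /dev/sdb
btrfs scrub start /mnt/data && btrfs scrub status /mnt/data
```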
Actually I would prefer btrfs because of the much higher speed on the same hardware, but potential data loss isn't an option.
So... what would you do? Did I miss something?
Kind regards
N0Tallow3D (to leave ext4 *g*)