Hello,
My server has an LSI 9361-8i (2 GB cache, BBU) with 8x 14 GB SAS disks.
The CPU is an AMD Ryzen 7 PRO 3700 8-core processor (1 socket, 16 threads) with 128 GB of RAM.
I'm benchmarking Proxmox 8.4; the boot drive is an SSD.
Main storage was initialized as hardware RAID5, with LVM on top of the virtual disk.
I ran some "load" fio benchmarks; here are the results:
iowait stays around 60%, almost no CPU usage, load around 10.
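For reference, a fio invocation along these lines would generate that kind of load (all parameters here are assumptions; the actual job file/command line isn't shown in the post):

```shell
# Hypothetical fio job approximating the "load" bench described above.
# Path, block size, queue depth, job count and runtime are all assumptions,
# not the original command. Echoed here so it can be reviewed before running.
FIO_CMD='fio --name=loadtest --directory=/mnt/bench --rw=randwrite --bs=4k \
  --iodepth=32 --numjobs=4 --size=4G --direct=1 --runtime=60 --group_reporting'
echo "$FIO_CMD"
```

With --direct=1 the page cache is bypassed, so the numbers reflect the storage stack rather than RAM.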
I've read a lot about ZFS (better / stronger), so I broke my RAID5 and exposed the disks as JBOD.
Then I created a raidz1 pool and ran the same test:
iowait went up to 75%, CPU usage around 30%, load around 100!
Then I created a striped-mirror ("RAID10") pool and ran the same test:
iowait went up to 85%, CPU usage around 15%, load around 100! Yes, 100 again!
Did I miss something?
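For anyone wanting to reproduce this, the two pool layouts I tested look roughly like this (pool name and device names are placeholders, not my actual devices):

```shell
# Sketch of the two ZFS layouts tested on the eight JBOD disks.
# "tank" and sda..sdh are placeholders. Echoed rather than executed,
# since zpool create is destructive to the listed devices.
echo 'zpool create tank raidz1 sda sdb sdc sdd sde sdf sdg sdh'
echo 'zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh'
```

The second form (four 2-way mirror vdevs) is what ZFS users usually mean by "RAID10".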
Does hardware RAID definitively outperform ZFS in terms of performance and overall impact on the server?
I'm frustrated because I've read a lot about ZFS, especially with Proxmox, but here it's the opposite...
Thanks
Nsc