ZFS on rotational drives: very poor performance

pveperf is not a useful benchmark for this. Try it with something that either is your application or simulates your application.
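For example, a rough sketch of a sequential-write test with fio (the dataset path and size here are just placeholders, adjust them to your pool):

    # write a 13 GB file with 1 MiB blocks onto the ZFS dataset
    fio --name=seqwrite --rw=write --bs=1M --size=13G \
        --filename=/tank/fio-testfile --ioengine=libaio --iodepth=8

Avoid dd from /dev/zero for this kind of test: if compression is enabled on the dataset (the usual Proxmox default), the zeros compress away and the result looks much faster than a real file copy.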
So I tested copying a single file within the file server. As I said, the throughput keeps going up and down: it reaches 200 MB/s (megabytes), then drops to 0, and so on.
A 13 GB file takes more than 3 minutes, more than 3 times as long as it should.
On the old server with a hardware RAID card it would probably sustain more than 200 MB/s, but I haven't tried.
 
OK, that makes sense.

The behavior you describe suggests you have a problem with your hardware. Keep an eye on dmesg to see what messages pop up during IO, which should give some indication. A SMART test on your drives is probably in order as well.
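For example (device names are just examples, replace sdX with your drives):

    # follow kernel messages while the copy runs
    dmesg -wT

    # check SMART health and error counters, then start a long self-test
    smartctl -a /dev/sdX
    smartctl -t long /dev/sdX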
 
No messages in dmesg.
SMART via the Proxmox GUI is OK.
In iDRAC everything is green.
 
I run ZFS on standalone servers and Ceph on clustered servers.

Usually, on Dell servers, the write cache on the hard drives is disabled because it is assumed they will be used behind a BBU-backed RAID controller.

Since ZFS & Ceph don't play nice with RAID controllers and only work well with HBA controllers, you'll need to enable the write cache on the hard drives yourself.

On SAS drives, use the sdparm command (sdparm -s WCE=1 -S /dev/sd[x]). On SATA drives, use the hdparm command (hdparm -W 1 /dev/sd[x]).
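You can check the current state first, for example (sdX is a placeholder):

    sdparm -g WCE /dev/sdX      # SAS: WCE 1 means write cache enabled
    hdparm -W /dev/sdX          # SATA: shows write-caching on/off

and then enable it per drive with the commands above.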

You can confirm that the cache is enabled via the 'dmesg -t' command for each drive after rebooting the server.
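Something like this should show it (the exact wording of the kernel message may vary by drive type):

    dmesg -t | grep -i 'write cache'
    # e.g.  sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, ...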

P.S. If hdparm doesn't work, you have to use /etc/hdparm.conf and enable the write cache for each drive there.
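A minimal sketch of what that could look like in /etc/hdparm.conf (the device name is just an example):

    /dev/sda {
        write_cache = on
    }

One block per drive, then reboot (or re-run hdparm) for it to take effect.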
 