Hi guys
Installed a few servers using SATA disks (no SSDs) in RAID10 via the Proxmox installer. It's for internal use, but I want the best performance I can get.
Servers run the following:
Intel Xeon E5-1620 3.5GHz
6 x 1 TB enterprise SATA disks at 7200 RPM
ZFS installed in RAID10, of course
64 GB ECC memory, with the ZFS ARC (zfs_arc_max) limited to 24 GB
atime set to off and primarycache changed to metadata
Also set swappiness to 10 as per the Proxmox wiki.
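In case it helps, this is roughly how those settings can be checked on the Proxmox host (just a sketch: rpool is the installer's default pool name, adjust if yours differs, and 25769803776 is simply 24 GB in bytes):

cat /sys/module/zfs/parameters/zfs_arc_max   # current ARC cap in bytes (24 GB = 25769803776)
zfs get atime,primarycache rpool             # verify atime=off and primarycache=metadata
sysctl vm.swappiness                         # should report 10
# the ARC cap is kept persistent via /etc/modprobe.d/zfs.conf:
# options zfs zfs_arc_max=25769803776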
Now I've moved some guests from our hardware RAID setup to this server under OpenVZ, and I'm seeing some lag at times. Even the few KVM VPSes seem to "freeze" randomly about once a day for some odd reason.
Will this help:
http://letsgetdugg.com/2009/10/21/zfs-slow-performance-fix/
I'm quoting this part:
"
SATA disks do Native Command Queuing while SAS disks do Tagged Command Queuing, this is an important distinction. Seems like OpenSolaris/Solaris is optimized for the latter with a 32 wide command queue set by default. This completely saturates the SATA disks with IO commands in turn making the system unusable for short periods of time.
Dynamically set the ZFS command queue to 1 to optimize for NCQ.
echo zfs_vdev_max_pending/W0t1 | mdb -kw
And add to /etc/system
set zfs:zfs_vdev_max_pending=1
Enjoy your OpenSolaris server on cheap SATA disks!"
How do I check this?
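For what it's worth, mdb and /etc/system are Solaris tools; on a Proxmox/Linux host the ZFS tunables are exposed as kernel module parameters instead. A rough sketch of how to look for the equivalent here, with the caveat that the parameter may be named zfs_vdev_max_pending on older ZFS-on-Linux releases and may have been replaced by the zfs_vdev_*_max_active tunables on newer ones:

ls /sys/module/zfs/parameters/ | grep -i vdev          # list the vdev/queue tunables your module exposes
modinfo zfs | grep -i vdev                             # same list with short descriptions
cat /sys/module/zfs/parameters/zfs_vdev_max_pending    # read the current value, if the parameter exists
echo 1 > /sys/module/zfs/parameters/zfs_vdev_max_pending   # change it at runtime (as root)
# persistent across reboots via /etc/modprobe.d/zfs.conf:
# options zfs zfs_vdev_max_pending=1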