Hi all,
I'm fairly new to ZFS and I'm after a bit of advice on the best config for my setup.
I have some old-ish hardware, but it was right for my budget and I have plenty of spares to see me into the medium-term future:
* Dell R510
* 12 magnetic drives - 10 x 600GB 15k SAS, 2 x 2TB 7.2k
* 16 cores
* 64GB RAM
I'm converting these servers from ESXi. I installed the first one using my existing hardware RAID config (/dev/sda is RAID 6 on the 10 x 15k drives, /dev/sdb is RAID 1 on the 2 x 2TB drives). PVE 5.4-3 is installed with root on ext4 and swap on the same LVM, and the remainder of /dev/sda is ZFS for VM images. /dev/sdb is entirely ZFS and holds backups/ISOs. ZFS compression is set to lz4. It's working well and overall read/write performance is much better than on ESXi, but I do get extremely slow performance when there's intensive IO on the host (such as qemu-img work) and the guests grind to a virtual halt. It recovers once the disk IO calms down, and it's not a huge issue as that sort of operation is rare.
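(For reference, compression was just set with the standard property; "rpool" below is a placeholder for my actual pool name:)

    # enable lz4 and check the resulting compression ratio - 'rpool' is a placeholder
    zfs set compression=lz4 rpool
    zfs get compression,compressratio rpool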
I'm about to convert my second server and I'm thinking of doing it differently, now that I've learned some lessons and read a bit more. From what I've read, I believe I should disable the PERC and give the drives entirely to ZFS. If I were to replicate the same layout, I would RAIDZ2 the 10 x 15k drives and RAIDZ1 the 2 x 2TB drives. I've also just read that I should limit zfs_arc_max to 50% of my RAM and tune swappiness down to 10. But I'm unsure whether this drive config is best practice, and whether it would help mitigate the IO choke I described above. I'm on a limited budget, so I can't fork out for any SSDs at this point (maybe in future).
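To make it concrete, this is roughly what I'm planning to run after wiping the PERC config. The pool names "tank" and "backup" are placeholders, the DISKn paths stand in for the real /dev/disk/by-id/ entries, and ashift=12 is just the default I've seen recommended - please correct me if any of that is wrong for these drives:

    # 10 x 600GB 15k SAS in RAIDZ2 (ashift=12 assumes 4K-friendly sectors)
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
        /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8 \
        /dev/disk/by-id/DISK9 /dev/disk/by-id/DISK10

    # 2 x 2TB 7.2k in RAIDZ1
    zpool create -o ashift=12 backup raidz1 /dev/disk/by-id/DISK11 /dev/disk/by-id/DISK12

    # cap ARC at 32GiB (50% of 64GB RAM), value is in bytes
    echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf
    update-initramfs -u

    # reduce swappiness to 10
    echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf
    sysctl --system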
As ZFS is new to me, and given some past disasters I've had with software RAID, I'm nervous about disabling the PERC, but I'm open to changing my prejudice on that.
Any suggestions please?
Thank you,
Rich