I have the following hardware and would like some advice on how best to configure the storage for high I/O performance with a reasonable level of redundancy.
The Proxmox host will run a mix of Linux and Windows VMs, but primarily Windows VMs running infrastructure services: AD, app server, file server, VDI, and Remote Desktop Session Hosts. The session host VMs will be the most resource-hungry, with up to 30 users per session host.
Dell PowerEdge R640
- 2 x Intel® Xeon® Scalable Gold, 16-core @ 2.10 GHz (Skylake), with Hyper-Threading and Intel VT virtualization
- 512 GB DDR4 ECC RAM
- 4 x 3.84 TB NVMe SSD Datacenter Edition
- 2 x 960 GB NVMe SSD Datacenter Edition
I currently have zfs_arc_max set to 16 GB with 8 TB of storage - should this be increased if there is sufficient host RAM? I would like to reserve as much RAM as possible for the VMs, since each user on a session host needs about 2-4 GB, and in previous experiments the VMs ended up competing with ZFS for memory when the ARC was left at its default cap of 50% of host RAM.
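For context, the 16 GB cap is currently applied via the ZFS kernel module option (16 GiB = 16 * 1024^3 = 17179869184 bytes) - a sketch of the config as I have it, assuming the stock Proxmox ZFS module:

```
# /etc/modprobe.d/zfs.conf
# Cap the ARC at 16 GiB (16 * 1024^3 = 17179869184 bytes)
options zfs zfs_arc_max=17179869184
```

After editing this file I run `update-initramfs -u` and reboot; the same value can also be written to `/sys/module/zfs/parameters/zfs_arc_max` to change the cap at runtime without a reboot.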