The company I work with is looking to move off VMware, and we've landed on trying Proxmox. I've personally jumped into Proxmox head first to test what I can for our team, and I've worked through many of the unknowns so we can be ready.
However, one item I don't have the setup to test at home is configuring shared network storage for the cluster, where the VM files will reside. Before my company commits half a million dollars in hardware to this leap, I've convinced the team to do a test run with $13,000 of enterprise-grade hardware off eBay so we can do some basic load testing, find out how live migrations behave, see how different issues surface, settle on a standard system for monitoring, etc.
The basic hardware setup we will be testing with:
3x Dell PE R740XD, each with 768GB RAM, dual Xeon 6152 CPUs, an Nvidia P40 GPU, a dual-port SFP28 NIC, and a BOSS card with a pair of 128GB M.2 SATA drives
10x Micron 9300 Pro 3.84TB NVMe drives to put in one of the servers as the cluster NAS
1x Dell 48-port SFP28 switch to connect it all together.
The planned setup for our testing -
Two of the servers will be configured as a Proxmox cluster, with the third holding the 10 NVMe drives as the shared storage for the cluster. I am planning to connect to it via iSCSI.
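For reference, the Proxmox-side attachment I'm expecting looks roughly like the sketch below (storage IDs, portal IP, IQN, and device path are all placeholders; the shared-LVM layer on top of the LUN is what lets both nodes see the same volumes for live migration, though as I understand it LVM over iSCSI gives up snapshots):

```
# Register the iSCSI target as a storage on the cluster (placeholders throughout)
pvesm add iscsi nas-iscsi --portal 10.0.0.10 --target iqn.2024-01.lab.example:vmstore

# On one node: create a volume group on the LUN the target exposes (/dev/sdX is the LUN)
pvcreate /dev/sdX
vgcreate vg_vmstore /dev/sdX

# Register the VG cluster-wide as shared LVM so VM disks can live-migrate
pvesm add lvm nas-lvm --vgname vg_vmstore --shared 1
```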
What I need help with... rephrase: the help I'm aware I need is deciding on an OS for the server that will act as the NAS. I want a RAID5/RAID6-style disk layout so the 10 x 3.84TB drives give me 30TB or more of storage for the VM disks. I've read a decent number of posts centering on TrueNAS Scale and RAIDZ1/RAIDZ2 (I haven't worked through the exact capacity formulas yet), and I'm just not quite on board with taking the IOPS hit or the storage-space hit that seems to come with them.
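To at least bound the space question, here's the back-of-the-envelope I've started from. It's a rough sketch: it ignores ZFS allocation padding, metadata, and the default slop reservation, which shave off a few more percent, but it shows where the parity cost and the TB-vs-TiB gap land for these 10 drives:

```python
# Rough usable-capacity comparison for 10 x 3.84 TB drives.
# Ignores ZFS padding/metadata/slop, so real pool numbers come in lower.
DRIVES = 10
SIZE_TB = 3.84                       # vendor terabytes (10**12 bytes)
SIZE_TIB = SIZE_TB * 1e12 / 2**40    # ~3.49 TiB, as the OS reports it

layouts = {
    "RAIDZ1 (1 parity drive)": DRIVES - 1,
    "RAIDZ2 (2 parity drives)": DRIVES - 2,
    "5 x 2-way mirrors": DRIVES // 2,
}

for name, data_drives in layouts.items():
    print(f"{name:26s} ~{data_drives * SIZE_TB:5.1f} TB "
          f"(~{data_drives * SIZE_TIB:5.1f} TiB) before overhead")
```

That works out to roughly 34.6 TB (31.4 TiB) for RAIDZ1, 30.7 TB (27.9 TiB) for RAIDZ2, and 19.2 TB (17.5 TiB) for mirrors, so RAIDZ2 is where the "30TB or more" target lands. On the IOPS side, the rule of thumb I keep running into is that each RAIDZ vdev delivers roughly one drive's worth of small-block random IOPS, so a single 10-wide RAIDZ2 vdev is the pessimistic case for VM workloads, while a stripe of five mirrors scales closer to five drives' worth; that seems to be the tradeoff behind the performance-hit warnings.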
Has anyone used a NAS distro that gets good performance and efficient space utilization, where I don't lose data if a drive fails, but I'm also not giving up half the drives to mirroring?