First off, sorry for another SSD thread; I see a bunch of topics on this, but some seem to contradict each other.
I obtained a new-to-me Dell R710, tossed in an H310 controller, and crossflashed it to IT mode with the BIOS so I can boot from the drives. I've now been reading for hours trying to determine whether I should put the new PVE install on SSDs or use spinning drives. I was originally going to just buy large spinning drives and not worry about wearout, but I know my IO is going to suffer badly and that isn't going to be acceptable.
Now comes my issue, which I've been reading and reading and reading on for about two weeks now.
Is it the installation drive(s) that take all the repeated writes, or is it the drives that house the VMs?
I currently have just the PVE server running, with no VMs/CTs on it, using 2x 1TB Seagate 7200 rpm drives in a ZFS RAID 1 pool, and it's fluctuating but hitting 1.6% IO delay...
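(Side note: to get an actual write number rather than eyeballing the IO delay graph, this is the rough sampling script I've been planning to run. It's just a sketch: psutil and the sda/sdb device names are my assumptions for the two mirror members, nothing Proxmox-specific.)

```python
# Rough sketch: sample total bytes written over an interval to see what write
# load the pool is actually taking. Adjust DISKS to the real mirror members.
import time
import psutil  # third-party: pip install psutil

DISKS = ["sda", "sdb"]   # assumption: the two ZFS mirror members
INTERVAL = 60            # seconds to sample

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk in DISKS:
    written = after[disk].write_bytes - before[disk].write_bytes
    print(f"{disk}: {written / 1024**2:.1f} MiB written in {INTERVAL}s "
          f"(~{written * 86400 / INTERVAL / 1024**3:.1f} GiB/day at this rate)")
```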
I've been reading up on the Kingston DC500M SSDs and was thinking about going with 2x DC500M 480 GB (1139 TBW) drives for the OS, templates, etc. These would be a ZFS RAID 1 setup, provisioned to 400 GB during the install/setup, but I didn't know if I would be better off running spinning drives for the OS/install and putting the DC500M drives on VM duty instead.
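To put the 1139 TBW rating in perspective, here is the back-of-the-envelope endurance math I've been doing. The daily write volume and the write amplification factor below are placeholder assumptions (I'd plug in whatever the sampling script above actually measures), not anything from Kingston's spec sheet beyond the TBW figure.

```python
# Back-of-the-envelope endurance estimate for one DC500M 480 GB drive.
TBW_RATED = 1139           # terabytes written, Kingston's rated endurance
DAILY_WRITES_GB = 50       # assumption: guest-level writes from host + monitoring VMs
WRITE_AMPLIFICATION = 3    # assumption: rough factor for ZFS mirror + sync writes

effective_gb_per_day = DAILY_WRITES_GB * WRITE_AMPLIFICATION
years = (TBW_RATED * 1000) / effective_gb_per_day / 365
print(f"~{years:.0f} years to reach {TBW_RATED} TBW at {effective_gb_per_day} GB/day")
```

With those made-up numbers it works out to roughly two decades, which is why I'm trying to figure out whether the repeated writes land on the OS disks or the VM disks before I decide where to spend the SSD budget.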
For a little more info, this server will take over from the existing PVE server and run multiple VMs for SNMP/health monitoring tools, plus other Windows/Linux hosts for testing.
E.g. SmokePing, LibreNMS, OpenNMS, Xymon, a syslog server, Pi-hole, an Apache intranet webserver, a Windows 10 test box, and possibly a Minecraft server for some on-the-side fun (that one would be on its own SSD).
R710
2x Xeon E5645 (hex core)
96 GB RAM
H310 RAID controller (IT mode)
Temporarily: 2x 1TB Seagate 7200 rpm ZFS RAID 1 pool