Poll: Ceph vs NAS (NFS) vs ZFS over iSCSI for VM Storage

joshin
Renowned Member · Jul 23, 2013 · Phoenix, Arizona
Question for the hive mind.

I have 6x 960GB Samsung SSD (853t & pm963) drives 'left over' from an upgrade to bigger drives, and wish to use them for shared storage of fairly low I/O virtual machines.
1) All 6 Drives on NFS share from a FreeNAS (or similar dedicated server) with 96GB RAM + 2x 10GbE
2) A Ceph install with 5 drives - 1 per Proxmox server
3) A Proxmox server with all 6 drives attached, serving ZFS over iSCSI - same 2x 10GbE networking
4) ?

What do you all think?
 
wolfgang

Hi,

2) A Ceph install with 5 drives - 1 per Proxmox server
You need a minimum of 3 servers and 9 disks in total, so I would say this is not an option.
1) All 6 Drives on NFS share from a FreeNAS (or similar dedicated server) with 96GB RAM + 2x 10GbE
This is OK, and less complex.
3) A Proxmox server with all 6 drives attached, serving ZFS over iSCSI - same 2x 10GbE networking
This is more complex, but gives more performance.
 
Thanks @wolfgang.

For Ceph:
I think you're mistaken about the minimum number of OSDs for Ceph. Nine might be the minimum for a decently performing pool of spinning rust, but as far as I know you can make a single OSD into an available pool (without replication).
And it was a bit subtle, but I did say I'd have 5 servers with OSDs - three also acting as monitors.
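
For a rough sense of what that layout would yield, here's a back-of-the-envelope sketch in Python. The 960 GB drive size is from my first post; the 3x replication factor is just Ceph's default replicated pool size, not something the setup forces:

Code:
# Back-of-the-envelope usable capacity for the proposed layout:
# 5 hosts x 1 OSD each, 960 GB per OSD (the leftover Samsung SSDs),
# assuming the usual 3x replicated pool (Ceph's default size).
osd_count = 5
osd_size_gb = 960
replicas = 3

raw_gb = osd_count * osd_size_gb
usable_gb = raw_gb / replicas
print(f"raw: {raw_gb} GB -> usable at {replicas}x replication: {usable_gb:.0f} GB")
# raw: 4800 GB -> usable at 3x replication: 1600 GB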

Does that change your opinion?
 
Sure, you can use one OSD per server, but a good starting point is 3 hosts with 3 OSDs each. Also, as you grow the cluster, you need to think about recovery of a single disk or a whole host, so that re-replicating its data does not fill up the remaining OSDs.
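
To put rough numbers on that (a sketch only; it assumes the 5x 960 GB one-OSD-per-host layout from this thread, perfectly even data distribution, and Ceph's default 0.85 nearfull warning ratio):

Code:
# Rough recovery-headroom check: if one OSD (= one host here) dies,
# Ceph re-replicates its data onto the 4 surviving OSDs.
osd_count = 5
osd_size_gb = 960
nearfull_ratio = 0.85  # Ceph's default nearfull warning ratio

def fill_after_failure(cluster_fill):
    """Average fill of surviving OSDs after one OSD fails,
    assuming data was spread evenly before the failure."""
    total_data_gb = cluster_fill * osd_count * osd_size_gb
    return total_data_gb / ((osd_count - 1) * osd_size_gb)

for fill in (0.50, 0.60, 0.70):
    after = fill_after_failure(fill)
    flag = "over nearfull!" if after > nearfull_ratio else "ok"
    print(f"cluster {fill:.0%} full -> survivors at ~{after:.0%} ({flag})")

With one OSD per host, losing a host at 70% cluster fill pushes the survivors to about 88%, past the 85% warning, so you would want to keep such a pool under roughly two-thirds full to ride out a host failure.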