I believe I am at the point where I am done with basic testing/"tinkering" with Proxmox, and I really like it.
All my hardware works well and is fast, clustering works well, PCI-e and USB passthrough is awesome, and OVS is working OK.
I basically just want a 3-node cluster for management purposes, with each node running different VMs for specific things. I don't think I really need HA, as nothing is critical enough to mess with it, but DRBD9 looks interesting. If a node goes down, I have backups, so the world won't end if I don't implement HA. I am also not interested in using my NAS for shared storage, since each node has good hardware.
I wanted to use ZFS for everything, but during my testing it was extremely slow (73.58 FSYNCS/SECOND) compared to using the RAID controllers (1500+ FSYNCS/SECOND). I only have consumer SSDs (Samsung 850 Evo) that I could use for cache and log. Uploading a Debian 8 ISO sucked up 8GB of RAM. With limited RAM, I figured it isn't worth it, as the RAID controllers do a good job. I may or may not test again on one node.
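For anyone curious where those FSYNCS/SECOND numbers come from: they are what `pveperf` reports for a given mount point. A rough stand-alone equivalent (a sketch, not pveperf itself) is just a loop of write-then-fsync against the storage you want to test:

```python
import os
import tempfile
import time

def fsyncs_per_second(path=".", duration=1.0):
    """Repeatedly write one 4KB block and fsync it, mimicking the
    FSYNCS/SECOND metric that pveperf reports for a mount point."""
    fd, name = tempfile.mkstemp(dir=path)
    try:
        buf = b"\0" * 4096
        count = 0
        start = time.time()
        while time.time() - start < duration:
            os.pwrite(fd, buf, 0)   # overwrite the same block
            os.fsync(fd)            # force it to stable storage
            count += 1
        return count / (time.time() - start)
    finally:
        os.close(fd)
        os.unlink(name)

# Point `path` at the pool/volume under test, e.g. "/rpool/data"
print(fsyncs_per_second(path="."))
```

Running this (or just `pveperf /some/mountpoint`) on a ZFS dataset versus the hardware-RAID LVM volume is how I compared the two.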
Node1 (LAN VMs):
AMD 8 core CPU
16GB RAM
IBM ServeRAID M5015
- PVE on 2x 500GB (RAID1)
- 6x 1TB (RAID10) 2.7TB
split storage LVM (1.3TB for VM Disks and 1.4TB for Backups)
6x 1Gb Intel NICs (presently using 1 for management and 1 for corosync)
Node2 (WAN VMs):
AMD 8 core CPU
32GB RAM
IBM ServeRAID M5015
- PVE on 2x 500GB (RAID1)
- VM Disks 6x 500GB (RAID10) 1.3TB
6x 1Gb Intel NICs (presently using 1 for management and 1 for corosync)
Node3 (Testing VMs):
AMD 8 core CPU
32GB RAM
IBM ServeRAID M5015
- PVE on 2x 500GB (RAID1)
- VM Disks 6x 500GB (RAID10) 1.3TB
6x 1Gb Intel NICs (presently using 1 for management and 1 for corosync)
Node4 (Quorum only):
AMD 2 core CPU
8GB RAM
PVE on 2x 500GB (RAID1)
3x 1Gb Intel NICs (presently using 1 for management and 1 for corosync)
FreeNAS Server (physical):
NFS for ISO share
NFS for long term backups
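For reference, this is roughly how I plan to define the FreeNAS shares in `/etc/pve/storage.cfg` (storage IDs, IP, and export paths below are placeholders for my actual values):

```
nfs: freenas-iso
        server 192.168.1.50
        export /mnt/tank/iso
        path /mnt/pve/freenas-iso
        content iso

nfs: freenas-backup
        server 192.168.1.50
        export /mnt/tank/backup
        path /mnt/pve/freenas-backup
        content backup
```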
Questions:
- Does setup make sense as a basic cluster scenario? Suggestions, words of wisdom/caution?
- With this setup I assume that I can easily have Node2 and Node3 use the "Backup" storage space on Node1 to perform and store backups?
- Is DRBD9 stable enough to use with this setup, or is it a bad idea from a performance standpoint due to having only 1Gb NICs? I may not worry about using DRBD9 at all; I'm just asking for an opinion.
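For context on the second question: my understanding is that Node1's "Backup" LVM space is local storage, so Node2 and Node3 can't write to it directly; the plan would be to export it from Node1 over NFS and add it as cluster storage, something like (paths, subnet, and storage ID are hypothetical):

```
# /etc/exports on Node1
/mnt/node1-backup  192.168.1.0/24(rw,sync,no_root_squash)

# /etc/pve/storage.cfg (cluster-wide)
nfs: node1-backup
        server 192.168.1.11
        export /mnt/node1-backup
        path /mnt/pve/node1-backup
        content backup
        nodes node2,node3
```

Please correct me if there is a simpler way to do this within Proxmox itself.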
Thank you in advance for any responses.