I have been using Synology and VMM for years, recently outgrew their prosumer hardware, got myself an R730, and have been learning Proxmox for the last few weeks. From experience, anything tied to the file system, RAID layout, and overall architecture is very hard to change down the track, and some of it is practically prohibitive to change.
I'm hoping the gurus here can help validate my high-level plan. My goals are:
- One VM with 99% uptime; the other VMs are not critical. The 99% VM is about 200 GB currently, grows 50~100 GB per year, and runs WordPress in Docker on Ubuntu.
- 99.9% network uptime, with a primary WAN plus a backup WAN on a 4G mini PCIe card.
- A relatively safe space to tinker, where stupid mistakes don't affect the above, well, as much as possible.
The hardware I have:
- Main node: R730 with 4x Intel SATA SSDs, more compute power than I'll ever need
- 2nd node: not here yet; thinking of an R730XD LFF, or building my own i3~i5-class box with 6~8x SATA bays
- 3rd node: Intel N100 mini PC, 8 GB RAM / 128 GB storage, 4x 2.5G ports
- A couple of Synology NAS units
My setup plan; please correct me where I'm wrong.
The main node will have a RAID-Z1 pool on the SSDs and run most of the VMs most of the time, with HA based on ZFS replication for certain VMs, and an HA group to restrict those VMs to the main node and the 2nd node.
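If I understand the docs right, this is roughly what that looks like on the CLI (it can all be done in the GUI too). This is only a sketch; the node names node1/node2 and VMID 100 are placeholders for my actual setup:

```
# Placeholders: cluster nodes "node1" (R730) and "node2", critical VM has VMID 100.

# Replicate VM 100's disks from node1 to node2 every 15 minutes
# (same as Datacenter -> Replication in the GUI).
pvesr create-local-job 100-0 node2 --schedule '*/15'

# Restricted HA group so the VM only ever runs on node1 or node2,
# preferring node1 (higher priority number wins).
ha-manager groupadd main-pair --nodes "node1:2,node2:1" --restricted 1

# Put the VM under HA management inside that group.
ha-manager add vm:100 --state started --group main-pair
```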
The 2nd node will pretty much be a standby failover for the main node, so I'm thinking a few 3.5" NAS drives in RAID-Z2. In addition, I'm thinking of running:
- Blue Iris on a separate ZFS pool, not critical at all
- Proxmox Backup Server in a VM to back up the main node's VMs, also not critical (rough registration sketch below)
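If I go the PBS-in-a-VM route, my understanding is that wiring it up to the cluster is just registering it as a storage target. A rough sketch, with placeholder address, datastore name, and credentials:

```
# Placeholders: PBS VM on node2 reachable at 192.168.1.50, datastore "vmbackups".
# Same as Datacenter -> Storage -> Add -> Proxmox Backup Server in the GUI.
pvesm add pbs pbs-node2 \
    --server 192.168.1.50 \
    --datastore vmbackups \
    --username backup@pbs \
    --fingerprint <PBS-CERT-FINGERPRINT> \
    --password <PBS-USER-PASSWORD>
```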
Originally I was planning to run pfSense bare metal on the N100, but I figured I might be able to virtualize pfSense there and have that box also provide the 3rd Proxmox quorum vote: two birds with one stone, at the cost of extra setup complexity.
The old Synology can be repurposed with Active Backup for Business to further back up some of the more critical VMs.
Is my understanding below correct?
Ceph would not be workable with my totally asymmetric setup, since it would want roughly the same pool capacity on each node, and the SSD vs HDD speed difference can cause issues with Ceph's synchronization. NFS would create a single point of failure, which leaves ZFS replication as my best option.
I can host PBS either on the Synology or on Proxmox; practically there's not much difference either way, so I might as well put it where the most drive space is, which will be the 2nd Proxmox node.
From what I've learned, an SSD node plus a spinning-rust node should work reliably for VM migration with ZFS replication, but I'm unsure whether it will be reliable to virtualize pfSense and have that node double as the quorum provider for my micro cluster. My gut feeling is that this would be too good to be true. Perhaps I should just leave pfSense out of the cluster.
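If leaving pfSense (and the N100) out of the cluster is the safer call, my understanding is the N100 could still break ties as an external corosync QDevice rather than a full 3rd node. A rough sketch, with the N100's address as a placeholder:

```
# On the N100, outside the cluster:
apt install corosync-qnetd

# On both cluster nodes (node1 and node2):
apt install corosync-qdevice

# From one cluster node, point the cluster at the N100 (10.0.0.3 is a placeholder):
pvecm qdevice setup 10.0.0.3

# Check expected/total votes afterwards:
pvecm status
```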
Expanding on that, there are many challenges, like those below, but it would be super cool if this could be pulled off. Would it be reliable (not just possible) to spin up another pfSense VM on Node 1 as the passive peer via pfSense's built-in HA, not Proxmox HA?
- The setup complexity (chance for error) is on another level.
- The 4G mini PCIe WAN can't be synced across the active and passive instances, unless it goes into its own modem with VLAN switching.
- Even if all of the above is resolved, my WAN switches would become the single point of failure, even though switches are far more stable in general.