Hi everyone,
This is my first forum post here, so I hope it's not against any rules.
We are currently in the process of setting up a new Proxmox environment to replace our vSphere setup. This obviously also includes Proxmox support.
We're planning a completely new environment with PVE, PBS, Ceph, 100G Ethernet, and new firewalls.
The current plan looks like this:

We are starting with five Dell R7715 servers with 1TB RAM each. Each server has 4x100Gbit/s and 4x25Gbit/s Ethernet interfaces.
The host OS will sit on a 500GB SSD RAID1. For Ceph storage, each server has 8x8TB NVMe SSDs.
On the network side, we plan to connect each server with multiple links to multiple switches. We do not have MLAG.
Each Proxmox function (Ceph, backup, management, Corosync, etc.) gets its own VLAN. The Corosync VLANs are not shared between switches.
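To make the plan a bit more concrete, here's roughly what we have in mind for one host's /etc/network/interfaces (interface names, VLAN IDs, and addresses below are placeholders; active-backup bonding because we don't have MLAG):

```
# One link per switch in an active-backup bond (no MLAG, so no LACP across switches)
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode active-backup
    bond-miimon 100

# Ceph traffic on its own VLAN (ID 40 is a placeholder), jumbo frames
auto bond0.40
iface bond0.40 inet static
    address 10.0.40.11/24
    mtu 9000

# Management bridge for the PVE GUI/API on its own VLAN (ID 10 is a placeholder)
auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0

# Corosync ring 0 on an unbonded 25G link, VLAN local to one switch
auto enp1s0f0.50
iface enp1s0f0.50 inet static
    address 10.0.50.11/24
```

The idea would be to give Corosync its own physical links and per-switch VLANs rather than running it over the bond.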
My main question is: would the network setup shown in the image above work, and is it best-practice "compliant", or are there any improvements we could implement?