Hello @all,
we are new to Proxmox. Currently we use Univention Corporate Server to run 15 virtual machines on 3 physical servers, but we lack shared storage and HA. We would therefore like to set up a Proxmox cluster with 5 physical machines: 3 identically configured machines for Ceph and 2 machines for virtualisation.
We have read a lot of posts in the forum, the wiki and the docs, and think we have enough background to start setting things up. We would like to ask whether our network setup is suitable for the goals of high availability and redundancy.
Attached you will find a diagram of our network setup:
- 2 separate 1 GbE networks for Corosync ring 0 and ring 1, each with its own switch; one of these networks is also used for management (external access to the Proxmox web interfaces and lights-out management) (see the corosync.conf sketch after this list)
- 2 separate 10 GbE networks as Ceph public network, with separate switches and bonding
- 2 separate 10 GbE networks as Ceph cluster network, with separate switches and bonding (see the ceph.conf sketch after this list)
- 1 separate 1 GbE network for accessing the virtual machines from the outside (DMZ / intranet)
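
For illustration, here is a minimal sketch of the totem section we have in mind for the two Corosync rings in /etc/pve/corosync.conf (cluster name and subnets are placeholders, not our real addresses):

```
totem {
  version: 2
  cluster_name: pve-cluster        # placeholder name
  rrp_mode: passive                # redundant ring protocol (RRP) mode
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.10.0      # 1 GbE network A (also used for management)
  }
  interface {
    ringnumber: 1
    bindnetaddr: 192.168.20.0      # 1 GbE network B
  }
}
```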
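
Similarly, a sketch of how we would declare the Ceph networks in ceph.conf on the three storage nodes (again, the subnets are placeholders; each network sits on its own bonded pair of 10 GbE links):

```
[global]
    # traffic between clients/VMs and the OSDs/monitors
    public network  = 10.10.30.0/24
    # OSD replication / heartbeat traffic
    cluster network = 10.10.40.0/24
```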
Questions:
- Is this suitable for redundancy?
- Is this suitable for good performance?
- Is the selected bond_mode (balance_rr) OK for a configuration with separate switches, and does it also give good performance? (A sketch of the bond definition we have in mind follows below.)
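
To make the last question concrete, this is roughly the bond definition we are testing in /etc/network/interfaces for one of the Ceph networks (NIC names and the address are placeholders):

```
auto bond0
iface bond0 inet static
    address   10.10.30.11
    netmask   255.255.255.0
    bond-slaves ens1f0 ens1f1    # the two 10 GbE ports, one per switch
    bond-miimon 100              # link monitoring interval in ms
    bond-mode   balance-rr       # the bond_mode in question
```

If balance-rr turns out to be problematic across two independent (non-stacked) switches, we could fall back to active-backup, but then we would obviously lose the aggregated bandwidth.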
Thanks for your suggestions!
Best regards,
Mario Minati