Best way to network for Proxmox
I was using ESXi 7 and switched to Proxmox, but I'm not sure of the best way to approach the network setup.
Please try to describe the setup you are going to build: what tasks/services will you implement? Do you have a budget? Is it for a company or just a hobby?
There is no single answer to your (implied) question, as setups range from zero effort (just a single switch with 1 GBit/s) up to redundant 100 GBit/s networks with a multitude of VLANs, costing five- or even six(!)-digit dollar amounts...
One possible setup for a homelab without redundancy: buy a managed switch. This is the basic requirement for working with VLANs in your house. You want these virtual LANs so you can run separate networks for segments like internet/dmz/wlan1/wlan2/guests/servers/printers/media/iot/nas/xyz - each representing an independent network (usually one network per segment, though that is not a hard rule). VLANs are logically separated but can travel over one single wire. Now in PVE build bridges, one for each separate network, and use these bridges for the VMs in those networks (see the sketch after the link below). This approach requires a router with dedicated rules defining which traffic is allowed and which is forbidden from/to each network.
Look here: https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_vlan
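To make that concrete, here is a minimal sketch of /etc/network/interfaces with one bridge per VLAN. The NIC name (enp1s0), the VLAN IDs and the addresses are only placeholders for illustration - adjust them to your own hardware and numbering plan:

[code]
auto lo
iface lo inet loopback

iface enp1s0 inet manual

# Bridge for the "servers" segment, carried as VLAN 10 on the trunk port
auto vmbr10
iface vmbr10 inet static
        address 192.168.10.2/24
        bridge-ports enp1s0.10
        bridge-stp off
        bridge-fd 0

# Bridge for the "iot" segment, carried as VLAN 20 (no host IP needed here)
auto vmbr20
iface vmbr20 inet manual
        bridge-ports enp1s0.20
        bridge-stp off
        bridge-fd 0
[/code]

Alternatively you can use a single VLAN-aware bridge ("bridge-vlan-aware yes") and set the VLAN tag on each VM's virtual NIC instead; which of the two you prefer is mostly a matter of taste, both are covered in the wiki page linked above.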
PS: If you build a cluster, it is recommended to give corosync its own dedicated wire, not just a VLAN running over the same physical connection.
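For illustration only (NIC name and subnet are placeholders), such a dedicated corosync link can simply be a second NIC with its own address and no bridge on top:

[code]
# Dedicated NIC for corosync cluster traffic, on its own small subnet, no bridge
auto enp2s0
iface enp2s0 inet static
        address 10.10.10.1/24
[/code]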
I have a 10G backbone, but I don't get anywhere close to even 1 gig. Do you know why?
Please post your network configuration inside [code]...[/code] tags.

[quote]I started to use SDN for the VLAN setup.[/quote]
Unfortunately I have zero experience with SDN - the feature set of "classic" networking is just sufficient for me. So for SDN I am... out, sorry.
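Regarding the 10G throughput: hard to say without seeing the configuration, but as a first sanity check (assuming iperf3 is installed on both nodes and the NIC is called enp1s0 - adjust to your hardware) you could verify the negotiated link speed and the raw TCP throughput between two hosts:

[code]
# Check what speed the NIC actually negotiated (should report 10000Mb/s)
ethtool enp1s0 | grep Speed

# On the receiving node: start an iperf3 server
iperf3 -s

# On the sending node: run a test against the receiver (IP is a placeholder)
iperf3 -c 192.168.10.2
[/code]

If iperf3 already shows far below 10 GBit/s, the problem is on the network/NIC side (cabling, SFP+ modules, negotiation); if iperf3 looks fine but your real workload is slow, look at storage or the VM's virtual NIC (virtio) instead.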