Proxmox network

John Wick

Member
Apr 25, 2017
Hi,

We plan to separate the networks between Proxmox management (vmbr0), the Ceph cluster, and the VM guests. We have 2 quad NICs for the VM guests, 1 quad NIC for the Ceph cluster, and 2 built-in NICs for vmbr0.

How do we achieve this? For Proxmox and the Ceph cluster it is clear to me, but for the VM guests it is not. The VM guest network will use VLANs because we plan to use LACP. Please advise.

Thanks.
 
Can I set it up as below?
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 4

auto bond0.100
iface bond0.100 inet manual
        vlan-raw-device bond0

auto bond0.200
iface bond0.200 inet manual
        vlan-raw-device bond0

auto vmbr0
iface vmbr0 inet static
        address  192.168.0.2
        netmask  255.255.255.0
        gateway  192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0.100 bond0.200
        bridge_stp off
        bridge_fd 0
 
Hi John,
you should create network-related threads in the Network & Firewall section.
What do you mean by 1 quad NIC and 2 quad NICs?

You can assign the individual NICs to each of the mentioned domains.
Here is how I manage it; maybe it makes things clearer for you:
Code:
cat interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet static
        address  10.10.2.11
        netmask  255.255.255.0
#GlusterFS

auto eth3
iface eth3 inet static
        address  10.10.1.11
        netmask  255.255.255.0
#Cluster-Communication

auto vmbr0
iface vmbr0 inet static
        address  192.168.0.11
        netmask  255.255.255.0
        gateway  192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
#Management

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
#ENPS
 
I have no experience with VLAN configuration directly on Linux hosts. The easiest way is to set up the VLANs globally on the gateway (e.g. a layer 3 switch). Then you can easily tag the VLANs per VM in the PVE web interface.
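For example, a minimal sketch of setting the tag per VM from the CLI (VM ID 100 and tag 100 are just placeholders; this does the same as the VLAN Tag field in the web interface):
Code:
# Attach the VM's first NIC to vmbr1 and tag it with VLAN 100 (placeholder values)
qm set 100 -net0 virtio,bridge=vmbr1,tag=100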
 
I have 2 VLANs already set up on a Cisco 3560. Both VLANs are for the VMs' public IPs. On our current KVM setup we use bonding for the VM guests. What I mean by quad NIC is a single network card with 4 ports; these ports we bond with 802.3ad. We don't want to NAT the VM guests; we plan to use a bridge and VLANs. For the VM guests we use 2 NICs (eth1 and eth2) to make a bridge and VLANs (not successful yet).

We have 4 Ceph servers. Each Ceph node has a quad NIC (4 ports on a single card). We want to bond the 4 ports into a single interface, so that if Ceph rebuilds it can use the whole card.
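A rough sketch of that Ceph bond (eth4-eth7 and the 10.10.10.x address are assumptions; the switch ports would need a matching LACP LAG):
Code:
# 4-port LACP bond carrying the Ceph network (NIC names and address are placeholders)
auto bond1
iface bond1 inet static
        address  10.10.10.11
        netmask  255.255.255.0
        slaves eth4 eth5 eth6 eth7
        bond_miimon 100
        bond_mode 4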
 
We are trying to add VLANs on top of the bond and then bridge them. First I create the bond, then I put the VLAN ID on the bond and set up the bridge. A bridge using one bonded VLAN is no problem, but when I put bridge_ports bond1.280 bond1.290, the network restart fails. With only bond1.280 the network restart succeeds.
 
Your vmbr1 stanza is misconfigured and will not work as you designed: you can't have two VLANs as bridge ports unless you intend to bridge the two networks together, in which case why bother with separate VLANs at all.

It also doesn't follow your design notes:
We plan to separate the networks between Proxmox (vmbr0), the Ceph cluster, and the VM guests. We have 2 quad NICs for the VM guests, 1 quad NIC for the Ceph cluster, and 2 built-in NICs for vmbr0.

Let's define your networks:

Code:
# Quad NIC iface - guest traffic
# These get plugged into switch ports assigned to an LACP LAG with your guest VLAN untagged
auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 4

# Quad NIC iface - Ceph
# These get plugged into switch ports assigned to an LACP LAG with your Ceph VLAN untagged
auto bond1
iface bond1 inet static
        address x.x.x.x
        netmask y.y.y.y
        slaves eth4 eth5 eth6 eth7
        bond_miimon 100
        bond_mode 4

# Dual NIC iface - management
# These get plugged into switch ports assigned to an LACP LAG with your management VLAN untagged
auto bond2
iface bond2 inet manual
        slaves eth8 eth9
        bond_miimon 100
        bond_mode 4

# Bridges
# Management (the only interface that carries the default gateway)
auto vmbr0
iface vmbr0 inet static
        address x.x.x.x
        netmask y.y.y.y
        gateway x.x.x.z
        bridge_ports bond2
        bridge_stp off
        bridge_fd 0

# Guest
auto vmbr1
iface vmbr1 inet static
        address x.x.x.x
        netmask y.y.y.y
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
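
If you do need to keep the two guest VLANs tagged on the bond instead of untagging them on the switch, the usual pattern is one bridge per VLAN rather than two VLANs in one bridge (a sketch reusing the bond0.100/bond0.200 names from your first post; vmbr1/vmbr2 here would replace the untagged guest bridge above):
Code:
# One bridge per tagged VLAN on top of the guest LACP bond
auto bond0.100
iface bond0.100 inet manual
        vlan-raw-device bond0

auto bond0.200
iface bond0.200 inet manual
        vlan-raw-device bond0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0.100
        bridge_stp off
        bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge_ports bond0.200
        bridge_stp off
        bridge_fd 0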
 
I want to use VLANs like in our current KVM setup, where we put the VLANs on the bond. Maybe Debian-based works the same as Red Hat. If I put ethX directly into a VLAN, I cannot use LACP, right?
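(For context: the VLAN sub-interfaces go on top of the bond, not on the individual ethX ports, so LACP still works. Another option, if the PVE version supports it, is a VLAN-aware bridge on the bond, so the tag is set per VM instead of per bridge; a minimal sketch with assumed names:)
Code:
# VLAN-aware bridge on top of the LACP bond; the VLAN tag is then set on each VM NIC
auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes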
 
