Hi everyone. I recently set up a new box acting as a ZFS storage box (10 x 5TB 7200RPM drives) plus a VM server (2 x 480GB SSDs). Everything has been great, and I will be setting up a similar box soon to do ZFS replication and host additional VMs.
Anyway, this is the first time I've configured Proxmox to use more than one network. Basically, I have 8 NICs. I want 4 of them to be used only for Proxmox management and for ZFS NAS traffic. Two NICs will be in an LACP bond for VM (LAN) traffic, and the remaining two will be in an LACP bond for VM (DMZ) traffic.
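Side note: I said LACP, but you'll see my current ovs_options below actually use balance-slb, which as far as I know doesn't do real LACP negotiation. If I move to proper 802.3ad, I'm assuming the bond stanza would look something like this (just a sketch, using bond1 as the example):
Code:
allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bonds eth3 eth7
    ovs_type OVSBond
    ovs_bridge vmbr1
    # assumption: balance-tcp + lacp=active gives real 802.3ad LACP,
    # and the switch ports need a matching LACP port-channel
    ovs_options bond_mode=balance-tcp lacp=active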
Currently, it's all set up and working, except that the 2-NIC bond configured for LAN VM traffic also has an IP assigned that resolves to the host. I don't want this. Basically, I want that bond to only serve VMs and not be accessible to the host at all. How can I do that?
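My guess is that I just need to drop the address from the bridge, something like this (a sketch only; I'm assuming that switching vmbr1 from inet static to inet manual removes the host's IP without affecting the VMs attached to the bridge):
Code:
auto vmbr1
iface vmbr1 inet manual
    # no address/netmask, so the host has no presence on this bridge
    ovs_type OVSBridge
    ovs_ports bond1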
Here is my current interfaces config:
Code:
root@mjolnir:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth2 inet manual
iface eth4 inet manual
iface eth6 inet manual
iface eth1 inet manual
iface eth3 inet manual
iface eth5 inet manual
iface eth7 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1 eth4 eth6
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-slb

allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bonds eth3 eth7
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options bond_mode=balance-slb

allow-vmbr2 bond2
iface bond2 inet manual
    ovs_bonds eth2 eth5
    ovs_type OVSBond
    ovs_bridge vmbr2
    ovs_options bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet static
    address 172.16.1.200
    netmask 255.255.255.0
    gateway 172.16.1.254
    ovs_type OVSBridge
    ovs_ports bond0

auto vmbr1
iface vmbr1 inet static
    address 172.16.1.201
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports bond1

auto vmbr2
iface vmbr2 inet static
    address 172.16.2.202
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports bond2
1) vmbr0/bond0 is for management and ZFS NAS traffic only. Can I prevent guest VMs from even seeing this as a possible NIC? If not, no big deal.
2) vmbr1/bond1 is for LAN VM traffic. How can I make it so the host doesn't respond to ZFS NAS, SSH, or Proxmox web access on it? (If dropping the IP isn't enough, I took a stab at an iptables fallback below.)
3) vmbr2/bond2 is for DMZ VM traffic. Like #2, how can I make it carry only VM traffic, with the host not responding on it?
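In case dropping the bridge IPs doesn't pan out, here's the iptables fallback I had in mind for #2 and #3. This is hypothetical; I'm assuming the Proxmox web UI is on its default port 8006 and that matching the inbound bridge with -i in the INPUT chain works here (any NAS ports like NFS/SMB would need the same treatment):
Code:
# drop SSH and the Proxmox web UI when they arrive via the VM-only bridges
iptables -A INPUT -i vmbr1 -p tcp --dport 22 -j DROP
iptables -A INPUT -i vmbr1 -p tcp --dport 8006 -j DROP
iptables -A INPUT -i vmbr2 -p tcp --dport 22 -j DROP
iptables -A INPUT -i vmbr2 -p tcp --dport 8006 -j DROP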
Thanks in advance!