One NIC with multiple IPs

dmnk68

New Member
Oct 9, 2020
Hello Proxmox Community,

I have a similar problem as here:
https://forum.proxmox.com/threads/one-nic-but-need-multiple-ips.5088/
I have one network port and multiple IP addresses in the same local subnet.
I need to give each VM a separate IP so that they look like independent servers, but they must reach the network through the host system's gateway (not use their own MACs).

I'm trying to use Open vSwitch for this, but I can't find suitable configuration examples.
My current settings look like this:
Code:
auto lo
iface lo inet loopback
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmip0
iface vmip0 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vmip0

But I can't get any further. When I try to assign a separate IP to a VM (for example 10.48.33.45), it doesn't work: 10.48.33.45 is reachable from the host system console, but the VM has no working connectivity to the external network in either direction.

[Screenshots attached: 11.png, 12.png]

I know that it's possible to use VLAN tags (like "ovs_options tag=01 trunks=1,2,3,4") in some way, but how would that apply in this situation?
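For reference, this is roughly the routed setup I have in mind, written with a plain Linux bridge for simplicity and loosely based on the "Routed Configuration" example in the Proxmox admin guide, but adapted to my addresses and to guest IPs in the same subnet. It is only a sketch: the per-VM /32 route and the proxy ARP on both interfaces are my own assumptions, and I haven't gotten it to work yet.
Code:
auto lo
iface lo inet loopback

# Uplink carries the host address and default gateway; proxy ARP lets the
# host answer ARP queries from the LAN for the guest addresses.
auto eno1
iface eno1 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

# Guest-facing bridge with no physical port: traffic is routed, not bridged,
# so only the host's MAC ever appears on the wire.
auto vmbr0
iface vmbr0 inet static
    address 10.48.33.12/32
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
    # one host route per guest IP (10.48.33.45 is my test VM)
    up ip route add 10.48.33.45/32 dev vmbr0

The idea would be that the VM keeps a completely normal configuration (10.48.33.45/24, gateway 10.48.33.1) while the host answers ARP on both sides and forwards in between.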
 
hey,
you could add more IPs on an interface within an LXC container by manually editing the network config of the container.
Note that you have to disable the Proxmox provisioning of that file so that your static config is kept.

I think you don't even need OVS; you can use plain Linux bridges.
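Very roughly, something like this inside the container (just a sketch; the second address is an arbitrary example, and if I remember correctly, creating /etc/network/.pve-ignore.interfaces inside the container keeps Proxmox from overwriting the file):
Code:
# inside the container: touch /etc/network/.pve-ignore.interfaces
# then edit /etc/network/interfaces:
auto eth0
iface eth0 inet static
    address 10.48.33.45/24
    gateway 10.48.33.1
    # additional address on the same interface (example value)
    post-up ip addr add 10.48.33.46/24 dev eth0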
 
I've already tried using a regular Linux bridge as well as a simple OVS bridge, but without success. Both configurations are below:

Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

------------------------------
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

allow-ovs vmbr0
iface vmbr0 inet manual
    address 10.48.33.12/32
    gateway 10.48.33.1
    ovs_type OVSBridge
    ovs_ports eno1

The VM settings are listed above; I haven't changed them. I don't need multiple IP addresses for a single VM, so there is no need to change the VM's settings behind Proxmox's back.
The host system (10.48.33.12) works, but the VM (10.48.33.45) does not receive packets and has no external access.
From inside the VM, the host system and the router respond to ping, but other connections fail with "No route to host".
[Screenshot attached: 13.png]
Code:
# tcpdump -i eno1 -ne -c 50 'host 10.48.33.45 and ip proto \tcp'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes
17:45:10.439295 00:de:fb:ba:09:42 > 4e:47:65:9c:3b:ee, ethertype IPv4 (0x0800), length 66: 10.48.42.125.50395 > 10.48.33.45.22: Flags [S], seq 3766904334, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
17:45:16.439679 00:de:fb:ba:09:42 > 4e:47:65:9c:3b:ee, ethertype IPv4 (0x0800), length 62: 10.48.42.125.50395 > 10.48.33.45.22: Flags [S], seq 3766904334, win 65535, options [mss 1460,nop,nop,sackOK], length 0
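
Judging by this capture, the inbound SYNs reach eno1 but nothing ever comes back. To narrow down where they get lost, I'm also going to capture on the guest-facing port and inside the guest, roughly like this (the host-side device name follows the usual tap<vmid>i<n> / veth<vmid>i<n> convention; 100 is just an example VM ID):
Code:
# on the host: does the traffic make it through the bridge to the guest port?
tcpdump -i tap100i0 -ne 'host 10.48.33.45'
# inside the guest: does anything arrive at all?
tcpdump -i eth1 -ne 'host 10.48.42.125'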
 
My experiments have still been unsuccessful. I tried both versions of the bridge (the configurations in the previous post).
They follow quite closely the two basic guides for Open vSwitch:
https://pve.proxmox.com/wiki/Open_vSwitch
https://github.com/openvswitch/ovs/blob/master/debian/openvswitch-switch.README.Debian
But it doesn't work.
In particular, there is a strange mismatch in the configuration options. The manual says:
"Any interfaces (Physical, OVSBonds, or OVSIntPorts) associated with a bridge should have their definitions prefixed with "allow-$brname $iface", e.g. allow-vmbr0 bond0"
But when I write this ("allow-vmbr0 eno1") in /etc/network/interfaces, the network completely stops working (rebooting doesn't help). Only the "auto eno1" line works.
Full configuration:
Code:
auto lo
iface lo inet loopback

auto eno1
#allow-vmbr0 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

allow-ovs vmbr0
iface vmbr0 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    ovs_type OVSBridge
    ovs_ports eno1


I get completely uninformative lines in syslog:

Code:
Oct 13 15:39:25 vds ovs-ctl[1341]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 13 15:39:25 vds ovs-ctl[1341]: Creating empty database /etc/openvswitch/conf.db.
Oct 13 15:39:25 vds ovs-ctl[1341]: Starting ovsdb-server.
Oct 13 15:39:25 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.0.0
Oct 13 15:39:25 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.12.0 "external-ids:system-id=\"ac144b7b-4c49-49b1-adb2-63a2932c5113\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"debian\"" "system-version=\"10\""
Oct 13 15:39:25 vds ovs-ctl[1341]: Configuring Open vSwitch system IDs.
Oct 13 15:39:25 vds kernel: [   18.317192] openvswitch: Open vSwitch switching datapath
Oct 13 15:39:25 vds ovs-ctl[1341]: Inserting openvswitch module.
Oct 13 15:39:25 vds ovs-ctl[1341]: Starting ovs-vswitchd.
Oct 13 15:39:25 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=vds.***********
Oct 13 15:39:25 vds ovs-ctl[1341]: Enabling remote OVSDB managers.
Oct 13 15:39:25 vds systemd[1]: Started Open vSwitch Internal Unit.
Oct 13 15:39:25 vds systemd[1]: Reached target Network (Pre).
Oct 13 15:39:25 vds systemd[1]: Starting Open vSwitch...
Oct 13 15:39:25 vds systemd[1]: Started Proxmox VE Login Banner.
Oct 13 15:39:25 vds openvswitch-switch[1414]: ovsdb-server is already running.
Oct 13 15:39:25 vds openvswitch-switch[1414]: ovs-vswitchd is already running.
Oct 13 15:39:25 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=vds.***********
Oct 13 15:39:25 vds openvswitch-switch[1414]: Enabling remote OVSDB managers.
Oct 13 15:39:26 vds kernel: [   19.427596] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Oct 13 15:39:27 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-br vmbr0
Oct 13 15:39:27 vds systemd-udevd[988]: Using default interface naming scheme 'v240'.
Oct 13 15:39:27 vds systemd-udevd[988]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Oct 13 15:39:27 vds systemd-udevd[988]: Could not generate persistent MAC address for ovs-system: No such file or directory
Oct 13 15:39:27 vds kernel: [   19.949901] device ovs-system entered promiscuous mode
Oct 13 15:39:27 vds systemd-udevd[988]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Oct 13 15:39:27 vds systemd-udevd[988]: Could not generate persistent MAC address for vmbr0: No such file or directory
Oct 13 15:39:27 vds kernel: [   20.043227] device vmbr0 entered promiscuous mode
Oct 13 15:39:27 vds systemd[1]: Started Open vSwitch.
Oct 13 15:39:27 vds systemd[1]: Starting Network initialization...
Oct 13 15:39:27 vds networking[1556]: networking: Configuring network interfaces
Oct 13 15:39:28 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-br vmbr0 -- --if-exists clear bridge vmbr0 auto_attach controller external-ids fail_mode flood_vlans ipfix mirrors netflow other_config protocols sflow -- --if-exists clear interface vmbr0 mtu_request external-ids other_config options
Oct 13 15:39:28 vds ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port vmbr0 eno1 -- --if-exists clear port eno1 bond_active_slave bond_mode cvlans external_ids lacp mac other_config qos tag trunks vlan_mode -- --if-exists clear interface eno1 mtu_request external-ids other_config options -- set Port eno1 tag=12
Oct 13 15:39:28 vds kernel: [   20.816511] device eno1 entered promiscuous mode
Oct 13 15:39:28 vds systemd[1]: Started Network initialization.
Oct 13 15:39:28 vds systemd[1]: Reached target Network.
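
Since syslog says so little, I'm also checking the OVS state directly; at least these commands show whether the bridge and port were actually created and whether OVS reports any interface error:
Code:
# bridges, ports and interfaces known to OVS
ovs-vsctl show
# OpenFlow view of the bridge (port numbers, link state)
ovs-ofctl show vmbr0
# interface-level errors, if any
ovs-vsctl list interface eno1 | grep -i error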

I would be very grateful for your help; it would be useful not just for me, since this is a typical configuration.
Please note that the example from the Proxmox manual didn't work for me:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_default_configuration_using_a_bridge
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

With my addresses, the same thing doesn't work!
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
 
I'll add some information about the network inside the VM.
The configuration in use:
Code:
auto lo
iface lo inet loopback

auto eno1
#allow-vmbr0 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

allow-ovs vmbr0
iface vmbr0 inet static
    address 10.48.33.12/24
    gateway 10.48.33.1
    ovs_type OVSBridge
    ovs_ports eno1
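
As an aside, instead of the post-up echo lines I assume the same kernel settings could be made persistent with sysctl, for example (the file name is arbitrary):
Code:
# /etc/sysctl.d/99-routing.conf, applied with "sysctl --system"
net.ipv4.ip_forward = 1
net.ipv4.conf.eno1.proxy_arp = 1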

The VM is accessible from the host system console, but it is not accessible from external hosts and does not have external access.
Code:
root@vds:~# ssh root@10.48.33.45
Welcome to Vds1, TurnKey GNU/Linux 16.0 (Debian 10/Buster)

root@vds1 ~# curl 10.48.33.71
curl: (7) Failed to connect to 10.48.33.71 port 80: No route to host

root@vds1 ~# ip route show
default via 10.48.33.1 dev eth1 onlink
10.48.33.0/24 dev eth1 proto kernel scope link src 10.48.33.45

root@vds1 ~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:e0:77:3c:5e:83 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.48.33.45/24 brd 10.48.33.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::8ce0:77ff:fe3c:5e83/64 scope link
       valid_lft forever preferred_lft forever
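
The address and routes inside the guest look correct, so next I want to check whether ARP resolution works at all, for example:
Code:
# inside the guest: has the gateway's MAC been learned?
ip neigh show 10.48.33.1
# on the host: which MACs has the OVS bridge learned, and on which ports?
ovs-appctl fdb/show vmbr0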
 
I was forced into a radical experiment: I removed Proxmox v6.2 and installed v5.4.
After that I reproduced all of the previous network settings exactly and installed the container (a CT from a TurnKey image).
All of the network settings worked as they should, so some kind of bug in Proxmox v6.2 seems likely. I'll try to pin down the problem later.
 
I tried to reproduce the problem completely, but I couldn't. Nevertheless, it did happen, which is what makes it so dangerous.
My impression is that if you make intensive changes to the network settings simultaneously in /etc/network/interfaces, in the Proxmox control panel, and inside the VM, while also rebooting the server to apply them, a conflict like this can occur, with very serious consequences. It would not be so bad if the components of the Proxmox network stack produced informative, human-readable messages; as things stand, that is effectively what pushes you toward buying a Proxmox license.