Proxmox VM and the two NICs

rimvydukas

Member
Dec 2, 2021
Hi,
I've run into a problem and don't know how to solve it. My NIC config:

Code:
iface eno1 inet manual

iface eno1.20 inet manual

iface eno1.191 inet manual

auto vmbr20
iface vmbr20 inet static
        address 172.18.16.250/16
        gateway 172.18.16.254
        bridge-ports eno1.20
        bridge-stp off
        bridge-fd 0

auto vmbr191
iface vmbr191 inet static
        address 172.20.17.251/24
        bridge-ports eno1.191
        bridge-stp off
        bridge-fd 0

I have a VM connected to the vmbr191 bridge. This VM has two vNICs, both with IPs from the same subnet. For simplicity, I test network connectivity from the Proxmox host itself. So:

ping -I 172.18.16.250 "first vNIC's IP" - everything works just fine.
ping -I 172.18.16.250 "second vNIC's IP" - request timeout

When I ping both IPs using 172.20.17.251 as the source, I have no problems at all.

So my question is: what am I missing?
 
You shouldn't have two IPs in the same subnet unless you know what you are doing and you set up some additional routes. This will confuse the OS and might screw up routing. Why do you need two vNICs in the same subnet? What are you trying to accomplish?
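To make the confusion concrete, here's a small `ipaddress` sketch (the two vNIC addresses are made up, since the thread doesn't list them): with both guest IPs inside one prefix, the guest has a single connected route for the whole subnet, and by default Linux will answer ARP and send replies out whichever interface that route selects, not necessarily the one that was pinged.

```python
import ipaddress

# Hypothetical guest setup: two vNICs, both numbered out of the same /24.
subnet = ipaddress.ip_network("172.20.17.0/24")
vnic1 = ipaddress.ip_address("172.20.17.10")  # assumed first vNIC IP
vnic2 = ipaddress.ip_address("172.20.17.11")  # assumed second vNIC IP

# Both addresses fall under the same prefix, so the guest kernel keeps a
# single connected route for the /24 and picks one egress interface for
# all of it -- replies meant for the second vNIC can leave via the first.
print(vnic1 in subnet, vnic2 in subnet)  # True True
```

The usual workarounds are per-interface policy routing (`ip rule`) or ARP sysctl tuning (`arp_ignore`/`arp_filter`), but as the post says, separate subnets are the simpler fix.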
 
Hello all

I was just perusing the forum and came across this thread.

I am new to Proxmox and am currently deploying it on one server. Once I have learned enough to feel confident, I will deploy Proxmox on a second server.

The VMs on both servers will be OPNsense, TrueNAS, Home Assistant, and Windows 10.

In the case of both TrueNAS and, to some extent, OPNsense, I will need to use more than one NIC.

My intention is to add a 10 or 20 Gb NIC to both servers and dedicate it to the TrueNAS VM so rsync can complete the backup more quickly.

@Dunuin, if I understand you correctly, as long as I assign a different network ID and a different IP address to each port, I should be OK.

BTW, the Dell server has two NIC ports; I will only assign these to OPNsense on one server.

Thank you for your patience and the help you will provide.
 
My intention is to add a 10 or 20 Gb NIC to both servers and dedicate it to the TrueNAS VM so rsync can complete the backup more quickly.
If you want to back up the data on the TrueNAS VM itself, I would use replication and not rsync.

@Dunuin, if I understand you correctly, as long as I assign a different network ID and a different IP address to each port, I should be OK.
Yes, you just shouldn't add multiple NICs of an OS to the same subnet.
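As a sanity check for that rule, the two per-NIC subnets just need to be non-overlapping prefixes. A quick sketch with placeholder networks (the thread doesn't give the real addressing):

```python
import ipaddress

# Placeholder subnets: one per NIC of the same guest OS.
lan_net = ipaddress.ip_network("192.168.10.0/24")      # general traffic
storage_net = ipaddress.ip_network("192.168.20.0/24")  # dedicated rsync/backup link

# Non-overlapping prefixes give the OS one unambiguous route per NIC.
print(lan_net.overlaps(storage_net))  # False
```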
 
You shouldn't have two IPs in the same subnet unless you know what you are doing and you set up some additional routes. This will confuse the OS and might screw up routing. Why do you need two vNICs in the same subnet? What are you trying to accomplish?
Hi,

I thought about it once more, and maybe you are right; I'll leave this idea alone. I ended up with a different IP from a different subnet.
 
If you want to back up the data on the TrueNAS VM itself, I would use replication and not rsync.
Thank you all

My understanding is that replication compresses and encrypts the data. I need a plain copy of the data because, in the event that the main server is down, I just switch Kodi to point to the backup server.
Is this assumption correct?

Yes, you just shouldn't add multiple NICs of an OS to the same subnet.
I only have, for the moment, one network, 192.168.XXa.XXa. This is all on a Gb network, which is fine for my purposes.
In my mind, by adding the 2nd NIC (just to use with rsync), I would assign a different IP address (192.168.xxa.xxb) to that NIC, and my rsync tasks would complete in less time (right now it takes 17 hours).
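A back-of-envelope check on that hope, assuming (optimistically) that the current 17 hours are limited by the 1 Gb link rather than by the disks or rsync's per-file overhead:

```python
# If a saturated 1 Gb/s link is the bottleneck for a 17-hour run, the same
# data volume over a saturated 10 Gb/s link takes a tenth of the time.
hours_at_1g = 17
data_gigabits = hours_at_1g * 3600 * 1.0    # link speed (Gb/s) * seconds
hours_at_10g = data_gigabits / 10.0 / 3600  # same volume at 10 Gb/s
print(round(hours_at_10g, 2))  # 1.7
```

In practice, spinning-disk throughput and rsync's checksumming often cap transfers well below 10 Gb/s, so treat that ~1.7 hours as a best case.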

Once this is done, I will dabble with VLANs on my OPNsense and add my servers to a separate VLAN.

FYI, I do not really need all this complication. I just do it to learn something new and amuse myself.


Once again thank you all for your help
 
I have a similar issue (I'll start a new thread if people think it's the right thing to do).

I acquired an "older" server and loaded it up with HDs and memory. I installed Proxmox and created a couple of VMs (Windows 2012R2 and Debian). All of this is running behind the office router, which hands out DHCP addresses to the VMs (I'll move them to static addresses after a bit). The inside network is 192.168.0.0/24.

I can ping/surf without problems from the 2 VMs getting DHCP addresses. (Yay me!)

I wanted to move one of our troublesome servers to the same hardware as the third VM (this is also Debian and will run Asterisk).

VM3 will be in the subnet 172.27.27.0/24. This subnet has its own gateway router.

I can successfully ping other IPs in the 172.27.27.0/24 range, but I cannot get VM3 to route its traffic through 172.27.27.1 (the gateway router's IP).

I guess I don't care how internet traffic gets to VM3 but my hope was to keep the subnets separate.

I have set up a second virtual bridge and given it the IP address 172.27.27.253.
I have given VM3 the IP 172.27.27.252.

Any help would be appreciated.
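Two things worth separating here (a sketch, not a diagnosis): vmbr1 is a plain layer-2 bridge, so it doesn't route anything itself, and the default gateway has to be configured inside VM3's own OS; it must also sit inside VM3's prefix to be usable as a next hop. A small check using the addresses from the post:

```python
import ipaddress

vm3 = ipaddress.ip_interface("172.27.27.252/24")  # VM3's address as posted
gateway = ipaddress.ip_address("172.27.27.1")     # the subnet's gateway router

# The next hop lies inside VM3's own /24, so it is reachable on-link and a
# guest-side setting such as "gateway 172.27.27.1" (ifupdown) is valid;
# the Proxmox bridge only needs to forward frames between eno4 and the
# VM's tap interface.
print(gateway in vm3.network)  # True
```

If the guest already has that gateway set and it still fails, the next suspects are the gateway router itself (does it answer ARP for 172.27.27.1?) and whether eno4 is actually cabled into that router's segment.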

Code:
ip a

lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a0:d3:c1:ef:a9:b8 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f0
eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:d3:c1:ef:a9:b9 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f1
eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:d3:c1:ef:a9:ba brd ff:ff:ff:ff:ff:ff
    altname enp3s0f2
eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether a0:d3:c1:ef:a9:bb brd ff:ff:ff:ff:ff:ff
    altname enp3s0f3
vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:d3:c1:ef:a9:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.235/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::a2d3:c1ff:feef:a9b8/64 scope link
       valid_lft forever preferred_lft forever
vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:d3:c1:ef:a9:bb brd ff:ff:ff:ff:ff:ff
    inet 172.27.27.253/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::a2d3:c1ff:feef:a9bb/64 scope link
       valid_lft forever preferred_lft forever
     
ip route show

default via 192.168.0.1 dev vmbr0 proto kernel onlink
172.27.27.0/24 dev vmbr1 proto kernel scope link src 172.27.27.253
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.235

cat /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.235/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 172.27.27.253/24
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0
 
