Hi there,
I am currently configuring access to a separate storage network (on its own switch) for a VM running on Proxmox, and I'm running into the issue that the VM can only ping some of the machines on that storage network, but not others. The VM can reach and ping machines on the regular Proxmox management network just fine.
The Proxmox host has a 2-port PCIe 10Gb NIC that I configured as an active-backup bond in Proxmox, and this bond is the bridge port of a bridge called vmbr1. This bridge is given to the VM as its second network interface (the first one being the normal Proxmox management network on vmbr0).
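For reference, the host side of this looks roughly like the following in the Proxmox host's /etc/network/interfaces (the NIC names are just placeholders for the two 10Gb ports, so treat this as a sketch rather than my exact config):
Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode active-backup
    bond-miimon 100
    mtu 9000

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000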
This bridge is visible as network interface ens19 inside the VM and is configured there via /etc/network/interfaces like this:
Code:
auto ens19
iface ens19 inet static
    mtu 9000
    address 192.168.0.122
    netmask 255.255.255.0
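(For completeness: the same settings can usually also be applied without a full reboot by bringing the interface down and up again, though I simply rebooted.)
Code:
ifdown ens19 && ifup ens19
ip link show ens19    # verify that mtu 9000 is now set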
After a reboot, the VM shows that it applied both the jumbo frames and the IP (via ip a):
Code:
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
link/ether bc:24:11:f1:07:cc brd ff:ff:ff:ff:ff:ff
altname enp0s19
inet 192.168.0.122/24 brd 192.168.0.255 scope global ens19
valid_lft forever preferred_lft forever
inet6 fe80::be24:cb78:fef1:36c/64 scope link
valid_lft forever preferred_lft forever
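In case an MTU mismatch somewhere along the path plays a role, jumbo frames can be verified end-to-end with a non-fragmentable ping sized to the 9000-byte MTU (the target IPs below are the two example machines shown further down):
Code:
# 8972 bytes of payload + 8 bytes ICMP header + 20 bytes IP header = 9000
ping -M do -s 8972 192.168.0.101
ping -M do -s 8972 192.168.0.102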
Now, strangely, when I try to ping other machines over this storage network, the ping only goes through for some of them but not for others. The machines I can ping can also ping the VM back; the ones I can't ping cannot reach the VM either. Among themselves, all of these other machines can ping each other perfectly fine.
The configuration on all machines (pingable and non-pingable alike) is the same, including the jumbo frames, except for the assigned IP of course. There is no apparent reason why some should be reachable and others not: they are all physically connected to the same switch, and there is no firewall in play between them.
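One thing that might help narrow it down is whether ARP even resolves for the unreachable addresses (if it doesn't, the problem is at layer 2 rather than at the IP level). For example, from inside the VM, with 192.168.0.101 being one of the non-pingable machines shown below:
Code:
ip neigh show dev ens19              # current ARP/neighbour entries on the storage interface
arping -I ens19 -c 3 192.168.0.101   # does the non-pingable host answer ARP at all?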
Running ip a on the other machines in this network shows the same parameters as inside the VM, except that the other machines show mq instead of fq_codel as the queueing discipline for their interface (not sure whether that matters).
Example of ip a from one of the non-pingable machines:
Code:
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 90:e2:ba:7d:77:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.0.101/24 brd 192.168.0.255 scope global enp1s0f1
valid_lft forever preferred_lft forever
inet6 fe80::92e2:7cff:fe7d:2fd/64 scope link
valid_lft forever preferred_lft forever
Example of ip a from one of the pingable machines:
Code:
4: enp129s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:38:f4:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.0.102/24 brd 192.168.0.255 scope global enp129s0f0
valid_lft forever preferred_lft forever
inet6 fe80::3eec:efff:fefe:84ba/64 scope link
valid_lft forever preferred_lft forever
Not sure what is happening here... any suggestions would be helpful!
-----------------------
Solution:
TL;DR: The switch ports that the Proxmox host was connected to were configured for LACP on the switch side, but the bond0 I created in Proxmox was set to "active-backup". This mismatch, for some reason, split the storage network in two from the point of view of the Proxmox host and its VM.
If anybody can explain to me WHY this misconfiguration behaves like this, I'm more than happy to hear it! ;-)
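For anyone who runs into the same thing: the mode the bond is actually running in can be checked on the Proxmox host with cat /proc/net/bonding/bond0. The mismatch goes away either by reconfiguring the switch ports as plain (non-LACP) ports, or by changing the bond on the Proxmox side to LACP so it matches the switch. A rough sketch of the latter in the host's /etc/network/interfaces (NIC names are placeholders again, and the hash policy is just a common choice):
Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
    mtu 9000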