Hi!
I've moved some VMs from Fedora 30 KVM hosts to one running Debian Buster (virsh dumpxml / define).
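In other words, roughly this (a sketch; "myvm" stands in for the actual domain names, and the disk images were copied separately):
Code:
# on the Fedora 30 host
virsh dumpxml myvm > myvm.xml
# on the Debian Buster host
virsh define myvm.xml
virsh start myvm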
I should clarify that I am not yet running a full Proxmox deployment; it is more a hybrid of the PVE kernel and QEMU.
The VMs boot fine, but their networking does not work correctly. I am not sure whether the packaged versions here have different default settings that I am now missing.
Even VMs on the same bridge do not reach each other:
Code:
root@vm2021 ~ # ovs-vsctl show
32f20e3e-f937-4b42-b426-133e1422638c
    Bridge "ovsbr39"
        Port "ovsbr39"
            Interface "ovsbr39"
                type: internal
        Port "bond0.39"
            Interface "bond0.39"
    Bridge "ovsbr40"
        Port "ovsbr40"
            Interface "ovsbr40"
                type: internal
        Port "vnet0"
            tag: 388
            Interface "vnet0"
        Port "vnet1"
            tag: 388
            Interface "vnet1"
        Port "bond0.40"
            Interface "bond0.40"
    ovs_version: "2.10.1"
For example, I can see ARP requests flowing out of the VM -> VLAN -> Bridge -> Bond VLAN (QinQ) -> HW-Switch, but the VM never receives a reply. If something were broken in this network setup, I would expect the HW network to be unreachable, but OVS-internal communication should always work.
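To check what OVS itself would do with such a frame, a trace along these lines should work (a sketch; the MAC/IP values are taken from the flow dump further below, and matching in_port by name needs a reasonably recent OVS):
Code:
ovs-appctl ofproto/trace ovsbr40 in_port=vnet0,arp,dl_src=52:54:00:bc:c6:b4,dl_dst=ff:ff:ff:ff:ff:ff,arp_spa=192.168.129.162,arp_tpa=192.168.129.161,arp_op=1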
Code:
root@vm2021 ~ # ovs-ofctl dump-ports ovsbr40
OFPST_PORT reply (xid=0x2): 4 ports
  port LOCAL: rx pkts=433827, bytes=24017868, drop=31, errs=0, frame=0, over=0, crc=0
           tx pkts=373, bytes=48450, drop=0, errs=0, coll=0
  port "bond0.40": rx pkts=433188, bytes=25766602, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=2124, bytes=115925, drop=0, errs=0, coll=0
  port vnet0: rx pkts=1721, bytes=80663, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=552, bytes=66168, drop=0, errs=0, coll=0
  port vnet1: rx pkts=196, bytes=9624, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=75, bytes=8741, drop=0, errs=0, coll=0
Code:
root@vm2021 ~ # ovs-dpctl show
system@ovs-system:
  lookups: hit:381405 missed:58486 lost:0
  flows: 509
  masks: hit:3095782 total:7 hit/pkt:7.04
  port 0: ovs-system (internal)
  port 1: ovsbr39 (internal)
  port 2: bond0.39
  port 3: ovsbr40 (internal)
  port 4: bond0.40
  port 5: vnet0
  port 6: vnet1
Code:
root@vm2021 ~ # ovs-dpctl dump-flows | grep 129.161
recirc_id(0),in_port(5),eth(src=52:54:00:bc:c6:b4,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=192.168.129.162,tip=192.168.129.161,op=1/0xff), packets:1651, bytes:69342, used:0.717s, actions:push_vlan(vid=388,pcp=0),3,4,pop_vlan,6
recirc_id(0),in_port(6),eth(src=52:54:00:e2:9a:65,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=192.168.129.163,tip=192.168.129.161,op=1/0xff), packets:214, bytes:8988, used:0.104s, actions:push_vlan(vid=388,pcp=0),3,4,pop_vlan,5
recirc_id(0),in_port(4),eth(src=52:54:00:81:e8:96,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=388,pcp=0),encap(eth_type(0x0806),arp(sip=192.168.129.161,tip=192.168.129.163,op=1/0xff)), packets:35, bytes:2100, used:0.476s, actions:3,pop_vlan,5,6
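To double-check on the taps themselves that the replies never come back, a plain capture on one of the vnet ports should show it (sketch):
Code:
tcpdump -eni vnet0 arp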
Code:
root@vm2021 ~ # uname -a
Linux vm2021.cloud03.srvfarm.net 5.0.21-5-pve #1 SMP PVE 5.0.21-10 (Wed, 13 Nov 2019 08:27:10 +0100) x86_64 GNU/Linux
Code:
root@vm2021 ~ # ovs-vsctl --version
ovs-vsctl (Open vSwitch) 2.10.1
DB Schema 7.16.1
Code:
root@vm2021 ~ # iptables-save
# Generated by xtables-save v1.8.2 on Sat Nov 16 11:03:44 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Sat Nov 16 11:03:44 2019
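Buster ships both iptables backends (legacy and nft), so the empty iptables-save above may not be the whole picture; to be thorough, the full nft ruleset can be dumped as well:
Code:
nft list ruleset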
I am running the same setup (including the network setup) on FC30, where it works perfectly fine; I tried to get the same thing working with Debian but failed.
Any hints as to what's wrong here?
I've browsed all the logs I could find so far, but there are no errors.
This is currently blocking my evaluation of migration from OpenNebula.
Thank you.
Kind regards
Kevin