Hi all,
I'm in a bit of a pickle fully upgrading my cluster to PVE 7.
Beyond having to fiddle with my interface options (underscores being replaced by hyphens on bond interfaces, didn't expect that one) and managing to get ifupdown2 installed _after_ being cut off from the world (thank god IPKVM exists, though ours still use Java):
Hypervisors work just fine by themselves, ssh/GUI are up via GigE bonds/bridges, Ceph syncs fine and so does the clustering stack over a bonded double 10G loop between the three servers (we have our own racks, but are only just starting to put 10G equipment in), and PVE 6 and 7 have no issue talking to each other.
The 10G cards are dual-port Intel X520 (ixgbe).
But I'm having a hard time grasping why VLAN'd CTs and VMs have zero chatter to and from the PVE 7 host (public IP seems OK now, though it wasn't working initially for some reason).
I can see ARP requests from CTs on the PVE 6 hosts arriving on bridge10.410 (using tcpdump), but not on vmbr410 (which only bridges bridge10.410).
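For reference, this is roughly what I was comparing (from memory, so the exact flags may have differed):
Code:
tcpdump -nei bridge10.410 arp   # ARP from the PVE 6 CTs shows up here
tcpdump -nei vmbr410 arp        # ...nothing at all here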
I did have hwaddress set in the configuration, but since I'm on ifupdown2 now I took those out (with no effect).
I also tried turning TSO/GSO/etc. off with ethtool on the slaves, just in case.
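Roughly what I ran, from memory (the exact offload list may have differed):
Code:
for nic in enp130s0f0 enp132s0f0 enp130s0f1 enp132s0f1; do
    ethtool -K "$nic" tso off gso off gro off
done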
The stack is:
CT with VLAN-tagged interface on vmbr410
  vmbr410 (bridge_vlan_aware yes)
    bridge10.410
      bridge10
        bond100 (bond-slaves enp130s0f0 enp132s0f0)  |  bond101 (bond-slaves enp130s0f1 enp132s0f1)
Here's the relevant configuration:
Code:
auto enp130s0f0
iface enp130s0f0 inet manual
    mtu 9000

auto enp132s0f0
iface enp132s0f0 inet manual
    mtu 9000

auto bond100
iface bond100 inet manual
    bond-slaves enp130s0f0 enp132s0f0
    bond-min-links 1
    mtu 9000
    bond-miimon 100
    bond-mode 802.3ad

auto enp130s0f1
iface enp130s0f1 inet manual
    mtu 9000

auto enp132s0f1
iface enp132s0f1 inet manual
    mtu 9000

auto bond101
iface bond101 inet manual
    bond-slaves enp130s0f1 enp132s0f1
    bond-min-links 1
    mtu 9000
    bond-miimon 100
    bond-mode 802.3ad

auto bridge10
iface bridge10 inet manual
    bridge-ports bond100 bond101
    bridge-stp on
    mtu 9000

auto bridge10.410
iface bridge10.410 inet manual
    mtu 9000

auto vmbr410
iface vmbr410 inet manual
    bridge_ports bridge10.410
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
    mtu 9000
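Since vmbr410 is VLAN-aware, I've also been staring at the kernel's VLAN filtering table to see whether the per-client tags are actually allowed through; roughly this, where the bridge-vids line is only my guess at what an explicit declaration would look like (it is not in my config today):
Code:
# show which VIDs each bridge port currently accepts
bridge vlan show

# if the tags do need to be declared explicitly on vmbr410, I assume it would
# be something like this in /etc/network/interfaces (the range is just an example):
#   bridge-vids 2-4094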
VLAN 410 is for inter-CT/VM communication; each 'client' gets his own VLAN tag on top of the bridge (a CT NIC ends up looking like the example after this paragraph). 498/499 are used for the clustering ring and the storage ring (Ceph here).
VLAN 601 is a simpler vmbr601 on bond0.601, switches in front, for public IP space.
vmbr0 (on bond0) stays in the native VLAN for administration.
CTs and VMs work perfectly fine on the PVE 6 hosts.
(I know mtu 9000 looks odd given the VLAN stacking to account for, but right now I'd be happy just getting a simple ping through.)
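For clarity, a CT NIC on one of those client VLANs looks roughly like this in its /etc/pve/lxc/<vmid>.conf (VMID, MAC and tag here are made up for illustration):
Code:
net0: name=eth0,bridge=vmbr410,hwaddr=AA:BB:CC:DD:EE:FF,tag=1234,type=veth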
Code:
root@xxxxx-priv-01:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph: 15.2.14-pve1
ceph-fuse: 15.2.14-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-6
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
root@xxxxx-priv-01:~#
I have also run into a minor issue when hotplugging interfaces to a CT (haven't tested VMs), where the GUI throws an error but sort of writes the configuration anyway:
Code:
root@xxxxx-priv-01:~# /sbin/ip link add name veth103357010i2 mtu 9000 type veth peer name veth103357010i2p mtu 9000 addr 6A:24:80:2B:C7:C2
Error: argument "veth103357010i2p" is wrong: "name" not a valid ifname
root@xxxxx-priv-01:~# dpkg -S /sbin/ip
iproute2: /sbin/ip
root@xxxxx-priv-01:~# apt list iproute2
Listing... Done
iproute2/stable,now 5.10.0-4 amd64 [installed]
root@xxxxx-priv-01:~#
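(One thing I notice, though I may be off base: the peer name veth103357010i2p is 16 characters, and I believe the kernel caps interface names at 15, so maybe the error is really about the length rather than the syntax?)
Code:
# quick sanity check on the generated peer name (the kernel limit is 15 characters)
echo -n veth103357010i2p | wc -c    # prints 16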
The question isn't whether I'm doing something wrong... more what am I doing wrong? The lack of output on a simple
tcpdump -nvvi vmbr410
makes me wonder if the bridge is restricting VLANs. Do the IDs need to be explicitly declared on vmbr410 (whereas they never had to be before)? Thanks in advance for your input ;-)
JaXX./.