Hi,
First, please excuse any errors in my language; I'm French, so my writing may not be perfect.
I have a stack of Cisco C2960s.
I have already configured 5 nodes in a cluster without any problems (Dell: 1x R510, 3x R710, 1x R720).
Now I'm trying to configure a Dell R530 the same way to add it to the cluster, but it's not fully working.
Since the configuration seems to be exactly the same, I don't know why it fails.
The node that doesn't work is PVE2 (the R530); as a comparison, I'll show PVE3 (an R720), which works fine.
The LACP bond dedicated to Ceph works great on all the nodes (PVE2 included).
The LACP bond dedicated to guests and host access fails only on PVE2.
On the switch
- For PVE 2
Code:
interface Port-channel7
description PVE 2 VMs Agregat
switchport mode trunk
!
interface GigabitEthernet1/0/7
description PVE 2 VMs
switchport mode trunk
channel-group 7 mode active
!
interface GigabitEthernet2/0/7
description PVE 2 VMs
switchport mode trunk
channel-group 7 mode active
!
Code:
interface Port-channel8
description PVE 2 Ceph Agregat
switchport access vlan 100
switchport mode access
!
interface GigabitEthernet1/0/8
description PVE 2 Ceph
switchport access vlan 100
switchport mode access
channel-group 8 mode active
!
interface GigabitEthernet2/0/8
description PVE 2 Ceph
switchport access vlan 100
switchport mode access
channel-group 8 mode active
!
- For PVE 3
Code:
interface Port-channel9
description PVE 3 VMs Agregat
switchport mode trunk
!
interface GigabitEthernet1/0/9
description PVE 3 VMs
switchport mode trunk
channel-group 9 mode active
!
interface GigabitEthernet2/0/9
description PVE 3 VMs
switchport mode trunk
channel-group 9 mode active
!
Code:
interface Port-channel10
description PVE 3 Ceph Agregat
switchport access vlan 100
switchport mode access
!
interface GigabitEthernet1/0/10
description PVE 3 Ceph
switchport access vlan 100
switchport mode access
channel-group 10 mode active
!
interface GigabitEthernet2/0/10
description PVE 3 Ceph
switchport access vlan 100
switchport mode access
channel-group 10 mode active
!
- LACP looks fine on every interface
Code:
gra-prod-sw-coeur-001#sh etherchannel summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port
Number of channel-groups in use: 18
Number of aggregators: 18
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
7 Po7(SU) LACP Gi1/0/7(P) Gi2/0/7(P)
8 Po8(SU) LACP Gi1/0/8(P) Gi2/0/8(P)
9 Po9(SU) LACP Gi1/0/9(P) Gi2/0/9(P)
10 Po10(SU) LACP Gi1/0/10(P) Gi2/0/10(P)
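For completeness, the same 802.3ad negotiation can be checked from the Linux side. This is just a sketch, assuming the bond names (bond0 and bond100) used in the node configurations below:

```
# 802.3ad state as reported by the kernel bonding driver (run on the node):
grep -E 'Bonding Mode|MII Status|Aggregator ID|Partner Mac' \
    /proc/net/bonding/bond0 /proc/net/bonding/bond100
```

Both bonds should report "MII Status: up" and matching aggregator IDs on their slaves if the LACP negotiation succeeded.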
On the nodes
- For PVE2
Code:
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
# Virtualization physical interface
auto eno2
iface eno2 inet manual
# Virtualization physical interface
auto eno3
iface eno3 inet manual
# Ceph physical interface
auto eno4
iface eno4 inet manual
# Ceph physical interface
auto bond100
iface bond100 inet manual
bond-slaves eno3 eno4
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
# Ceph bond
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
# Virtualization bond
auto vmbr100
iface vmbr100 inet static
address 192.168.100.2/24
bridge-ports bond100
bridge-stp off
bridge-fd 0
# Ceph interface
auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
# Virtualization interface
auto vlan3
iface vlan3 inet static
address 192.168.3.102/24
gateway 192.168.3.1
vlan-raw-device vmbr0
# Management interface
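Since PVE2 still runs classic ifupdown (see the package lists at the end), one thing that might be worth trying is the `<device>.<vlan-id>` naming convention instead of `vlan-raw-device`. This is purely a sketch, and only relevant if classic ifupdown's VLAN hooks are indeed what chokes here; the `vmbr0.3` name lets the hooks derive the raw device and VLAN ID from the interface name itself:

```
# Hypothetical alternative to the vlan3 stanza for classic ifupdown:
# the name vmbr0.3 implies "VLAN 3 on top of vmbr0".
auto vmbr0.3
iface vmbr0.3 inet static
        address 192.168.3.102/24
        gateway 192.168.3.1
# Management interface
```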
Code:
root@gra-prod-srv-pve-002:~# systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-11-12 12:18:31 CET; 2min 1s ago
Docs: man:interfaces(5)
Process: 1144 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Main PID: 1144 (code=exited, status=1/FAILURE)
Nov 12 12:18:29 gra-prod-srv-pve-002 systemd[1]: Starting Raise network interfaces...
Nov 12 12:18:30 gra-prod-srv-pve-002 ifup[1144]: Waiting for vmbr100 to get ready (MAXWAIT is 2 seconds).
Nov 12 12:18:30 gra-prod-srv-pve-002 ifup[1144]: Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
Nov 12 12:18:31 gra-prod-srv-pve-002 ifup[1144]: RTNETLINK answers: Operation not supported
Nov 12 12:18:31 gra-prod-srv-pve-002 ifup[1144]: run-parts: /etc/network/if-up.d/bridgevlanport exited with return code 255
Nov 12 12:18:31 gra-prod-srv-pve-002 ifup[1144]: ifup: failed to bring up vlan3
Nov 12 12:18:31 gra-prod-srv-pve-002 systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 12:18:31 gra-prod-srv-pve-002 systemd[1]: networking.service: Failed with result 'exit-code'.
Nov 12 12:18:31 gra-prod-srv-pve-002 systemd[1]: Failed to start Raise network interfaces.
and when I remove the vlan3 interface from /etc/network/interfaces:
Code:
root@gra-prod-srv-pve-002:~# systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2021-11-12 12:14:52 CET; 2min 17s ago
Docs: man:interfaces(5)
Process: 1126 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
Main PID: 1126 (code=exited, status=0/SUCCESS)
Nov 12 12:14:49 gra-prod-srv-pve-002 systemd[1]: Starting Raise network interfaces...
Nov 12 12:14:51 gra-prod-srv-pve-002 ifup[1126]: Waiting for vmbr100 to get ready (MAXWAIT is 2 seconds).
Nov 12 12:14:51 gra-prod-srv-pve-002 ifup[1126]: Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
Nov 12 12:14:52 gra-prod-srv-pve-002 systemd[1]: Started Raise network interfaces.
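With the stanza removed, the failing step can still be probed by hand to see exactly which netlink operation the kernel rejects. This is a hypothetical reproduction (root required) using the interface names from the failure log above; it is not guaranteed to hit the same code path as the `bridgevlanport` hook:

```
# Create the 802.1q sub-interface roughly the way the if-up hook would:
ip link add link vmbr0 name vlan3 type vlan id 3
ip addr add 192.168.3.102/24 dev vlan3
ip link set vlan3 up
# If the above succeeds, the "Operation not supported" may instead come
# from the bridge VLAN filtering step on the bridge ports:
bridge vlan show dev bond0
```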
- For PVE3
Code:
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
# Virtualization physical interface
auto eno2
iface eno2 inet manual
# Virtualization physical interface
auto eno3
iface eno3 inet manual
# Ceph physical interface
auto eno4
iface eno4 inet manual
# Ceph physical interface
auto bond100
iface bond100 inet manual
bond-slaves eno3 eno4
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
# Ceph bond
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
# Virtualization bond
auto vmbr100
iface vmbr100 inet static
address 192.168.100.3/24
bridge-ports bond100
bridge-stp off
bridge-fd 0
# Ceph interface
auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
# Virtualization interface
auto vlan3
iface vlan3 inet static
address 192.168.3.103/24
gateway 192.168.3.1
vlan-raw-device vmbr0
# Management interface
Code:
root@gra-prod-srv-pve-003:~# systemctl status networking.service
● networking.service - Network initialization
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2021-10-07 10:39:37 CEST; 1 months 5 days ago
Docs: man:interfaces(5)
man:ifup(8)
man:ifdown(8)
Main PID: 985 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 7372)
Memory: 0B
CGroup: /system.slice/networking.service
Oct 07 10:39:36 gra-prod-srv-pve-003 systemd[1]: Starting Network initialization...
Oct 07 10:39:36 gra-prod-srv-pve-003 networking[985]: networking: Configuring network interfaces
Oct 07 10:39:37 gra-prod-srv-pve-003 systemd[1]: Started Network initialization.
I don't know what else to try; I'm out of ideas.
I need access to the host and to some guests on the same VLAN.
Any help will be greatly appreciated.
Edit:
There is in fact a difference in package versions (PVE2 runs classic ifupdown 0.8.35, while PVE3 runs ifupdown2 3.0.0), but that should not be a regression unless I'm not using the right syntax or the right logic.
Code:
root@gra-prod-srv-pve-002:~# pveversion
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.143-1-pve)
root@gra-prod-srv-pve-002:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-helper: 6.4-8
pve-kernel-5.4: 6.4-7
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
Code:
root@gra-prod-srv-pve-003:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.140-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-6
pve-kernel-helper: 6.4-6
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph: 15.2.14-pve1~bpo10
ceph-fuse: 15.2.14-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-1
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1
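Looking at the two version lists, one concrete difference stands out: PVE3 uses ifupdown2 while PVE2 still has classic ifupdown 0.8.35, and a `vlan-raw-device` pointing at a VLAN-aware bridge is exactly the kind of stanza the two implementations handle differently. A possible way to align PVE2 with PVE3 is sketched below; do this from a console rather than over SSH, since it replaces the network scripts:

```
# On PVE2: switch to ifupdown2, as already installed on PVE3.
# (apt removes classic ifupdown automatically, since the packages conflict.)
apt update
apt install ifupdown2
# Re-apply /etc/network/interfaces with ifupdown2's reload command:
ifreload -a
```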