[SOLVED] Cannot reach PBS node from PVE using the Backup VLAN (although I can ping)

mppt

Member
Apr 9, 2021
Hello, this is my first post in the forum. It's my pleasure to be part of this community.

I have recently deployed a Proxmox VE node and a Proxmox Backup Server. The hardware is a pair of HPE ProLiant DL180 Gen10 servers. Besides the iLO NIC, both servers have an integrated dual-port NIC.

Port 1 on both servers is used for the Datacenter VLAN (VLAN 12) and links to an access port for VLAN 12 on the switch. Port 2 on the PBS node connects that node to VLAN 23, the Backup VLAN (a non-routable VLAN); it is linked to an access port for VLAN 23 on the switch. Port 2 on the PVE node, on the other hand, is managed by Open vSwitch. I have defined 3 OVSIntPorts: VLAN 23 (Backup), VLAN 31 (for VMs) and VLAN 32 (for other VMs). The vlan23 interface on this PVE node has an IP address assigned.

I can ping from PVE (from the host) to PBS (and vice versa) using IP addresses on VLAN 12 as well as on VLAN 23, so I guess there are no communication problems in my setup.

When I started to configure the PBS, I realized that I was not able to use a PBS datastore from the PVE node. Actually, I could define the PBS storage, but I could not use it in a backup job: when I clicked the combobox to select the storage, I got a communication failure error. That was until I started to use IP addresses from VLAN 12 (the Datacenter VLAN).

Although I can ping the PBS node using its VLAN 23 IP address, I cannot use that address to reach the node in a backup job.
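
For what it's worth, ping only proves ICMP reachability. Since PBS serves its GUI and API on TCP port 8007, a TCP-level check is closer to what the storage connection actually does. A quick sketch, using my backup-VLAN address:

nc -zv 10.23.0.14 8007             # bare TCP connect to the PBS API port
curl -k https://10.23.0.14:8007/   # full TLS request against the GUI/API

If the bare connect succeeds but the full request stalls, the problem lies beyond basic reachability (a firewall acting on larger packets, MTU issues, and so on).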

This is my /etc/network/interfaces file:

auto lo
iface lo inet loopback

iface eno1 inet manual

allow-vmbr1 eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

allow-vmbr1 vlan23
iface vlan23 inet static
    address 10.23.0.11/16
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_mtu 9000
    ovs_options tag=23
#VLAN 23 - Backup (MTU 9000)

allow-vmbr1 vlan31
iface vlan31 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=31
#VLAN 31 - Production

allow-vmbr1 vlan32
iface vlan32 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=32
#VLAN 32 - Virtual PC

auto vmbr0
iface vmbr0 inet static
    address 10.0.12.11/24
    gateway 10.0.12.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
#VLAN 12 - Datacenter

allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno2 vlan23 vlan31 vlan32
    ovs_mtu 9000
#Open vSwitch bridge
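
In case it is useful, the MTU each port actually reports can be checked from the shell. A quick sketch (interface names are taken from the config above; ovs-vsctl ships with the Open vSwitch package):

ip link show eno2 | grep -o 'mtu [0-9]*'     # effective MTU of the physical port
ip link show vlan23 | grep -o 'mtu [0-9]*'   # effective MTU of the OVS internal port
ovs-vsctl get Interface eno2 mtu             # MTU as recorded by Open vSwitch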

I guess there is something I am not doing well in networking, but the working ping is driving me crazy. By the way, Port 2 on the PVE node is linked to a trunk port on the switch.
 
Can you also show /etc/network/interfaces of PBS, please?
And what are the tagged and untagged VLANs of the switch port that PVE's eno2 is connected to?
 
Hi @ph0x

This is /etc/network/interfaces of PBS:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
    address 10.0.12.14/24
    gateway 10.0.12.1

auto eno2
iface eno2 inet static
    address 10.23.0.14/16
    mtu 9000
#VLAN 23 - Backup (MTU 9000)


PBS:
eno1 connected to access port VLAN 12
eno2 connected to access port VLAN 23

PVE:
eno1 connected to access port VLAN 12
eno2 connected to trunk port VLAN 23, 31 and 32 (no native VLAN)

VLAN 12 -> Datacenter (10.0.12.0/24)
VLAN 23 -> Backup (10.23.0.0/16)
VLAN 31, 32 -> Production VLAN for VM

From PVE:
ping to 10.0.12.14 OK without hops [VLAN 12]
ping to 10.23.0.14 OK without hops [VLAN 23]

From PBS:
ping to 10.0.12.11 OK without hops [VLAN 12]
ping to 10.23.0.11 OK without hops [VLAN 23]
 
This looks sane.
MTU 9000 is also configured on the switch, I assume?
Can you post the content of /etc/pve/storage.cfg, then?
 
Hi @ph0x,

Indeed, the MTU is configured on the switch at its default maximum (9216) on all involved ports.

This is my current (full) /etc/pve/storage.cfg file:

dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

zfspool: local-zfs
    pool tank
    content images,rootdir
    mountpoint /tank
    sparse 0

dir: iso
    path /tank/iso
    content images,iso
    prune-backups keep-all=1
    shared 0

pbs: test
    disable
    datastore test
    server 10.23.0.14
    content backup
    fingerprint 27:09:ea:fe:92:99:a8:be:65:88:c8:20:7d:66:b6:30:da:df:32:75:c3:0f:b2:4c:9b:0e:be:7d:5b:aa:e5:7a
    prune-backups keep-all=1
    username dldcn01@pbs

pbs: test2
    datastore test
    server 10.0.12.14
    content backup
    fingerprint 27:09:ea:fe:92:99:a8:be:65:88:c8:20:7d:66:b6:30:da:df:32:75:c3:0f:b2:4c:9b:0e:be:7d:5b:aa:e5:7a
    prune-backups keep-all=1
    username dldcn01@pbs

As you can see, the only difference between the "test" and "test2" storages is the server parameter (and that the "test" storage is disabled). Both storages in PVE point to the same datastore on PBS (test), but "test2" uses the 10.0.12.14 IP address (VLAN 12 - Datacenter) and "test" uses the 10.23.0.14 IP address (VLAN 23 - Backup).

If I enable "test" storage I can not even define a buckup job because the storages combobox does not fill itself, but If I disable "test"storage, everything works fine. In fact, I have performed a backup job of 2 test VM using "test2" datastore.

I know that I have a workaround and I can perform backups (using the Datacenter VLAN), but I have defined a specific VLAN for backups and I am not able to use it.

It is my first Proxmox deployment, so I lack experience with this wonderful piece of software. For sure, I am missing some detail at some point.
 
mppt said:
I know that I have a workaround and I can perform backups (using the Datacenter VLAN), but I have defined a specific VLAN for backups and I am not able to use it.

It is my first Proxmox deployment, so I lack experience with this wonderful piece of software. For sure, I am missing some detail at some point.
Obviously, but I have to admit that I can't see which detail you're missing. The whole config seems correct.

Can you put a VM or another machine on the backup VLAN and try to reach the PBS GUI through it?

Additionally, what's the output of ss -tulpn | grep proxmox on the backup server?
 
Hi @ph0x

I think I have solved the problem!!

I focused my suspicions on the MTU, so I performed a test.

From the PVE host, I pinged PBS with a payload size of 1000 bytes (below the default 1500-byte MTU):

ping -s 1000 10.23.0.14 => OK, it worked

Then I tried a payload size of 2000 bytes (above the default 1500-byte MTU):

ping -s 2000 10.23.0.14 => It failed. Gotcha!!
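
By the way, a stricter version of this test sets the Don't Fragment bit, so fragmentation along the path cannot mask the problem. For a 9000-byte MTU the largest ICMP payload is 8972 bytes (9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

ping -M do -s 1472 10.23.0.14   # largest payload that fits in a 1500-byte MTU
ping -M do -s 8972 10.23.0.14   # largest payload that fits in a 9000-byte MTU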

So I reviewed the /etc/network/interfaces file and added the line ovs_mtu 9000 to the eno2 interface definition:

allow-vmbr1 eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_mtu 9000

That solved the problem!! This was the missing detail: I had set the MTU on the vmbr1 bridge and on the vlan23 interface, but not on the physical eno2 interface, so frames larger than 1500 bytes were being dropped there.
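
In case it helps anyone, this is roughly how the change can be applied and verified afterwards (ifreload -a needs ifupdown2; a restart of networking or a reboot works as well):

ifreload -a                                # reload the network configuration
ip link show eno2 | grep -o 'mtu [0-9]*'   # should now report mtu 9000
ping -M do -s 8972 10.23.0.14              # jumbo frames now pass end to end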

Now I can enable the test storage on PVE and use it to define backup jobs.

Thanks @ph0x !!
 