How do you tag an interface in Proxmox with a VLAN?

victorhooi

Member
Apr 3, 2018
Hi,

I'm setting up a new 4-node Proxmox/Ceph HA cluster using 100Gb networking.

Each node will have a single 100Gb link. (Later on, we may look at a second 100Gb link for redundancy).

Previously, we were using 4 x 10Gb links per node:
  • 1 x 10Gb for VM traffic and management
  • 1 x 10Gb for heartbeat (corosync)
  • 2 x 10Gb for Ceph (we didn't separate out the Ceph public vs cluster network)
My question is: on the new architecture, assuming we have to work with a single 100Gb link for now and use VLAN tagging to re-create the above, what is the best way of doing things?

On the switch side, we can configure subinterfaces so that we can run multiple VLANs down the same physical link. We could then use QoS to manage bandwidth between the VLANs.

But how do we make Proxmox aware of these VLANs?

I'm trying to edit network devices on the Network configuration page, but I don't see any way to assign a VLAN to an interface, or to create multiple interfaces across a single physical link.

[Screenshot: Network configuration page]

Although I do see a column for VLAN aware at the top.

Thanks,
Victor
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,

You have to set the VLAN in the network config. If you want VLAN tag 5 on eno2, just add these lines:

Code:
auto vlan5
iface vlan5 inet static
    address 192.168.13.102/24
    vlan-id 5
    vlan-raw-device eno2
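To activate the change without a reboot, something like the following should work (assuming ifupdown2, which recent Proxmox VE versions use by default; on older setups you may need ifup/a networking restart instead):

```shell
# Re-apply /etc/network/interfaces without rebooting
# (ifreload is provided by ifupdown2)
ifreload -a

# Verify the VLAN interface came up and carries tag 5 on eno2
ip -d link show vlan5
```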
 

Arvyr

New Member
Nov 29, 2019
Hi,

are there plans to implement VLAN assignment via Web GUI?
This would make the whole process a bit easier.

Kind regards
 

victorhooi

Member
Apr 3, 2018
OK, so I ended up setting a native VLAN on my switch, so that untagged traffic gets tagged with ID 12 (which is the VLAN for normal Proxmox traffic; 15 is for Ceph, and 19 is for Corosync).

I noticed that there is the option to create a VLAN in the Proxmox GUI:

[Screenshot: Create VLAN option in the Proxmox GUI]

Anyhow, I have created my two extra VLANs (15 and 19) like so:

[Screenshots: VLAN 15 and VLAN 19 interface settings]

Unfortunately, even after a reboot, it doesn't seem to work.

If I try to create a cluster, it hangs at waiting for quorum:

Code:
# pvecm add 10.0.12.3 -link0 10.0.19.4
Please enter superuser (root) password for '10.0.12.3': ********
Establishing API connection with host '10.0.12.3'
The authenticity of host '10.0.12.3' can't be established.
X509 SHA256 key fingerprint is 3E:74:83:0F:6F:BB:7A:DC:E1:02:DE:BC:8F:EC:0C:08:9A:03:68:4D:E1:FF:17:A1:E3:7F:19:10:78:3C:45:93.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1584737566.sql.gz'
waiting for quorum...
Furthermore, I can't seem to ping other hosts in the subnet for the VLANs:
Code:
root@foo-kvm03:~# ping 10.0.19.1
PING 10.0.19.1 (10.0.19.1) 56(84) bytes of data.
From 10.0.19.5 icmp_seq=1 Destination Host Unreachable
From 10.0.19.5 icmp_seq=2 Destination Host Unreachable
From 10.0.19.5 icmp_seq=3 Destination Host Unreachable
^C
--- 10.0.19.1 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 104ms
pipe 4
root@foo-kvm03:~# ping 10.0.19.3
PING 10.0.19.3 (10.0.19.3) 56(84) bytes of data.
From 10.0.19.5 icmp_seq=1 Destination Host Unreachable
From 10.0.19.5 icmp_seq=2 Destination Host Unreachable
From 10.0.19.5 icmp_seq=3 Destination Host Unreachable
^C
--- 10.0.19.3 ping statistics ---
6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 102ms
pipe 4
This is the /etc/network/interfaces from one of the hosts:
Code:
auto lo
iface lo inet loopback

iface enp33s0 inet manual

iface enp99s0 inet manual

auto enp99s0.15
iface enp99s0.15 inet static
    address 10.0.15.5/24
#Ceph

auto enp99s0.19
iface enp99s0.19 inet static
    address 10.0.19.5/24
#Corosync Heartbeat

auto vmbr0
iface vmbr0 inet static
    address 10.0.12.5/23
    gateway 10.0.12.1
    bridge-ports enp33s0
    bridge-stp off
    bridge-fd 0
Is there something wrong in the configuration above? Or any ideas how to get this to work, and create the Proxmox cluster?
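For reference, here are a few diagnostic commands I understand can help narrow down where tagged traffic is being dropped (interface names as in the config above; tcpdump may need to be installed separately):

```shell
# Confirm the VLAN sub-interfaces exist and show their 802.1Q tags
ip -d link show enp99s0.15
ip -d link show enp99s0.19

# Watch for tagged frames arriving on the physical NIC;
# if nothing appears here, the switch side is the likely culprit
tcpdump -e -i enp99s0 vlan 19

# Check ARP state for the unreachable neighbours
ip neigh show dev enp99s0.19
```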
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi victorhooi,

It is not recommended to use the same physical network interface for Corosync and Ceph; this can lead to service interruptions.
 
