Separate corosync network: nodelist or quorum.expected_votes must be configured

7thSon

I'm trying to separate the corosync network in our 3-node cluster (Proxmox 5.3-11) as described in https://pve.proxmox.com/wiki/Separate_Cluster_Network
After copying the new corosync.conf to /etc/corosync/corosync.conf I'm getting the following error:

Code:
Mar 19 10:22:57 proxmox-c1-n3 pmxcfs[3876]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 15)
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]: notice  [CFG   ] Config reload requested by node 1
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]:  [CFG   ] Config reload requested by node 1
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]: crit    [VOTEQ ] configuration error: nodelist or quorum.expected_votes must be configured!
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]: crit    [VOTEQ ] will continue with current runtime data
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]:  [VOTEQ ] configuration error: nodelist or quorum.expected_votes must be configured!
Mar 19 10:22:57 proxmox-c1-n3 corosync[3956]:  [VOTEQ ] will continue with current runtime data
Mar 19 10:22:57 proxmox-c1-n3 pmxcfs[3876]: [status] notice: update cluster info (cluster name  proxmox-c1, version = 15)

According to https://pve.proxmox.com/wiki/Separate_Cluster_Network#quorum.expected_votes_must_be_configured, that's caused by wrong entries in the hosts file(s).

My hosts file on all 3 cluster nodes contains:

Code:
10.55.1.1 coro0-proxmox-c1-n1.mydomain.com coro0-proxmox-c1-n1
10.55.1.2 coro0-proxmox-c1-n2.mydomain.com coro0-proxmox-c1-n2
10.55.1.3 coro0-proxmox-c1-n3.mydomain.com coro0-proxmox-c1-n3

And corosync.conf looks like this:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: proxmox-c1-n1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n1
  }
  node {
    name: proxmox-c1-n2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n2
  }
  node {
    name: proxmox-c1-n3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n3
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: proxmox-c1
  config_version: 15
  interface {
    bindnetaddr: 10.55.1.0
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}


All 3 names (coro0-proxmox-c1-n1, coro0-proxmox-c1-n2, coro0-proxmox-c1-n3) are resolvable on all 3 cluster nodes. I've already tried using the corresponding IP addresses as ring0_addr values, but I get the same error.
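
For reference, a quick way to re-check resolution on each node (just a sketch, using the coro0-* names from my hosts files) is:

Code:
# verify that every corosync hostname resolves (via /etc/hosts) on this node
for h in coro0-proxmox-c1-n1 coro0-proxmox-c1-n2 coro0-proxmox-c1-n3; do
  getent hosts "$h"
done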

According to my research the problem could also be multicast-related, but testing with omping doesn't show any issues:

Code:
omping -c 10000 -i 0.001 -F -q coro0-proxmox-c1-n1 coro0-proxmox-c1-n2 coro0-proxmox-c1-n3
coro0-proxmox-c1-n2 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.047/0.127/0.283/0.036
coro0-proxmox-c1-n2 : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.051/0.134/0.286/0.036
coro0-proxmox-c1-n3 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.049/0.143/0.304/0.042
coro0-proxmox-c1-n3 : multicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.059/0.150/0.307/0.041
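
A longer-running multicast test (about 10 minutes at 1 packet per second, which should also catch IGMP snooping timeouts that only show up after a few minutes) would roughly look like this, assuming the same node names as above:

Code:
# ~10 minute multicast test to catch IGMP snooping querier timeouts
omping -c 600 -i 1 -q coro0-proxmox-c1-n1 coro0-proxmox-c1-n2 coro0-proxmox-c1-n3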


Any ideas what I'm doing wrong?
 
Please post your /etc/hosts file. Did you increment the 'config_version' after updating everything?
Did you modify /etc/pve/corosync.conf or /etc/corosync/corosync.conf? Please post both of those as well.
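
Something like this on each node would be enough to compare both files and show the active config_version (just a quick sketch):

Code:
diff /etc/pve/corosync.conf /etc/corosync/corosync.conf
grep config_version /etc/pve/corosync.conf /etc/corosync/corosync.conf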
 
Please post your /etc/hosts file

/etc/hosts on node1:
Code:
127.0.0.1 localhost.localdomain localhost

172.30.8.41 proxmox-c1-n1.proxmox.mydomain.com proxmox-c1-n1 pvelocalhost
172.30.8.42 proxmox-c1-n2.proxmox.mydomain.com proxmox-c1-n2
172.30.8.43 proxmox-c1-n3.proxmox.mydomain.com proxmox-c1-n3

10.55.1.1 coro0-proxmox-c1-n1.proxmox.mydomain.com coro0-proxmox-c1-n1
10.55.1.2 coro0-proxmox-c1-n2.proxmox.mydomain.com coro0-proxmox-c1-n2
10.55.1.3 coro0-proxmox-c1-n3.proxmox.mydomain.com coro0-proxmox-c1-n3

10.55.2.1 coro1-proxmox-c1-n1.proxmox.mydomain.com coro1-proxmox-c1-n1
10.55.2.2 coro1-proxmox-c1-n2.proxmox.mydomain.com coro1-proxmox-c1-n2
10.55.2.3 coro1-proxmox-c1-n3.proxmox.mydomain.com coro1-proxmox-c1-n3

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

/etc/hosts on node2:
Code:
127.0.0.1 localhost.localdomain localhost

172.30.8.42 proxmox-c1-n2.proxmox.mydomain.com proxmox-c1-n2 pvelocalhost
172.30.8.41 proxmox-c1-n1.proxmox.mydomain.com proxmox-c1-n1
172.30.8.43 proxmox-c1-n3.proxmox.mydomain.com proxmox-c1-n3

10.55.1.2 coro0-proxmox-c1-n2.proxmox.mydomain.com coro0-proxmox-c1-n2
10.55.1.1 coro0-proxmox-c1-n1.proxmox.mydomain.com coro0-proxmox-c1-n1
10.55.1.3 coro0-proxmox-c1-n3.proxmox.mydomain.com coro0-proxmox-c1-n3

10.55.2.2 coro1-proxmox-c1-n2.proxmox.mydomain.com coro1-proxmox-c1-n2
10.55.2.1 coro1-proxmox-c1-n1.proxmox.mydomain.com coro1-proxmox-c1-n1
10.55.2.3 coro1-proxmox-c1-n3.proxmox.mydomain.com coro1-proxmox-c1-n3

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

/etc/hosts on node3:
Code:
127.0.0.1 localhost.localdomain localhost

172.30.8.43 proxmox-c1-n3.proxmox.mydomain.com proxmox-c1-n3 pvelocalhost
172.30.8.41 proxmox-c1-n1.proxmox.mydomain.com proxmox-c1-n1
172.30.8.42 proxmox-c1-n2.proxmox.mydomain.com proxmox-c1-n2

10.55.1.1 coro0-proxmox-c1-n1.proxmox.mydomain.com coro0-proxmox-c1-n1
10.55.1.2 coro0-proxmox-c1-n2.proxmox.mydomain.com coro0-proxmox-c1-n2
10.55.1.3 coro0-proxmox-c1-n3.proxmox.mydomain.com coro0-proxmox-c1-n3

10.55.2.1 coro1-proxmox-c1-n1.proxmox.mydomain.com coro1-proxmox-c1-n1
10.55.2.2 coro1-proxmox-c1-n2.proxmox.mydomain.com coro1-proxmox-c1-n2
10.55.2.3 coro1-proxmox-c1-n3.proxmox.mydomain.com coro1-proxmox-c1-n3

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
(the "coro1"-entries are not yet in use. I plan to add a second corosync ring after the first one workes as expected)


did you increment the 'config_version' after updating everything?
Yes.

Did you modify /etc/pve/corosync.conf or /etc/corosync/corosync.conf? Please post both of those as well.
Yes, I have modified /etc/pve/corosync.conf, and I've already posted it in my first post.
 
Yes, I have modified /etc/pve/corosync.conf, and I've already posted it in my first post.
After copying the new corosync.conf to /etc/corosync/corosync.conf I'm getting the following error:

What about the local version under "/etc/corosync/corosync.conf"? Please copy both current ones here, because the corosync error does not make any sense with the config you posted...
 
The local version of /etc/corosync/corosync.conf (on all 3 nodes) and /etc/pve/corosync.conf all look exactly the same; I never changed anything manually in the local config files under /etc/corosync/.
I just moved/copied the modified version of corosync.conf to /etc/pve/corosync.conf (as described in https://pve.proxmox.com/wiki/Separate_Cluster_Network#Configure_corosync).
Right after moving the new config file to /etc/pve/corosync.conf with
Code:
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
the error I mentioned in my first post is logged:
Code:
Mar 20 10:23:39 pbx-c1-n2 pmxcfs[3071]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 19)
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]: notice  [CFG   ] Config reload requested by node 1
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]:  [CFG   ] Config reload requested by node 1
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]: crit    [VOTEQ ] configuration error: nodelist or quorum.expected_votes must be configured!
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]: crit    [VOTEQ ] will continue with current runtime data
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]:  [VOTEQ ] configuration error: nodelist or quorum.expected_votes must be configured!
Mar 20 10:23:39 pbx-c1-n2 corosync[3297]:  [VOTEQ ] will continue with current runtime data
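
To check what corosync and the cluster are actually running with after that reload attempt, I assume something like this is the right approach (just a sketch):

Code:
pvecm status
corosync-quorumtool -s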

The content of /etc/corosync/corosync.conf (on all 3 nodes) and of /etc/pve/corosync.conf looks like this:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: proxmox-c1-n1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n1
  }
  node {
    name: proxmox-c1-n2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n2
  }
  node {
    name: proxmox-c1-n3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: coro0-proxmox-c1-n3
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: proxmox-c1
  config_version: 19
  interface {
    bindnetaddr: 10.55.1.0
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

Code:
root@proxmox-c1-n1:~# diff /etc/pve/corosync.conf /etc/corosync/corosync.conf
root@proxmox-c1-n1:~#
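
If it helps, the runtime nodelist (i.e. what corosync is still actually using) can presumably be dumped with something like:

Code:
corosync-cmapctl | grep nodelist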
 
Can you post the output of 'ip a' from all nodes?
 
Sure:

Node 1:
Code:
root@proxmox-c1-n1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
4: enp101s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:2e:f6:d6 brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.1/24 brd 10.55.1.255 scope global enp101s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe2e:f6d6/64 scope link
       valid_lft forever preferred_lft forever
5: enp101s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:2e:f6:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.55.2.1/24 brd 10.55.2.255 scope global enp101s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe2e:f6d7/64 scope link
       valid_lft forever preferred_lft forever
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
7: bond0.4001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
    inet 172.30.8.41/24 brd 172.30.8.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0a:c366/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.550@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
    inet 10.55.0.1/24 brd 10.55.0.255 scope global bond0.550
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0a:c366/64 scope link
       valid_lft forever preferred_lft forever
10: bond0.551@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0a:c3:66 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ae1f:6bff:fe0a:c366/64 scope link
       valid_lft forever preferred_lft forever

Node 2:
Code:
root@proxmox-c1-n2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
4: enp101s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:78:7a:ce brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.2/24 brd 10.55.1.255 scope global enp101s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe78:7ace/64 scope link
       valid_lft forever preferred_lft forever
5: enp101s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:78:7a:cf brd ff:ff:ff:ff:ff:ff
    inet 10.55.2.2/24 brd 10.55.2.255 scope global enp101s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe78:7acf/64 scope link
       valid_lft forever preferred_lft forever
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
7: bond0.4001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
    inet 172.30.8.42/24 brd 172.30.8.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0c:230/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.550@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
    inet 10.55.0.2/24 brd 10.55.0.255 scope global bond0.550
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0c:230/64 scope link
       valid_lft forever preferred_lft forever
10: bond0.551@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:30 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ae1f:6bff:fe0c:230/64 scope link
       valid_lft forever preferred_lft forever

Node3:
Code:
root@proxmox-c1-n3:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
4: enp101s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:78:77:28 brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.3/24 brd 10.55.1.255 scope global enp101s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe78:7728/64 scope link
       valid_lft forever preferred_lft forever
5: enp101s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:15:17:78:77:29 brd ff:ff:ff:ff:ff:ff
    inet 10.55.2.3/24 brd 10.55.2.255 scope global enp101s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe78:7729/64 scope link
       valid_lft forever preferred_lft forever
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
7: bond0.4001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
    inet 172.30.8.43/24 brd 172.30.8.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0c:27a/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.550@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
    inet 10.55.0.3/24 brd 10.55.0.255 scope global bond0.550
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe0c:27a/64 scope link
       valid_lft forever preferred_lft forever
10: bond0.551@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:0c:02:7a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ae1f:6bff:fe0c:27a/64 scope link
       valid_lft forever preferred_lft forever

As you can see, we're using bonding and VLANs on some interfaces, but not on the corosync interfaces enp101s0f0 and enp101s0f1.
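
To double-check that corosync really binds to the 10.55.1.x addresses on enp101s0f0, I guess listing its UDP sockets would work, roughly:

Code:
ss -ulpn | grep corosync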
 
