Node fencing

ddmrtc

New Member
Oct 4, 2017
Hello,

I have a question: where can I find logs explaining why a node has been fenced?

We have an issue:
Code:
Oct  4 14:28:01 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:05 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:13 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:17 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:21 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:26 ti-pve-131 pmxcfs[5718]: [status] notice: received log
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [dcdb] notice: members: 1/1908, 3/1891, 4/1904, 5/1836, 6/1818, 7/1818, 8/1808, 9/1763, 10/1816, 11/7567, 12/1814, 13/5718, 14/5718, 15/5766, 16/7239, 17/2947, 18/1768, 19/10719
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [dcdb] notice: starting data syncronisation
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [status] notice: members: 1/1908, 3/1891, 4/1904, 5/1836, 6/1818, 7/1818, 8/1808, 9/1763, 10/1816, 11/7567, 12/1814, 13/5718, 14/5718, 15/5766, 16/7239, 17/2947, 18/1768, 19/10719
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [status] notice: starting data syncronisation
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [dcdb] notice: received sync request (epoch 1/1908/00000032)
Oct  4 14:28:28 ti-pve-131 pmxcfs[5718]: [status] notice: received sync request (epoch 1/1908/00000032)
Oct  4 14:28:28 ti-pve-131 kernel: [4943111.248089] cfs_loop[5720]: segfault at 7fe94094fccc ip 000000000041add0 sp 00007fe93cbac428 error 4 in pmxcfs[400000+29000]
Oct  4 14:28:29 ti-pve-131 systemd[1]: pve-cluster.service: main process exited, code=killed, status=11/SEGV
Oct  4 14:28:29 ti-pve-131 systemd[1]: Unit pve-cluster.service entered failed state.
Oct  4 14:28:29 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Transport endpoint is not connected
Oct  4 14:28:29 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:29 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:30 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:30 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:30 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:30 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:31 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Transport endpoint is not connected
Oct  4 14:28:31 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:31 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:31 ti-pve-131 pve-ha-lrm[5931]: lost lock 'ha_agent_ti-pve-131_lock - can't create '/etc/pve/priv/lock' (pmxcfs not mounted?)
Oct  4 14:28:32 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Transport endpoint is not connected
Oct  4 14:28:33 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pve-ha-crm[5899]: status change slave => wait_for_quorum
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Transport endpoint is not connected
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:33 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:36 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:36 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:36 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:36 ti-pve-131 pve-ha-lrm[5931]: status change active => lost_agent_lock
Oct  4 14:28:37 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:37 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:37 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:37 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:38 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:38 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:38 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:39 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:39 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:39 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:40 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:40 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:40 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:40 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:41 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:41 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:41 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:42 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:42 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:42 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:43 ti-pve-131 pveproxy[18315]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:45 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Transport endpoint is not connected
Oct  4 14:28:45 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:45 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:46 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:48 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:49 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:49 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:49 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:49 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:51 ti-pve-131 pve-ha-lrm[5931]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:52 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:52 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:52 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:52 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pve-ha-crm[5899]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:53 ti-pve-131 pvestatd[30264]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:54 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:54 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:54 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:55 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:55 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:55 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
Oct  4 14:28:55 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started udev Coldplug all Devices.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting udev Wait for Complete Device Initialization...
Oct  4 14:32:59 ti-pve-131 systemd-modules-load[735]: Module 'fuse' is builtin
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started Create Static Device Nodes in /dev.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting udev Kernel Device Manager...
Oct  4 14:32:59 ti-pve-131 systemd-modules-load[735]: Inserted module 'vhost_net'
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started Load Kernel Modules.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started LSB: Create aliases for SCSI devices under /dev/scsi.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started udev Kernel Device Manager.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LSB: Set preliminary keymap...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Mounted FUSE Control File System.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started LSB: Tune IDE hard disks.
Oct  4 14:32:59 ti-pve-131 hdparm[764]: Setting parameters of disc: (none).
Oct  4 14:32:59 ti-pve-131 systemd[1]: Started Apply Kernel Variables.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting system-lvm2\x2dpvscan.slice.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:32...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:128...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:112...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:160...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:48...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:16...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:80...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 67:64...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 66:224...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 66:240...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting LVM2 PV scan on device 66:192...
Oct  4 14:32:59 ti-pve-131 kernel: [    0.000000] Initializing cgroup subsys cpuset
Oct  4 14:32:59 ti-pve-131 kernel: [    0.000000] Initializing cgroup subsys cpu

The following is the error log from one node. In fact, all the nodes hit the same error at the same time, and the pmxcfs daemon was killed by a segfault.
Code:
Oct  4 14:28:28 ti-pve-131 kernel: [4943111.248089] cfs_loop[5720]: segfault at 7fe94094fccc ip 000000000041add0 sp 00007fe93cbac428 error 4 in pmxcfs[400000+29000]
Oct  4 14:28:29 ti-pve-131 systemd[1]: pve-cluster.service: main process exited, code=killed, status=11/SEGV
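
To confirm that every node really hit the same crash, each node can be checked with something like the following (a rough sketch; paths assume a default PVE installation):
Code:
# look for pmxcfs segfaults in the kernel messages
journalctl -k | grep -i 'segfault.*pmxcfs'

# or in the classic log files
grep segfault /var/log/kern.log /var/log/syslog | grep pmxcfs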

Then all nodes except one were fenced.
Code:
Oct  4 14:28:55 ti-pve-131 pveproxy[59093]: ipcc_send_rec failed: Connection refused
...
Oct  4 14:32:59 ti-pve-131 systemd[1]: Starting Create Static Device Nodes in /dev...

Where can I find logs with the reason for this? And why is only one node still alive?
 
Hi,

You can see in the syslog that you have "Connection refused" errors;
this indicates a network problem.

Do you have a separate corosync network?
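
You can check which network corosync is using with, for example (a sketch, assuming the standard PVE command-line tools):
Code:
# ring status and the address corosync is bound to on this node
corosync-cfgtool -s

# cluster membership and quorum state
pvecm status

# which addresses the nodes use for the corosync ring
grep ring0_addr /etc/pve/corosync.conf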
 
I don't have a separate corosync network. From my point of view, it isn't a network problem. A network problem usually shows up as "connection timeout" (packets being dropped on the way), while "Connection refused" means the host is reachable but nothing is listening on the socket. So, with pve-cluster.service down on every node, "Connection refused" is the expected behavior.
 
Output of pveversion -v:
Code:
proxmox-ve: 4.4-92 (running kernel: 4.4.67-1-pve)
pve-manager: 4.4-15 (running version: 4.4-15/7599e35a)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.67-1-pve: 4.4.67-92
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-52
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-100
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
 
Thanks, will do, but what about fencing? Where are its logs? Or where can I read about how it works?

I ask because I couldn't find a way to reliably reproduce the node fencing.

And one more issue after that: I ran into split-brain behavior.

One node was not fenced yesterday. I didn't reboot this node; pve-cluster is up on it, and a few virtual machines are still running on this host while the same virtual machines are also running on other nodes in the cluster.
qm list is empty:

Code:
ps -ef | grep 'kvm -id'
root       318     1  5 Sep14 ?        1-05:25:57 /usr/bin/kvm -id 375 -chardev socket,id=qmp,path=/var/run/qemu-server/375.qmp
root       967     1  7 Sep15 ?        1-11:09:15 /usr/bin/kvm -id 381 -chardev socket,id=qmp,path=/var/run/qemu-server/381.qmp
root     11586     1  5 Sep13 ?        1-02:48:50 /usr/bin/kvm -id 356 -chardev socket,id=qmp,path=/var/run/qemu-server/356.qmp
root     11856     1  6 Sep14 ?        1-07:37:27 /usr/bin/kvm -id 379 -chardev socket,id=qmp,path=/var/run/qemu-server/379.qmp
root     26511     1  9 Sep14 ?        2-00:52:04 /usr/bin/kvm -id 374 -chardev socket,id=qmp,path=/var/run/qemu-server/374.qmp
root     26516     1  8 Sep15 ?        1-19:05:30 /usr/bin/kvm -id 376 -chardev socket,id=qmp,path=/var/run/qemu-server/376.qmp
root     32089     1 99 Sep15 ?        35-13:53:03 /usr/bin/kvm -id 239 -chardev socket,id=qmp,path=/var/run/qemu-server/239.qmp
root     35419     1  9 Sep15 ?        1-18:56:02 /usr/bin/kvm -id 251 -chardev socket,id=qmp,path=/var/run/qemu-server/251.qmp
root     36407     1  8 Sep15 ?        1-16:40:55 /usr/bin/kvm -id 306 -chardev socket,id=qmp,path=/var/run/qemu-server/306.qmp
root     37297     1 21 Sep15 ?        4-02:56:11 /usr/bin/kvm -id 317 -chardev socket,id=qmp,path=/var/run/qemu-server/317.qmp
root     48041     1  5 Sep15 ?        1-04:08:49 /usr/bin/kvm -id 387 -chardev socket,id=qmp,path=/var/run/qemu-server/387.qmp
root     49457     1  6 Sep11 ?        1-12:52:06 /usr/bin/kvm -id 250 -chardev socket,id=qmp,path=/var/run/qemu-server/250.qmp
root     61916     1  1 Sep11 ?        09:53:30 /usr/bin/kvm -id 291 -chardev socket,id=qmp,path=/var/run/qemu-server/291.qmp
root     62999     1  5 Sep15 ?        1-03:58:43 /usr/bin/kvm -id 386 -chardev socket,id=qmp,path=/var/run/qemu-server/386.qmp
root     63836     1  7 Sep15 ?        1-10:06:09 /usr/bin/kvm -id 289 -chardev socket,id=qmp,path=/var/run/qemu-server/289.qmp

RAM usage is at 41.33% (104.10 GiB of 251.88 GiB), yet the VM list in the web interface is empty.
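
My assumption is that qm list comes back empty because it only shows VMs whose config file lives under this node's directory in /etc/pve, and the HA manager moved those configs to other nodes when it recovered the VMs there, while the local KVM processes kept running. Something like this should confirm it (a sketch):
Code:
# qm list only shows VMs whose config is assigned to this node
ls /etc/pve/nodes/$(hostname)/qemu-server

# the configs of the still-running VMs should now sit on other nodes
ls /etc/pve/nodes/*/qemu-server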
 
No, I don't. As I understand it, the fenced daemon and its log exist only on the old stable Proxmox VE 3.x releases.
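
On 4.x, fencing seems to be done by the HA stack together with a watchdog instead, so the messages to look for end up in the journal/syslog of the HA services; roughly (a sketch):
Code:
# decisions of the cluster and local HA resource managers
journalctl -u pve-ha-crm -u pve-ha-lrm

# the watchdog multiplexer that resets a node when the LRM
# loses its agent lock and stops updating the watchdog
journalctl -u watchdog-mux

# or grep the classic syslog
grep -E 'pve-ha-(crm|lrm)|watchdog' /var/log/syslog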
 
