Proxmox 6.2-4 cluster dies! Nodes auto-reboot! Need help!

lazypaul

My cluster has 33 nodes with Ceph. The cluster will reboot randomly, triggered by any of the operations below:
1. systemctl restart corosync
2. adding a new node to the cluster
3. rebooting one of the nodes

How do I stop the servers from rebooting automatically? This is a production environment, and I really have no idea what to do.

What I have done so far:
1. Removed all HA groups, since I heard HA can reboot servers (the HA services are still running).
2. Set the corosync token timeout to 10000 ms (see the sketch below); it seems to make no difference.
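
For reference, this is roughly the edit I made (a minimal sketch; the values are the ones mentioned above, and config_version has to be increased as well so the new file gets pushed to every node):

totem {
    cluster_name: AW-G8-KVM
    # bump config_version, otherwise the change is not distributed to the other nodes
    config_version: 41
    # token timeout in milliseconds
    token: 10000
    # ...the other settings stay unchanged...
}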


All I want is no more reboots. I can live without HA, but the servers must not auto-reboot!


My servers: HP DL380 Gen8 and Gen9, and IBM x3650 M4, all with 10G SFP+ networking.
Ceph: 269 OSDs, over 100 TB of data.
Switch: Cisco 4506; every server is connected with a single trunk port.


I also have a VMware vSAN cluster on the same switches, also with a single 10G SFP+ network and the same server models. The network is stable, I am sure!


pvecm status

root@g8kvm04:~# pvecm status
Cluster information
-------------------
Name: AW-G8-KVM
Config Version: 40
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Wed Aug 19 21:42:12 2020
Quorum provider: corosync_votequorum
Nodes: 33
Node ID: 0x00000002
Ring ID: 1.1f38
Quorate: Yes

Votequorum information
----------------------
Expected votes: 33
Highest expected: 33
Total votes: 33
Quorum: 17
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.0.141.1
0x00000002 1 10.0.141.2 (local)
0x00000003 1 10.0.141.5
0x00000004 1 10.0.141.6
0x00000005 1 10.0.141.3
0x00000006 1 10.0.141.4
0x00000007 1 10.0.141.7
0x00000008 1 10.0.141.21
0x00000009 1 10.0.141.22
0x0000000a 1 10.0.141.23
0x0000000b 1 10.0.141.24
0x0000000c 1 10.0.141.8
0x0000000d 1 10.0.141.25
0x0000000e 1 10.0.141.26
0x0000000f 1 10.0.141.31
0x00000010 1 10.0.141.9
0x00000011 1 10.0.141.10
0x00000012 1 10.0.141.27
0x00000013 1 10.0.141.28
0x00000014 1 10.0.141.29
0x00000015 1 10.0.141.16
0x00000016 1 10.0.141.18
0x00000017 1 10.0.141.20
0x00000018 1 10.0.141.17
0x00000019 1 10.0.141.19
0x0000001a 1 10.0.141.15
0x0000001b 1 10.0.141.14
0x0000001c 1 10.0.141.32
0x0000001d 1 10.0.141.13
0x0000001e 1 10.0.141.30
0x0000001f 1 10.0.141.33
0x00000020 1 10.0.141.11
0x00000021 1 10.0.141.12

 
journalctl -b -u corosync -u pve-cluster

root@g8kvm01:~# journalctl -b -u corosync -u pve-cluster
-- Logs begin at Wed 2020-08-19 21:02:10 CST, end at Thu 2020-08-20 10:16:28 CST. --
Aug 19 21:02:16 g8kvm01 systemd[1]: Starting The Proxmox VE cluster filesystem...
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [quorum] crit: quorum_initialize failed: 2
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [quorum] crit: can't initialize service
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [confdb] crit: cmap_initialize failed: 2
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [confdb] crit: can't initialize service
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [dcdb] crit: cpg_initialize failed: 2
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [dcdb] crit: can't initialize service
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [status] crit: cpg_initialize failed: 2
Aug 19 21:02:16 g8kvm01 pmxcfs[1799]: [status] crit: can't initialize service
Aug 19 21:02:17 g8kvm01 systemd[1]: Started The Proxmox VE cluster filesystem.
Aug 19 21:02:17 g8kvm01 systemd[1]: Starting Corosync Cluster Engine...
Aug 19 21:02:18 g8kvm01 corosync[1815]: [MAIN ] Corosync Cluster Engine 3.0.3 starting up
Aug 19 21:02:18 g8kvm01 corosync[1815]: [MAIN ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf snmp pie relro bindnow
Aug 19 21:02:18 g8kvm01 corosync[1815]: [TOTEM ] Initializing transport (Kronosnet).
Aug 19 21:02:18 g8kvm01 corosync[1815]: [TOTEM ] kronosnet crypto initialized: aes256/sha256
Aug 19 21:02:18 g8kvm01 corosync[1815]: [TOTEM ] totemknet initialized
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync configuration map access [0]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QB ] server name: cmap
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync configuration service [1]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QB ] server name: cfg
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QB ] server name: cpg
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync profile loading service [4]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync resource monitoring service [6]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [WD ] Watchdog not enabled by configuration
Aug 19 21:02:18 g8kvm01 corosync[1815]: [WD ] resource load_15min missing a recovery key.
Aug 19 21:02:18 g8kvm01 corosync[1815]: [WD ] resource memory_used missing a recovery key.
Aug 19 21:02:18 g8kvm01 corosync[1815]: [WD ] no resources configured.
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync watchdog service [7]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QUORUM] Using quorum provider corosync_votequorum
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QB ] server name: votequorum
Aug 19 21:02:18 g8kvm01 corosync[1815]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Aug 19 21:02:18 g8kvm01 corosync[1815]: [QB ] server name: quorum
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 0)
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 1 has no active links
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 0)
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 has no active links
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 1)
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 has no active links
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 1)
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 6 has no active links
Aug 19 21:02:18 g8kvm01 corosync[1815]: [TOTEM ] A new membership (1.1f09) was formed. Members joined: 1
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 32 (passive) best link: 0 (pri: 0)
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 32 has no active links
Aug 19 21:02:18 g8kvm01 corosync[1815]: [KNET ] host: host: 32 (passive) best link: 0 (pri: 1)






Reboot times:

root@g8kvm01:~# last | grep -i boot
reboot system boot 5.4.34-1-pve Wed Aug 19 21:02 still running
reboot system boot 5.4.34-1-pve Wed Aug 19 20:27 - 20:58 (00:31)
reboot system boot 5.4.34-1-pve Wed Aug 19 17:45 - 20:58 (03:13)
reboot system boot 5.4.34-1-pve Tue Aug 18 12:37 - 20:58 (1+08:21)
reboot system boot 5.4.34-1-pve Tue Aug 18 12:10 - 20:58 (1+08:48)
reboot system boot 5.4.34-1-pve Tue Aug 18 11:25 - 20:58 (1+09:33)
reboot system boot 5.4.34-1-pve Tue Aug 18 10:30 - 11:22 (00:51)
reboot system boot 5.4.34-1-pve Tue Aug 18 09:56 - 10:27 (00:31)
reboot system boot 5.4.34-1-pve Tue Aug 18 00:32 - 10:27 (09:54)
reboot system boot 5.4.34-1-pve Mon Aug 17 19:20 - 10:27 (15:07)
reboot system boot 5.4.34-1-pve Fri Aug 7 14:30 - 10:27 (10+19:56)
reboot system boot 5.4.34-1-pve Thu Aug 6 13:49 - 14:27 (1+00:38)
reboot system boot 5.4.34-1-pve Sun Jul 26 14:53 - 13:46 (10+22:52)
reboot system boot 5.4.34-1-pve Wed Jun 24 23:53 - 13:46 (42+13:52)
reboot system boot 5.4.34-1-pve Wed Jun 24 22:58 - 23:50 (00:51)
reboot system boot 5.4.34-1-pve Thu Jun 25 05:43 - 22:55 (-6:48)
root@g8kvm01:~#
 
pveversion -v
root@g8kvm01:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.10-pve1
ceph-fuse: 14.2.10-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

 
Hi, I hit this bug once too, but with an older version of corosync 3, after rebooting one node. (I don't use HA, so no reboots, but I had to stop corosync everywhere and then start it again node by node.)
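
Roughly, as a sketch of what I mean (run as root):

# first, on every node:
systemctl stop corosync

# then, one node at a time, waiting for each one to rejoin the membership:
systemctl start corosync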

33 nodes is already a lot. (I have never tested more than 20 nodes, and that was with some corosync tuning.)


Are corosync and libknet at the same version on all nodes?

Proxmox provides libknet 1.16 with some new fixes.
Maybe you can try upgrading libknet?

(You need to restart corosync manually after the libknet upgrade; it is not done automatically. A sketch is below.)
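
Something like this on each node should show whether the versions match and pull in the newer libknet (just a sketch; the package name is the one shown in your pveversion output):

# check the installed versions on every node
pveversion -v | grep -E 'corosync|libknet1'

# upgrade libknet and restart corosync so the new library is actually used
apt update
apt install libknet1
systemctl restart corosync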

Can you share your /etc/pve/corosync.conf?


About the reboots: HA should not reboot a host if you don't have any HA-managed VMs.

But to be sure, you can stop the HA services (see the loop sketch below):

systemctl stop pve-ha-lrm    on all nodes

then

systemctl stop pve-ha-crm    on all nodes
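
If you don't want to log in to each node by hand, a small loop does the same thing (only a sketch; the g8kvm01..g8kvm33 hostnames are taken from your corosync.conf, and it assumes root SSH between the nodes):

# stop the LRM on every node first, then the CRM
for n in g8kvm{01..33}; do ssh root@$n "systemctl stop pve-ha-lrm"; done
for n in g8kvm{01..33}; do ssh root@$n "systemctl stop pve-ha-crm"; done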
 
Yes, corosync and libknet are at the same version on all nodes; they were all installed from the same ISO.
Now I have stopped the HA services.

You mean libknet can make the network more stable? Once my cluster stops rebooting, it will grow to over 100 nodes; I am going to move all my servers to Proxmox to replace the VMware vSAN clusters.


I just changed the token timeout to 20000 ms.


root@g8kvm01:~# cat /etc/pve/corosync.conf
logging {
debug: off
to_syslog: yes
}

nodelist {
node {
name: g8kvm01
nodeid: 1
quorum_votes: 1
ring0_addr: 10.0.141.1
}
node {
name: g8kvm02
nodeid: 6
quorum_votes: 1
ring0_addr: 10.0.141.4
}
node {
name: g8kvm03
nodeid: 5
quorum_votes: 1
ring0_addr: 10.0.141.3
}
node {
name: g8kvm04
nodeid: 2
quorum_votes: 1
ring0_addr: 10.0.141.2
}
node {
name: g8kvm05
nodeid: 3
quorum_votes: 1
ring0_addr: 10.0.141.5
}
node {
name: g8kvm06
nodeid: 4
quorum_votes: 1
ring0_addr: 10.0.141.6
}
node {
name: g8kvm07
nodeid: 7
quorum_votes: 1
ring0_addr: 10.0.141.7
}
node {
name: g8kvm08
nodeid: 12
quorum_votes: 1
ring0_addr: 10.0.141.8
}
node {
name: g8kvm09
nodeid: 16
quorum_votes: 1
ring0_addr: 10.0.141.9
}
node {
name: g8kvm10
nodeid: 17
quorum_votes: 1
ring0_addr: 10.0.141.10
}
node {
name: g8kvm11
nodeid: 32
quorum_votes: 1
ring0_addr: 10.0.141.11
}
node {
name: g8kvm12
nodeid: 33
quorum_votes: 1
ring0_addr: 10.0.141.12
}
node {
name: g8kvm13
nodeid: 29
quorum_votes: 1
ring0_addr: 10.0.141.13
}
node {
name: g8kvm14
nodeid: 27
quorum_votes: 1
ring0_addr: 10.0.141.14
}
node {
name: g8kvm15
nodeid: 26
quorum_votes: 1
ring0_addr: 10.0.141.15
}
node {
name: g8kvm16
nodeid: 21
quorum_votes: 1
ring0_addr: 10.0.141.16
}
node {
name: g8kvm17
nodeid: 24
quorum_votes: 1
ring0_addr: 10.0.141.17
}
node {
name: g8kvm18
nodeid: 22
quorum_votes: 1
ring0_addr: 10.0.141.18
}
node {
name: g8kvm19
nodeid: 25
quorum_votes: 1
ring0_addr: 10.0.141.19
}
node {
name: g8kvm20
nodeid: 23
quorum_votes: 1
ring0_addr: 10.0.141.20
}
node {
name: g8kvm21
nodeid: 8
quorum_votes: 1
ring0_addr: 10.0.141.21
}
node {
name: g8kvm22
nodeid: 9
quorum_votes: 1
ring0_addr: 10.0.141.22
}
node {
name: g8kvm23
nodeid: 10
quorum_votes: 1
ring0_addr: 10.0.141.23
}
node {
name: g8kvm24
nodeid: 11
quorum_votes: 1
ring0_addr: 10.0.141.24
}
node {
name: g8kvm25
nodeid: 13
quorum_votes: 1
ring0_addr: 10.0.141.25
}
node {
name: g8kvm26
nodeid: 14
quorum_votes: 1
ring0_addr: 10.0.141.26
}
node {
name: g8kvm27
nodeid: 18
quorum_votes: 1
ring0_addr: 10.0.141.27
}
node {
name: g8kvm28
nodeid: 19
quorum_votes: 1
ring0_addr: 10.0.141.28
}
node {
name: g8kvm29
nodeid: 20
quorum_votes: 1
ring0_addr: 10.0.141.29
}
node {
name: g8kvm30
nodeid: 30
quorum_votes: 1
ring0_addr: 10.0.141.30
}
node {
name: g8kvm31
nodeid: 15
quorum_votes: 1
ring0_addr: 10.0.141.31
}
node {
name: g8kvm32
nodeid: 28
quorum_votes: 1
ring0_addr: 10.0.141.32
}
node {
name: g8kvm33
nodeid: 31
quorum_votes: 1
ring0_addr: 10.0.141.33
}
}

quorum {
provider: corosync_votequorum
}

totem {
cluster_name: AW-G8-KVM
config_version: 40
interface {
linknumber: 0
}
ip_version: ipv4-6
link_mode: passive
secauth: on
version: 2
token: 20000
}

 
Once my cluster stops rebooting, it will grow to over 100 nodes
I'm not sure corosync can handle that many nodes in one cluster. (You need switches with low latency and probably some token tuning, but I'm not an expert at corosync tuning; maybe ask the Proxmox devs or open a corosync GitHub issue for tuning advice.)
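
For what it's worth, if I read corosync.conf(5) correctly, corosync already scales the effective token timeout with the node count: when a nodelist is present, the runtime timeout is

    token + (number_of_nodes - 2) * token_coefficient

with token_coefficient defaulting to 650 ms. So with 33 nodes and token: 20000, the cluster should already be running with roughly 20000 + 31 * 650 = 40150 ms.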

Personally, I only enable HA one or two months after building or extending a cluster. (Check the corosync logs to make sure you don't have retransmits.)
But I run multiple clusters of around 20 nodes each.

If your cluster is really stable and the problem only happens when a node reboots, then yes, maybe it's a corosync or libknet bug. (libknet is the new transport library used by corosync.)

There is a new corosync 3.0.4 release, not yet available in Proxmox; I see some quorum fixes in it, but I haven't seen any bug report about this reboot bug.

BTW, do you use bonding for your physical NICs? With only one corosync link, any switch failure will break the whole cluster (and reboot the nodes if HA is enabled).
 



I don't expect anything more; as long as the cluster stops rebooting, I am happy.


Is corosync what handles the Proxmox cluster data sync? And does corosync have nothing to do with HA?
So if I stop HA, will that solve my reboot problem?


Actually, I have two clusters; the other one has 18 nodes with newer HP DL380 Gen9 servers, but on the older version 6.1-7.

Cluster 1: 33 nodes now, HP DL380 Gen8 and IBM x3650 M4; Proxmox is installed on a USB stick because in HBA mode the servers cannot boot from one of the hard disks.
Cluster 2: 18 nodes now, HP DL380 Gen9; installed on sda.


They all have only one 10G SFP+ network, the same as my two VMware vSAN clusters: vsan01 has 18 nodes, vsan02 has 27 nodes.


vSAN is fine with one network, on the same switches, stable for over 900 days. But Proxmox is not; it reboots randomly.

For now I have removed the HA groups and resources, and will see whether it reboots again next week.
 
Is corosync what handles the Proxmox cluster data sync? And does corosync have nothing to do with HA?
So if I stop HA, will that solve my reboot problem?
corosync manages the quorum and the replication of /etc/pve.
HA uses corosync to know whether quorum is OK. HA reboots a node if that node loses quorum (i.e. the node sees fewer than 33/2 + 1 nodes).
So yes, you can disable HA, no problem.
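
For example, with your 33 nodes the quorum is floor(33/2) + 1 = 17 votes, which matches the "Quorum: 17" line in your pvecm status output above; a node that can see fewer than 17 members (itself included) is out of quorum, and with HA enabled its watchdog will fence (reboot) it.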


Cluster 1: 33 nodes now, HP DL380 Gen8 and IBM x3650 M4; Proxmox is installed on a USB stick because in HBA mode the servers cannot boot from one of the hard disks.
I really can't recommend installing Proxmox on a USB stick, because it is not stateless like VMware. (There are a lot of small writes for VM stats, node stats, graphs, corosync, ...) The last time I tried, I burned the sticks out after 2-3 months, but those were cheap USB keys. (Maybe some kind of SSD-grade USB key with good endurance exists? I really don't know, but be careful.)
Also, for your Ceph, don't put a monitor on a USB key. (You should have some kind of RAID 1 for the Ceph monitors; they are really painful to reinstall.)

vSAN is fine with one network, on the same switches, stable for over 900 days. But Proxmox is not; it reboots randomly.
Yes, I don't think it's a network problem. (But please don't enable HA if you have only one switch.)
 


Thank you very much!


The reason I am using a USB key is that in HBA mode the HP DL380 Gen8 cannot boot the system from one of the hard disks.
I am using ext3 and set swap to 0, so I think that should reduce the writes.


In my cluster I have set up 5 Ceph mons; I was thinking that with bad luck, if 2 nodes die, my data would be lost.
 
Hi, kindly have a look at my messages, since you offered help and I did not want to test what I have set up and have it all break down again... My forum post was: Come & Help Please

And sorry to the OP for intervening in your thread. :D
 
The reason I am using a USB key is that in HBA mode the HP DL380 Gen8 cannot boot the system from one of the hard disks.
Maybe this kind of USB key with MLC memory is OK:
https://www.kiwi-electronics.nl/32gb-transcend-jetflash-780-usb-30-flash-drive-mlc-210mbs?lang=en
https://www.transcend-info.com/Embedded/Products/No-1150


I am using ext3 and set swap to 0, so I think that should reduce the writes.
Yes, you really don't want to swap on it ;) But even without swap, Proxmox continuously writes VM stats... (It's not much data, but it is a lot of small writes, and you can't avoid that.)

In my cluster I have set up 5 Ceph mons; I was thinking that with bad luck, if 2 nodes die, my data would be lost.
With 5 mons, you can lose 2 without any impact.
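(The Ceph monitors form a quorum too: with 5 mons a majority of floor(5/2) + 1 = 3 must stay up, so up to 2 can fail and the cluster keeps running; with only 3 mons you could lose just 1.)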
 




I'm using common SanDisk 32 GB USB 3.0 keys :oops:. I want to be prepared to add/remove nodes from the cluster, and, in case a key dies, to save its data and build a backup cluster to back up the data.

Is it necessary to back up Proxmox itself, i.e. data/config/database etc.? And does the Ceph config need to be backed up?
 
