I am facing this issue on multiple nodes.
The node shows grey in the cluster view, but SSH works and *the pveproxy service is not active*.
When I start pveproxy I get the message:
"start failed - can't acquire lock '/var/run/pveproxy/pveproxy.pid.lock' - Resource temporarily unavailable"
The only solution is to...
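For context, "Resource temporarily unavailable" is just the EAGAIN error you get when a non-blocking lock attempt fails because another process (often a stale pveproxy worker) still holds the lock on the pid file. A minimal Python sketch of that mechanism, using a throwaway temp file rather than the real pveproxy path:

```python
import errno
import fcntl
import tempfile

# Two separate opens of the same file give two open-file descriptions,
# so their flock() locks conflict with each other - just as a stale
# pveproxy process conflicts with a newly started one on pveproxy.pid.lock.
lockfile = tempfile.NamedTemporaryFile(delete=False)

holder = open(lockfile.name, "w")  # first "process" takes the lock
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

contender = open(lockfile.name, "w")  # second "process" tries and fails
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError as exc:
    # On Linux, strerror(EAGAIN) is "Resource temporarily unavailable"
    print(exc.strerror)
```

So the message itself only says the lock is held; finding *which* process holds it (and whether it is a hung worker) is the actual troubleshooting step.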
I have all my nodes connected to the same Cisco switch on 1000 Mbps ports.
Each node has 10 guests (both LXC and KVM).
Corosync reportedly has a hardcoded limit of 32 nodes max per cluster.
What would be a practical limit in my case?
Will 10 nodes work fine in one cluster?
1. I have a cluster with 2 nodes. Each node has 5 CTs each. Proxmox is version 5.1.46.
2. Each node has two drives with ZFS: the first one holds Proxmox itself (local and local-zfs), and the other is attached as a data pool holding the CTs.
3. I take backup of /etc/pve/lxc/ folder from both nodes to another...
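Step 3 above can be sketched as a small script. This is only an illustration of copying the CT config files from `/etc/pve/lxc/` into a timestamped backup folder; the destination path and function name are my own, not anything Proxmox ships:

```python
import shutil
import time
from pathlib import Path

def backup_ct_configs(src: str, dest_root: str) -> Path:
    """Copy every CT config (*.conf) from src into a timestamped
    folder under dest_root and return that folder's path."""
    dest = Path(dest_root) / time.strftime("lxc-%Y%m%d-%H%M%S")
    dest.mkdir(parents=True, exist_ok=True)
    for conf in sorted(Path(src).glob("*.conf")):
        shutil.copy2(conf, dest / conf.name)  # copy2 keeps mtimes/permissions
    return dest
```

On a node this might be invoked as `backup_ct_configs("/etc/pve/lxc", "/mnt/backup/node1")`, run once per node. Note this only saves the container *configs*; the container data on the ZFS pool needs its own backup (vzdump, zfs send, etc.).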
We have a very serious issue with a Proxmox 5.1.46 LXC cluster on ZFS and we need urgent help.
When somebody stops an LXC container and restarts it, it will not restart but gives the following error.
Job for pve-container@<CTID>.service failed because the control process exited with error code...