Recently we faced some critical issues with our Proxmox cluster. The cluster is set up with 3 nodes.
*) Migrating VMs between the nodes took a long time, and sometimes the Proxmox GUI went down during the migration. Also, VMs stay locked even after the migration...
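For the stuck lock, a VM left locked by a finished (or failed) migration can usually be unlocked by hand. A minimal sketch, assuming the affected VM has ID 100 (substitute your own VMID):

```shell
# Show which lock is currently set on the VM (hypothetical VMID 100)
qm config 100 | grep lock

# Clear the stale lock so the VM can be managed again
qm unlock 100
```

Only do this after confirming the migration task has actually ended, otherwise you risk two nodes acting on the same disk.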
Hi, on one of the Ceph cluster nodes a message appeared: "1 osds down". It appears and then disappears; the status constantly flaps between down and up. What can be done about it?
The SMART status of the disk shows OK.
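A flapping OSD with a healthy disk often points at the daemon crashing or at network/heartbeat trouble rather than the drive itself. A diagnostic sketch, assuming the flapping OSD turns out to be osd.3 (check `ceph osd tree` for the real ID):

```shell
# Find which OSD is flapping and which host it lives on
ceph osd tree

# On that host: has the daemon been crashing and restarting?
systemctl status ceph-osd@3
journalctl -u ceph-osd@3 --since "1 hour ago"

# Watch the cluster log live for heartbeat failures or
# "wrongly marked me down" messages pointing at network issues
ceph -w
```

If the log shows heartbeat timeouts between OSDs, check the cluster network (MTU, packet loss) before touching the disk.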
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
I do not know why, but the container is not starting. There was no problem last night; I woke up this morning and saw the container was down.
Job for email@example.com failed because the control process exited with error code.
See "systemctl status...
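To get past the truncated systemd message, you can pull the full error from the journal and start the container in the foreground with debug logging. A sketch, assuming the container ID is 101 (substitute yours):

```shell
# Full systemd error for the container unit (hypothetical CT ID 101)
systemctl status pve-container@101.service
journalctl -xe -u pve-container@101.service

# Start the container in the foreground with LXC debug logging
# to see exactly where startup fails
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
```

The debug log usually names the failing step (mount, network, init), which narrows the problem down considerably.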
I have a cluster with 3 PCs, each having 3 OSDs.
It works great so far, but over time (after a few hours or days) the OSDs start to go down.
The cluster has been in use for about 4 weeks, and I lose roughly one OSD per day.
Once an OSD is down, it cannot be restarted via Ceph commands.
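On current Proxmox hosts the OSD daemons are managed as systemd units, so restarting them goes through systemd rather than the `ceph` CLI. A sketch, assuming the dead daemon is osd.2 (substitute the real ID from `ceph osd tree`):

```shell
# Restart the OSD daemon through systemd (hypothetical OSD id 2)
systemctl restart ceph-osd@2

# If it dies again, inspect the logs for the actual crash reason
journalctl -u ceph-osd@2 -n 100
tail -n 100 /var/log/ceph/ceph-osd.2.log
```

Losing one OSD per day across the cluster suggests a common cause (RAM pressure, a flaky controller, or network heartbeat timeouts), so the crash reason in the log is the key piece of information.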
Hello. I have had a weird issue with my Proxmox VE 4.4 installation. This issue has happened twice and is very annoying.
All of my VMs seem to STOP, and they are not turned back ON until I do so manually. The log says: "Start all VM and containers", which outputs: "Status: Stopped: OK".
We have 2 nodes in a cluster. After running an upgrade we received this message:
Setting up pve-manager (4.4-5) ...
Job for pvedaemon.service failed. See 'systemctl status pvedaemon.service' and 'journalctl -xn' for details.
dpkg: error processing package pve-manager (--configure):
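The dpkg error here is secondary: the package configure step failed because pvedaemon would not restart, so the daemon failure has to be fixed first and the package configuration re-run afterwards. A sketch of that order of operations:

```shell
# First find out why pvedaemon failed to (re)start
systemctl status pvedaemon.service
journalctl -xn -u pvedaemon.service

# Once the underlying problem is fixed, finish the interrupted
# package configuration and repair any remaining dependencies
dpkg --configure -a
apt-get -f install
```

Until `dpkg --configure -a` completes cleanly, pve-manager stays half-configured and further upgrades will keep failing.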
On my Proxmox cluster, every node was red in the web UI, as if something had gone wrong with multicast; all VMs were fine, though.
I recovered the cluster like this on each server:
killall -9 corosync
systemctl restart pve-cluster
systemctl restart pvedaemon
systemctl restart pvestatd
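After restarting those services, it is worth verifying that quorum is back and that multicast really works between the nodes, since corosync relies on it. A verification sketch (the node IPs below are hypothetical placeholders):

```shell
# Confirm the cluster has quorum again and all daemons are healthy
pvecm status
systemctl status corosync pve-cluster pvedaemon pvestatd

# Test multicast between all nodes for ~10 minutes; run on every node
# in parallel (replace with your actual node IPs)
omping -c 600 -i 1 -q 192.168.1.10 192.168.1.11 192.168.1.12
```

Sustained multicast packet loss in the omping output would explain the red nodes and means the fix belongs in the network (switch IGMP snooping/querier settings), not in restarting services.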