Yes, I have figured this out. Since posting here, the system has been running in production non-stop, without any issues or drive failures. I just ordered new SSDs to replace the Samsung 840 Pros with Intel 3700s. This is more a precaution, since the drives have been in full-time operation...
Thanks for your reply. After reading it, I noticed that I posted in the wrong thread, and this one is from 2015.
I meant to post in this thread: https://forum.proxmox.com/threads/node-with-question-mark.41180/
Is it possible to move my message over, or should I just copy-paste it?
I forced the re-installation of proxmox-ve and the 3 separate packages you pointed out. The installation worked without a hiccup, but noVNC is still not working.
I am aware that 4.4 is EOL, but after upgrading my original 4.4 install to 5.3, I experienced the issues described in this...
I have the same issue as OP. Here is my experience; I hope this can be resolved soon, because it's highly frustrating.
I was running Proxmox 4.4 and wanted to update to version 5.3 (single node, no cluster). Doing an 'apt dist-upgrade' (and fixing some issues along the way), I ended up with...
I have reinstalled Proxmox 4.4 (I was on 5.3, a fresh install, but experienced the issues described here: https://forum.proxmox.com/threads/node-with-question-mark.41180/).
Everything is working fine on 4.4, but noVNC does not open properly. Please see attached picture.
I have Proxmox 4.1 (initial install was 4.0) on a pair of SSDs in a ZFS mirror.
I want to take a snapshot of rpool and then send/receive the snapshot for backup purposes.
For reasons unknown (and I am by far no expert), I can create the snapshot (at least it's listed), but...
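For context, what I am trying to do looks roughly like this (the snapshot and target pool names are just examples, not my actual setup):

```shell
# Create a recursive snapshot of the root pool (example snapshot name):
zfs snapshot -r rpool@backup1

# Confirm the snapshot shows up:
zfs list -t snapshot

# Stream it to a second pool for backup (example target pool "backuppool");
# -R sends the whole recursive snapshot tree, -F forces a rollback on the
# receiving side if needed:
zfs send -R rpool@backup1 | zfs receive -F backuppool/rpool-copy
```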
The syslog does not show anything relevant. Is there a way for me to check further, or to change the logging level?
I also have an issue where the server reboots without any notice or warning (maybe a topic for another thread), but changing the logging level to capture more detail might help find the ipcc problem and...
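In case it helps, this is roughly what I have tried so far for getting more log detail. The pmxcfs debug toggle is something I read about and have not verified on every version, so please double-check it:

```shell
# Follow the system log live:
tail -f /var/log/syslog

# On systemd-based installs, the journal for the cluster filesystem:
journalctl -u pve-cluster --since "1 hour ago"

# pmxcfs reportedly logs verbosely when this flag is set
# (unverified on my version):
echo "1" > /etc/pve/.debug

# ...and switches back to normal logging with:
echo "0" > /etc/pve/.debug
```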
I still have the same error as before. Any ideas?
The VM conf:
root@VMNode01:/etc/pve/nodes/VMNode01/qemu-server# cat 100.conf
#Main Server running Windows Small Business 2011 Std.
#Server has AD, DHCP, DNS roles.
I highly doubt that it's a network issue. Everything was working just fine until one VM acted up and I had to hard-STOP it (described above).
After that, these problems started.
The physical network is basic, nothing special. The switch is a Cisco Catalyst 3750. As I said in my first post, it's a...
I added the pve-no-subscription repo to the list and ran an update. There was a long list of updates, so I did a dist-upgrade. Everything installed fine; however, the problem persists.
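Concretely, the steps I ran were roughly these (Proxmox 4.x sits on Debian jessie; the repo line and file name below are my best recollection, so check them against the wiki for your version):

```shell
# Add the no-subscription repository (jessie suite for PVE 4.x):
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh package lists and apply the full upgrade:
apt-get update
apt-get dist-upgrade
```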
Log of the upgrade below; I also ran # systemctl restart pve-cluster:
Dec 9 13:18:08 VMNode01...
I updated from the pve-no-subscription repo approx 2 weeks ago. When was this fixed?
# systemctl restart pve-cluster executed fine; this is the log after I ran it:
root@VMNode01:~# tail -f /var/log/syslog
Dec 9 12:59:15 VMNode01 pvecm: ipcc_send_rec failed: Connection...
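Since ipcc_send_rec talks to the cluster filesystem (pmxcfs), these are the sanity checks I ran to see whether it is actually up (just my own checks, not an official troubleshooting procedure):

```shell
# The pmxcfs FUSE filesystem should be mounted at /etc/pve:
mount | grep /etc/pve

# And the relevant services should be active:
systemctl status pve-cluster pvedaemon pveproxy
```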
I had no option but to restart the node. The VM started up fine after the reboot.
But now I am getting these errors in syslog:
Dec 9 12:08:58 VMNode01 pvedaemon: <root@pam> starting task UPID:VMNode01:00002B0D:0001C41D:56687C4A:srvrestart:pve-cluster:root@pam:
I had a situation where (it looks like, though I cannot quite pinpoint the cause) one VM (Win2011 SBS Std) would start using close to 80-90% of the CPU and lock up the Proxmox server (server load hit 30, the maximum). Everything became VERY slow and unresponsive. It was so bad that I could not open a...