Can anyone confirm that this is an acceptable workaround (giving one node 2 votes)?
#on node 1
pvecm create -votes 2 cluster #to create the cluster with this server registering as 2 votes
pvecm status # this should show the local node having 2 votes
#on node 2
pvecm add <ip of first node>
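For anyone verifying the result: the extra vote should also show up in the corosync configuration. A rough sketch of what the nodelist section might look like afterwards (node names and addresses are placeholders, not from my setup):

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 2    # the first node counts twice toward quorum
    ring0_addr: 10.0.0.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}
```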
At some point the benefit of having a prefab system is gone. Proxmox was nice in the past, but in the last year I had to rebuild all my OpenVZ machines to work in KVM/LXC, which didn't work as described because I apparently used the wrong (default) compression for the backups before installing the new...
The system has been running fine since Friday night without making backups. I'm still waiting for a solution to get the backups to run without trashing the system, which worked fine in the Debian netinstall-based setup I had before.
@LnxBil It crashed again during the backups (the qm locks and tmp files are there to prove it) and of course there is no dmesg file in /var/crash/
I uploaded the syslog; on March 4 around 1:30 you can see a storm of errors, delays and warnings before the reboot.
I know, but I still need a debug kernel to give kdump a chance of doing anything, right?
root@solo-prox-01:~# service kdump-tools status
● kdump-tools.service - Kernel crash dump capture service
Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled)
Active: active (exited) since...
@LnxBil
root@solo-prox-01:~# ls /var/crash/
kexec_cmd
but since the one Windows VM I have is down, we have a 'historic' uptime
root@solo-prox-01:~# uptime
15:24:24 up 1 day, 7:36, 3 users, load average: 1.83, 2.24, 2.19
As I mentioned, there is no vmlinuz in /boot.
FYI, the system doesn't go down when it's being tortured with prime95, and I already replaced the memory, so it wasn't the memory (or I have very little luck). When I do huge amounts of encrypting data, untarring large archives, moving data, ... or just run my backups (Proxmox backup), that's when...
CX430M power supply, 1 year old
Supermicro C7P67 with i5-2400 (BIOS r2.0)
4 * Kingston 8GB ValueRAM (brand new)
5 * WD20EADS/EZRX + 1 * WD40EFRX
2 * MX200 SSD 250GB
1 extra cheap SATA controller (ASM1062)
1 Intel quad 82571EB Gbit PCIe card
All connected to an APC Smart-UPS 1500. I run a bond of 2...
I'm really getting desperate... For the last year, v3.4 and later 4.x on top of Debian worked fine on that machine, and now it suddenly crashes without warning...
root@solo-prox-01:~# service kdump-tools status
● kdump-tools.service - Kernel crash dump capture service
Loaded: loaded (/lib/systemd/system/kdump-tools.service; enabled)
Active: active (exited) since Wed 2016-03-02 07:48:16 CET; 1h 59min ago
Process: 3140...
Had a random reboot last night with the crash kernel configured.
crash: cannot find booted kernel -- please enter namelist argument
Any suggestions?
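Not an answer to the reboots, but for anyone hitting the same message: crash(8) prints "cannot find booted kernel" when it cannot locate the namelist (a vmlinux with debug symbols) for the running kernel, so you have to pass it explicitly. A sketch, assuming a debug image is installed; the paths are hypothetical and depend on the distro's -dbg package and on where kdump-tools writes the dump:

```shell
# Paths are hypothetical -- adjust to where the debug vmlinux and the
# kdump-generated dump actually live on your system.
crash /usr/lib/debug/boot/vmlinux-$(uname -r) /var/crash/<timestamp>/dump.<timestamp>
```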
Still a reboot, and this is all I can retrieve:
Mar 1 00:00:02 solo-prox-01 vzdump[23140]: <root@pam> starting task UPID:solo-prox-01:00005A65:0023D34D:56D4CD72:vzdump::root@pam:
Mar 1 00:00:03 solo-prox-01 qm[23144]: <root@pam> update VM 103: -lock backup
Mar 1 00:05:41 solo-prox-01...
To mark this as resolved, I did the following:
- removed the swap partition on the ZFS with "zfs destroy rpool/swap"
- commented out the ZFS swap entry in /etc/fstab
- created 2 new partitions on my cache SSDs and made a striped swap
RAM usage seems to be stable for now. I will update if I see the thing crash again.
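For completeness, the striped swap is just two swap areas with equal priority; a minimal sketch with hypothetical device names (double-check them against your own partitions before running anything):

```shell
# Hypothetical partitions on the two cache SSDs -- verify the device names
# first, mkswap destroys whatever is on them.
mkswap /dev/sdg2
mkswap /dev/sdh2
# Equal priorities make the kernel interleave pages across both areas,
# which gives the "striped" behaviour.
swapon --priority 10 /dev/sdg2
swapon --priority 10 /dev/sdh2
# And in /etc/fstab, so it survives a reboot:
# /dev/sdg2  none  swap  sw,pri=10  0  0
# /dev/sdh2  none  swap  sw,pri=10  0  0
```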
Yeah, I had set up a set of swap partitions on my cache SSDs, but the machine didn't come back after the reboot. I will see later today, when I'm home, what is stalling the reboot/startup. But it's a pity that a 'production' system sets up swap in a way that guarantees instability...
This is the first time I used the installer instead of setting up a minimal Debian. I did this to get a native ZFS installation in place, but I'm already regretting that I started it.
My system was using vast amounts of RAM. To counter this I upgraded from 16 to 32GB of RAM, but the...
In short: the upgrade to kernel 4 can cause Realtek Ethernet devices to be renamed, leaving one or more of your Proxmox connections down. I solved this by removing the MAC addresses in the persistent-net rules and replacing them with KERNELS=="" where the value is the PCI address...
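As an illustration of that rule change, a line matching the NIC by PCI address instead of by MAC might look like this (the PCI address and interface name here are made up, not from my machine):

```
# /etc/udev/rules.d/70-persistent-net.rules
# Match by PCI slot (KERNELS) instead of the MAC address:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", KERNELS=="0000:03:00.0", NAME="eth0"
```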