There are cases when you make changes to your configuration, only to want to partially revert them later.
NOTE: See below for how you should have already been backing it up [1].
Alternatively, you may get hold of a stale (from a non-quorate node) or partially corrupt config.db - see also how to recover it [1] -...
Backup
A no-nonsense way to safely back up your /etc/pve files (pmxcfs [1]) is actually very simple:
sqlite3 /var/lib/pve-cluster/config.db .dump > ~/config.dump.$(date --utc +%Z%Y%m%d%H%M%S).sql
This is safe to execute on a running node and is only necessary on any single node of the...
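Should you ever need to restore from such a dump, a minimal sketch (assuming the restore happens on the same node; the dump filename below is a placeholder):

# stop the cluster filesystem before touching the database
systemctl stop pve-cluster
mv /var/lib/pve-cluster/config.db /var/lib/pve-cluster/config.db.bad
sqlite3 /var/lib/pve-cluster/config.db < ~/config.dump.UTC20240101000000.sql
systemctl start pve-cluster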
Hi!
First, I apologize, because this topic has already been discussed a lot (and I have tried things and read through the posts). I think everything behaves as expected in my test lab, but I would still like to ask you to confirm my understanding. I have a multi-node (23 nodes) PVE 6.4 cluster without HA turned on and it...
I've been editing /etc/pve/lxc/vmid.conf files manually so far because, as far as I know, you cannot set lxc.idmap entries in an automated way (is that right?).
That seems to work as I expect, requires restarting the container to apply, etc.
Now I'm automating more of my config, and I was thinking of...
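For reference, a hypothetical /etc/pve/lxc/101.conf fragment (the VMID and ranges are made up) that shifts the container's uid/gid space to 100000+ while passing uid/gid 1000 straight through to the host:

lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

Note that the host's /etc/subuid and /etc/subgid must also permit root to map uid/gid 1000 (e.g. a root:1000:1 entry), or the container will not start.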
Hello, we had a very serious problem with our Proxmox cluster on OVH: our vRack stopped working.
We tried to solve the situation by hot-adding a new corosync configuration with a redundant link in it.
The configuration worked and corosync saw all members; we then restarted the Proxmox cluster...
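For context, a redundant link in corosync 3 is just a second address per node in the nodelist; an illustrative fragment (names and addresses are made up):

nodelist {
  node {
    name: node1
    nodeid: 1
    ring0_addr: 10.0.0.1
    ring1_addr: 192.168.100.1
  }
}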
Hi all. I've been using Proxmox successfully for about a year and a half now - I have two boxes, and I use VMs, containers, and general services like samba on both. They are not in an HA cluster; they operate stand-alone. I also do not use subscription repositories. However, I have a...
We operate a Proxmox VE cluster with 6 nodes. Due to hardware problems, one node had to be replaced. I removed it from the cluster beforehand and now wanted to add the new node to the cluster. To that end, I installed Proxmox VE 6 on the prospective new node. Then...
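As a rough sketch of that workflow (the node name and IP are placeholders):

# on any remaining cluster node, remove the dead node first
pvecm delnode oldnode
# then, on the freshly installed node, join the existing cluster
pvecm add 10.0.0.1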
I have a 5-node cluster. One of the nodes had some issues, so its hardware needed to be changed. Now when I boot the node up, it doesn't seem to be participating in pmxcfs. When I look at the "Datacenter" view, that node is seen as down. And when I log into the :8006 port of that node...
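In situations like this, a few non-destructive checks usually narrow it down (a sketch; nothing node-specific assumed):

pvecm status                               # quorum and membership as corosync sees it
systemctl status corosync pve-cluster      # are both services actually running?
journalctl -b -u corosync -u pve-cluster   # startup errors from the current boot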
I have (had) a 3-node Proxmox VE 6.2-11 and Ceph cluster. I'm modifying my config after install and some light use. Ceph is now on its own 10Gx2 LAN. I decided to dedicate a 1Gb interface and create a VLAN for corosync, and attempted to modify corosync.conf before understanding exactly what...
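For anyone in the same spot, the commonly recommended procedure for editing corosync.conf on a quorate cluster is roughly (a sketch following the Proxmox docs):

cp /etc/pve/corosync.conf /root/corosync.conf.new
# edit /root/corosync.conf.new: adjust the ringX_addr entries
# and increment the config_version field
mv /root/corosync.conf.new /etc/pve/corosync.conf   # pmxcfs propagates it cluster-wide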
Hello,
This morning we had a major incident: all our Proxmox nodes were fenced at the same time.
In the logs, this seems to be the problem:
messages.log
Sep 1 10:50:14 hostname kernel: [932130.006753] show_signal_msg: 6 callbacks suppressed
Sep 1 10:50:14 hostname kernel: [932130.006757]...
Hello everyone,
Once again I have a very strange problem, and after trying everything I can think of, I simply cannot find the cause...
Up front:
Yes, all entries in "/etc/hosts" and "/etc/network/interfaces" are on the new IP range.
Scenario:
We make a backup of the...
Hi,
I'm running a freshly installed & up-to-date 3-node Proxmox VE 5.3 cluster. Everything was fine until I tried to deploy custom SSL certificates for the web UI.
The process detailed in the documentation involves adding new files (pveproxy-ssl.pem & pveproxy-ssl.key) to the...
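For completeness, the documented placement is inside the node's own directory under /etc/pve, followed by a pveproxy restart; a sketch (the certificate source paths are placeholders):

cp my-cert.pem /etc/pve/nodes/$(hostname)/pveproxy-ssl.pem
cp my-key.pem /etc/pve/nodes/$(hostname)/pveproxy-ssl.key
systemctl restart pveproxy   # pveproxy only picks the files up on restart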
Hi Proxmoxers,
What could be causing slow access (read and write) to pmxcfs, which is mounted at /etc/pve, in a PVE 5.2 cluster?
As a test, it takes more than 10 seconds to create an empty file inside /etc/pve. There are no performance issues on the local storage, which was confirmed by mounting the pmxcfs...
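A trivial way to reproduce that measurement (a sketch; the test filename is arbitrary):

time touch /etc/pve/testfile   # on a healthy cluster this completes in milliseconds
rm /etc/pve/testfile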
I have 5 nodes in a cluster. On the 4th node I have this error:
май 28 13:42:56 CCVM4 corosync[32562]: error [TOTEM ] FAILED TO RECEIVE
май 28 13:42:56 CCVM4 corosync[32562]: [TOTEM ] FAILED TO RECEIVE
май 28 13:42:59 CCVM4 corosync[32562]: notice [TOTEM ] A new membership (192.168.211.23:287292) was...
One of my cluster's nodes has been reported by Zabbix as having a lack of free swap space:
Free swap space in % (pve1:system.swap.size[,pfree]): 9.98 %
What could cause this, and how can I resolve it?
It seems that the other nodes don't have this problem.
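Some first checks and a common mitigation (a sketch; smem is an extra package, and the swappiness value is a judgment call, not a prescription):

swapon --show              # how much swap exists and how full it is
smem -rs swap | head       # top swap consumers (requires the smem package)
sysctl vm.swappiness=10    # make the kernel less eager to swap; persist via /etc/sysctl.conf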