Hi all.
We are in the process of upgrading and extending our 4-node cluster.
When we set up a new node, is it possible to add this node to the existing version 6.4 cluster?
We have Ceph version 15.2 running on the existing cluster. So first installing Ceph version 15 on the version 7 node should...
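For reference, a rough sketch of the commands that would be involved, assuming the new node is joined via an existing member at a placeholder address (and assuming pveceph on the new node lets you pick the Octopus release):

# on an existing node: confirm the running Ceph release
ceph --version
# on the new node: install the matching Ceph release before joining
pveceph install --version octopus
# on the new node: join the existing cluster via one of its members
pvecm add 10.0.0.11
# afterwards, check quorum from any node
pvecm status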
Hi Fabian,
Thanks for your reply,
I viewed the bug report, and it could well be right.
I will test the packages when they "arrive" and report back.
Best regards
Lukas
Hey all,
I observed a strange reboot of all my cluster nodes as soon as corosync is restarted on one specific host, or that host is rebooted.
I have 7 hosts in one cluster.
Corosync has 2 links configured. ring0 is on a separate network on a separate switch. ring1 is shared as a VLAN over 10G fiber...
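For illustration, the two links are declared in the totem section of corosync.conf roughly like this (cluster name, config_version and priorities below are placeholders, not my actual values):

totem {
  # placeholder values
  cluster_name: examplecluster
  config_version: 42
  version: 2
  ip_version: ipv4
  link_mode: passive
  interface {
    # ring0: dedicated network on its own switch, preferred link
    linknumber: 0
    knet_link_priority: 10
  }
  interface {
    # ring1: shared VLAN over the 10G fiber, fallback link
    linknumber: 1
    knet_link_priority: 5
  }
}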
Hey All,
This does seem to be a bug in Check_mk after all.
All clusters can be queried correctly.
But as soon as one host in a cluster is down (even intentionally), the special agent runs into the JSONDecodeError. As soon as all hosts are up again, the output is correct.
Regards, Lukas
OK... what I found is that I can reach and query both clusters via curl:
e.g. /nodes
On the "running cluster", which can be queried by the cmk special agent....
curl --insecure --cookie "$(<cookie)" https://10.1.0.11:8006/api2/json/nodes/...
Unfortunately, only a:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Passing --debug additionally gives this trace:
Traceback (most recent call last):
File "/omd/sites/mc/share/check_mk/agents/special/agent_proxmox_ve", line 10, in <module>
main()
File...
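For completeness: a cookie file like the one used in the curl call above can be created from a ticket request against the API, roughly like this (user and password are placeholders):

# request a ticket; the "ticket" field in the JSON reply is the cookie value
curl --insecure --data "username=root@pam&password=SECRET" \
    https://10.1.0.11:8006/api2/json/access/ticket
# then store it in the cookie file in the form expected by --cookie:
# PVEAuthCookie=PVE:root@pam:...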
Hello Stoiko Ivanov,
Thanks for the feedback. cmk -d hostname runs both the agent check on the host, which works flawlessly, and the "special agent" described above. The latter queries the Proxmox cluster over https from the cmk host via the Proxmox API.
The call expects...
I am trying to monitor Proxmox VE 6.4 clusters with an upgraded
Check_mk version 2.0.
Check_mk 2.0 ships a special agent that uses the Proxmox API.
On one cluster (both clusters are on the same patch level) I do get usable responses from the API:
On the...
You are right, it's strange.
CPU load is around or below 1 when doing the backups.
We have 2 NFS mounts on the cluster. One to a local QNAP which is running fine without any issues.
The other is connected remotely via a gateway and IPsec. This one produces the "not responding" messages, which is...
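(For reference, the NFS mount options for that remote storage live in /etc/pve/storage.cfg; the entry below is only a sketch with made-up names and options, not our actual configuration:)

nfs: qnap-remote
        server 203.0.113.10
        export /share/backup
        path /mnt/pve/qnap-remote
        content backup
        options vers=3,soft,timeo=600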
Thanks for your answer, "spirit"!
Corosync and backup so far:
This is corosync:
for "one" node:
node {
name: pve56
nodeid: 7
quorum_votes: 1
ring0_addr: 192.168.24.56
ring1_addr: 192.168.25.56
}
Where 192.168.24.0/24 is a separate network with a dedicated switch and 192.168.25.0/24 is...
Hi all,
Since version 6.0 and up to the current version 6.2 we see the following behavior when running backups over WAN to NFS.
We have an 8-host cluster (all HP DL380, G7 up to G9) running fine.
When doing backups over a WAN connection to a QNAP, we first see a lot of this:
May 21 22:39:03 pve56 kernel...
I can confirm that version pve-kernel-5.0.21-4-pve: 5.0.21-9
is working correctly on 8 x Intel(R) Xeon(R) CPU E5320.
So this is fixed now.
Thanks so much for the great work.
I think the issue where pve-kernel-5.0.21-4-pve causes Debian guests to go into a reboot loop on older Intel CPUs is what you are talking about. The same good old dinosaurs. We use these old hosts for testing and DMZ hosts, having a lot of HDDs for Ceph, a tape drive.....
So I will follow the other discussion.
We have the same issue here with this kernel panic on VMs since kernel pve-kernel-5.0.21-4-pve: 5.0.21-8.
I noticed you also have old HP G5 mainboard hardware. We have also upgraded newer hardware with no issues.
Try booting the older kernel and see if the VMs start again.
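Roughly like this (which older kernel package is still installed depends on the host; the version below is just the one from this thread):

# list the installed PVE 5.0 kernels
dpkg -l 'pve-kernel-5.0*'
# then either pick the older one (e.g. 5.0.21-3-pve) under "Advanced options"
# in the GRUB menu at boot, or set it via GRUB_DEFAULT in /etc/default/grub and run:
update-grub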
Since we updated to kernel pve-kernel-5.0.21-4-pve: 5.0.21-8 we cannot start any VMs on this host.
Migrated VMs get a kernel panic like this:
These are the package versions we are running on the host:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-11 (running...
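(For reference, this version overview is printed on the host by:)

pveversion -v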
You are right. udev is the problem here.
So net.ifnames=0 as a GRUB boot parameter leads to the old ethX interface names and everything is working fine.
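For anyone else hitting this, the parameter goes into /etc/default/grub, roughly like this (a sketch; your existing GRUB_CMDLINE_LINUX_DEFAULT may already contain other options):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=0"
# then activate it and reboot
update-grub
reboot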
But the naming scheme mentioned in the documentation ("wiki/Network_Configuration"):
.....
We currently use the following naming conventions for device...
Yes, this is on a host freshly installed from the Proxmox VE ISO. After that we found interfaces named like "rename6" on a 4-port HP network card.
Setting .link files in /etc/systemd/network and updating the initramfs solved this.
And yes, systemd-networkd is not used by Proxmox, so we also could not set...
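As an example, such a .link file looks roughly like this (file name and MAC address are placeholders, not our actual hardware):

# /etc/systemd/network/10-eth0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
Name=eth0

followed by update-initramfs -u and a reboot.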
We have the following problem.
We have a network setup with 3 vmbr bridge interfaces for different VLAN blocks. This is due to the use of IBM blades where we cannot bond the NICs.
The setup is working on the blades.
This is the interfaces file on the blades:
auto lo
iface lo inet loopback...
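For comparison, a generic sketch of such a three-bridge setup without bonding looks like this (interface names, addresses and which NIC feeds which bridge are placeholders, not the actual blade configuration):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# bridge carrying the management IP of the node
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

# second bridge, guest traffic only, no IP on the host
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0

# third bridge, guest traffic only, no IP on the host
auto vmbr2
iface vmbr2 inet manual
        bridge-ports eth2
        bridge-stp off
        bridge-fd 0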