Please also check /etc/corosync/corosync.conf. Are the addresses correct, i.e., the ones you wanted? Compare them between the nodes.
If possible, please share the following from 2 nodes of this cluster:
~# pvecm status
~# cat /etc/pve/corosync.conf
Also collect and share...
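Just as a rough sketch of how the corosync configuration could be compared between nodes (the node name "nodeB" is only a placeholder):
~# diff /etc/corosync/corosync.conf /etc/pve/corosync.conf
~# ssh nodeB cat /etc/corosync/corosync.conf | diff /etc/corosync/corosync.conf -
The first diff checks that the locally active corosync config matches the cluster-wide one; the second compares it against another node.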
Yesterday I was greeted with numerous unreachable services stemming from a Ceph health error on our VM cluster: "1 full osd(s)" and "1 backfillfull osd(s)", resulting in "4 pool(s) full". I solved it. This was the ceph status panel:
As a...
Hello,
I noticed that after adding disk space to a VM, I can no longer delete a previously created snapshot if it already existed before the disk space was added.
Does anyone here have experience with how to...
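Just to illustrate (VMID 100 and the snapshot name "before-resize" are placeholders), the removal can also be attempted from the CLI, which usually prints the exact error:
~# qm listsnapshot 100
~# qm delsnapshot 100 before-resize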
For monitoring we use a service that costs money per monitoring slot (yes, there are NFR licenses, but we haven't sold enough slots for that),
and its configuration options are also limited.
That's why we do that under...
I did a vimdiff between the different reports; there are no differences in the versions of the dependencies.
I changed the migration network too, thank you for the advice.
Regards,
IDEZ Ugo
Set up udev rules on the host to map the devices to persistently named device nodes.
Then pass these persistently named device nodes to the containers.
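A minimal sketch of such a rule, assuming a USB serial adapter; the vendor/product IDs and the symlink name are only placeholders:
# /etc/udev/rules.d/99-container-serial.rules (hypothetical example)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyMYDEVICE"
After reloading the rules (~# udevadm control --reload && udevadm trigger), /dev/ttyMYDEVICE can be passed to the container instead of a changing ttyUSBx name.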
Hi,
if you have enabled HA for the guest, note that shared storage is a prerequisite for enabling HA, and using HA can lead to such issues when there are local disks...
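If in doubt, whether the guest is HA-managed can be checked on the CLI, for example with:
~# ha-manager status
~# ha-manager config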
Hello IDES,
Alright. Also, if you haven't already, double-check that the MTU settings are consistent between the nodes in the cluster.
I usually find it easier to verify the configuration by generating a system report from each node using the...
Hi everyone,
I still have a very old PVE 7.4-19 setup here with Thin-LVM on LSI hardware RAID. After a power outage, pve/data can no longer be activated. The volume group is 100% allocated, but the "data" inside it (VM disks & snapshots)...
Did you already run a health check for the device with e.g. smartctl?
What was the output of that command? I'd be a bit surprised if it did anything to the LVM, since the targeted /dev/sda contains partitions, your PV is on /dev/sda3 and does...
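For reference, a health check and a quick look at the LVM layout could be done roughly like this (device and pool names taken from this thread):
~# smartctl -a /dev/sda
~# pvs
~# lvs -a
~# lvchange -ay pve/data
The last command is just to see the exact activation error for the thin pool.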
Hello,
I just realized wipefs is not the best solution: anytime I delete a VM, it raises the CPU load of the storage too much, which is not ideal, as it can disrupt the storage in production. Are there any other ways to tackle this ghost VM issue?
I wouldn't go that far. :) Depending on the storage, this name is also used elsewhere, e.g. as the ZFS pool name. And if VMs/CTs already live on it, then they also have the old storage name recorded for their disks. The storage.cfg...
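As an illustration only (storage name, VMID and disk are made up), a VM config references the storage name directly in its disk entries, e.g. in /etc/pve/qemu-server/100.conf:
scsi0: old-storage-name:vm-100-disk-0,size=32G
Renaming the storage therefore also means touching every such reference.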
Hello YaZoal,
Thank you for your help.
The MTU is the same on each interface of each node:
ip link | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: nic1...
Hello Abamalu,
Thank you for your help.
We have no drops or errors on the NIC2 interface:
ip -s link show nic2
3: nic2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000...
Hi,
are you using the same network for PBS and NFS traffic? What does the load look like when the issue occurs? It might be a transient issue where the check of whether the NFS mount point is reachable times out.
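To get an idea of the load while the backup runs, generic tools could be used as a starting point, for example:
~# iostat -x 5
~# nfsstat -c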
There is a similar and newer topic here.
This problem occurs when creating a backup of a VM with a large disk. It seems related to ballooning. I'm still investigating how to resolve it, but it seems to me that it's necessary to disable ballooning on the VM.
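If you want to test that, ballooning can be disabled for a VM from the CLI (VMID 100 is a placeholder):
~# qm set 100 --balloon 0
Setting the balloon value to 0 disables the ballooning device for that VM.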