For monitoring we use a service that charges per monitoring slot (yes, there are NFR licenses for it, but we haven't sold enough slots to qualify),
and its configuration options are also limited.
That is why we do it under...
I ran vimdiff between the different reports; there are no differences in the dependency versions.
I also changed the migration network, thank you for the advice.
Regards,
IDEZ Ugo
Set up udev rules on the host to map the devices to persistently named device nodes.
Then pass these consistently named device nodes to the containers.
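For example, a minimal sketch of that approach (the vendor/product IDs, the symlink name scanner0, and the container ID 101 are placeholders; check your device's attributes with `udevadm info /dev/ttyUSB0` first):

```shell
# /etc/udev/rules.d/99-passthrough.rules on the host -- match by USB IDs
# so the node gets a stable name regardless of enumeration order:
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="scanner0"

# /etc/pve/lxc/101.conf -- bind the stable node into the container
# (188 is the usb-serial char-device major; adjust for your device type):
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.mount.entry: /dev/scanner0 dev/scanner0 none bind,optional,create=file
```

After editing the rule, reload udev with `udevadm control --reload` and replug the device so the symlink appears.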
Hi,
should you have enabled HA for the guest, note that shared storage is a prerequisite for enabling HA, and using HA can lead to such issues when there are local disks...
Hello IDES,
Alright. Also, if you haven't already, double-check that the MTU settings are consistent across the nodes in the cluster.
I usually find it easier to verify the configuration by generating a system report from each node using the...
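As a quick cross-node sanity check, the `ip -json link` output collected from each node can be compared mechanically. This is a generic helper sketch (the node names and captured JSON below are made-up examples, not output from this thread):

```python
import json

def mtus(ip_json_link_output: str) -> dict[str, int]:
    """Map interface name -> MTU from `ip -json link` output."""
    return {i["ifname"]: i["mtu"] for i in json.loads(ip_json_link_output)}

def inconsistent(per_node: dict[str, dict[str, int]]) -> dict[str, set[int]]:
    """Return interfaces whose MTU differs between nodes."""
    seen: dict[str, set[int]] = {}
    for node_mtus in per_node.values():
        for ifname, mtu in node_mtus.items():
            seen.setdefault(ifname, set()).add(mtu)
    return {name: values for name, values in seen.items() if len(values) > 1}

# Example with captured output from two hypothetical nodes:
node1 = '[{"ifname": "vmbr0", "mtu": 1500}, {"ifname": "nic2", "mtu": 1500}]'
node2 = '[{"ifname": "vmbr0", "mtu": 9000}, {"ifname": "nic2", "mtu": 1500}]'
print(inconsistent({"pve1": mtus(node1), "pve2": mtus(node2)}))  # flags vmbr0
```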
Hi everyone,
I still have a very old PVE 7.4-19 setup here with Thin-LVM on LSI hardware RAID. After a power outage, pve/data can no longer be activated. The volume group is 100% allocated, but the "data" inside it (VM disks & snapshots)...
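For reference, a common recovery sequence for an LVM-thin pool that refuses to activate looks like the following. These are generic steps under the assumption that the pool is pve/data, not advice specific to this setup, and `lvconvert --repair` rewrites the pool metadata, so take a block-level image of the disk before running it:

```shell
# Inspect first (read-only):
lvs -a pve          # check Data%/Meta% of pve/data and its hidden volumes
vgchange -ay pve    # retry activation and note the exact thin_check error

# If activation fails on damaged thin-pool metadata, attempt a repair.
# Image the disk first -- this swaps in freshly built metadata:
lvconvert --repair pve/data
vgchange -ay pve    # then retry activation
```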
Did you already run a health check for the device with e.g. smartctl?
What was the output of that command? I'd be a bit surprised if it did anything to the LVM, since the targeted /dev/sda contains partitions; your PV is on /dev/sda3 and does...
Hello,
I just realized wipefs is not the best solution: every time I delete a VM, it drives the CPU load on the storage too high, which is not ideal since it can disrupt the storage in production. Are there any other ways to tackle this ghost-VM issue?
I wouldn't go that far. :) Depending on the storage, that name is also used elsewhere, e.g. as the ZFS pool name. And if VMs/CTs already live on it, they also have the old storage name recorded for their disks. Changing the storage.cfg to...
Hello YaZoal,
Thank you for your help.
The MTU is the same on every interface of every node:
ip link | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: nic1...
Hello Abamalu,
Thank you for your help.
We have no drops or errors on the nic2 interface:
ip -s link show nic2
3: nic2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000...
Hi,
are you using the same network for PBS and NFS traffic? How does the load look when the issue occurs? It might be a transient issue where the check whether the NFS mount point is reachable times out.
There is a similar and newer topic here.
This problem occurs when creating a backup of a VM with a large disk. It is related to ballooning. I'm still investigating how to resolve it, but it seems to me that it is necessary to disable ballooning on the VM.
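If you want to test that theory, ballooning can be turned off per VM from the CLI (VMID 100 is an example; a VM restart may be needed for the device change to take effect):

```shell
qm set 100 --balloon 0        # disable the balloon device for VM 100
qm config 100 | grep balloon  # verify the setting was applied
```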
Hi everyone,
at present we have a production environment with a 3-node Proxmox VE cluster (9.1.6), a Lenovo DM5000H as NFS (4.2) VM/LXC storage, and a dedicated physical Proxmox PBS (4.1.x).
PVE nodes and PBS use the enterprise repos.
All the appliance...
Hello,
Actually, I need the FQDN for host discovery in Centreon, so I'd like to be able to retrieve it without having to set up services that point to custom scripts. I'd like the FQDN reported directly instead of the host name.
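As a quick way to see what a node would report, you can check the FQDN the system itself resolves. This is a generic sketch, not a Centreon or Proxmox API call; the result depends on /etc/hosts and the DNS search domain configured on the node:

```python
import socket

# FQDN as resolved from the local hostname; equivalent to `hostname --fqdn`.
fqdn = socket.getfqdn()
print(fqdn)
```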
I just updated the PMG and got this error message in red in several places:
The PMG is installed as a virtual machine on a PVE host. Everything is currently up to date; the version is 9.0.6.
Is something broken, or does something...