There is already a patch available for the issue w.r.t. bonds - the initial fix for the NIC prefix broke bonds:
https://lore.proxmox.com/pve-devel/20251216094329.36089-1-s.hanreich@proxmox.com/T/#u
I've run into this today after I performed a full-upgrade and rebooted one node in a cluster. Now I am unable to migrate VMs to that node.
All physical interfaces start with "eth". I do have a bond0 interface that is used for the bridged port...
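For context, the kind of setup described here usually looks roughly like the sketch below in /etc/network/interfaces - interface names, bond mode and addresses are placeholders, not taken from this post:
# illustrative sketch only (ifupdown2 syntax)
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0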
I have just installed Proxmox 9.1 on my new Dell Wyse 5070.
My router is a FRITZ!Box 7530 and the Wyse is connected via LAN. So I have given the Proxmox server the IP 192.168.178.74, and if I look at the web front end of the FRITZ!Box I can see...
Can anyone explain to me why the IO pressure stall always shows an average of 0.1% with ZFS on 4x NVMe, while LVM on hardware RAID with SSDs usually shows zero all the time, except for rare bursts when VMs are being cloned?
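As far as I know, the IO pressure graph is based on the kernel's PSI interface, so you can compare the raw numbers on both hosts directly (the output lines below only illustrate the format):
cat /proc/pressure/io
# some avg10=0.00 avg60=0.03 avg300=0.10 total=123456789
# full avg10=0.00 avg60=0.01 avg300=0.08 total=98765432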
This is a patch that you can apply to current Proxmox 9.x and PBS 4.x systems.
Edit:
Unfortunately it needs a reboot. Only tested on PVE (thanks @TaktischerSpeck); PDM / PBS might still need a reboot.
Updated for PDM, updated for reboot...
@warlocksyno @curruscanis Have either of you noticed any irregularity viewing the disks in the UI (pve > disks in the menu)? I get communication failures, but not all hosts are affected, e.g. Connection refused (595), Connection timed out (596).
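If it helps narrowing this down, a quick check I would run on an affected host (these are just the standard PVE services, nothing specific to this report):
systemctl status pveproxy pvedaemon pvestatd --no-pager
journalctl -u pvedaemon -u pveproxy --since "1 hour ago" --no-pager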
Hi,
Could you please check the `/etc/hosts` file? I would also check our wiki guide on renaming a PVE node [0].
[0] https://pve.proxmox.com/wiki/Renaming_a_PVE_node
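For reference, a minimal sketch of what the relevant entry should look like - hostname and address are placeholders, replace them with your node's actual values:
# /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.10 pve1.example.local pve1
# the node's hostname must resolve to its real IP, not to a 127.x address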
You're absolutely right. I had already looked at this particular snippet, but I must have suffered from severe troubleshooting/afternoon fatigue. I have now symlinked /etc/pve/priv/ceph to one of the paths KVM is looking for, and the VM came...
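For anyone landing here with the same error, the workaround boils down to a symlink along these lines - both paths are placeholders, use the exact path named in the kvm/qemu error message:
# adjust both sides to your storage ID and to the path the error message asks for
ln -s /etc/pve/priv/ceph/<existing>.keyring <path-kvm-is-looking-for>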
My question is: what should I best do with my new UGreen NAS, which is meant to replace an older Synology? It should fulfil two tasks in a non-production home-lab environment: hold about 6 TB of backup files and serve as a datastore for...
Hello,
I used to be able to upload ISOs from the web UI. This is a basic feature, of course it worked, never had a problem with PVE 8. Somewhere since PVE 9, maybe 9.1, I can no longer upload ISOs. The system IO stall goes up to 90 and the...
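As a workaround until the upload problem is sorted out, ISOs can also be placed directly into the storage's ISO directory from the CLI (the path below is the default 'local' directory storage; adjust it for other storages):
# copy from another machine
scp debian-13.iso root@<node>:/var/lib/vz/template/iso/
# or download on the node itself
wget -P /var/lib/vz/template/iso/ https://example.com/some.iso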
Hello,
Facing the same problem since I upgraded all my PBS to PBS 4. Running 5 PBS servers / 8 PVE clusters, we had freezes on different VMs on different PVE versions.
I've rolled back all my PBS hosts to kernel 6.14.11-4-pve. I'll test the latest "test-pve"...
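In case anyone wants to do the same rollback: an older, still installed kernel can be pinned with proxmox-boot-tool (the version string is the one mentioned above; check what is actually installed first):
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.14.11-4-pve
reboot
# later, to return to the default kernel selection:
proxmox-boot-tool kernel unpin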
Maybe try updating your nodes; the release notes of today's qemu-server 9.1.2 update include:
EDIT: This is available on the no-subscription repository: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_package_repositories
EDIT2...
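To actually pick up that qemu-server version once the repository is configured as described in the linked guide, something like this should do (standard apt usage, nothing PVE-specific beyond the package name):
apt update
apt policy qemu-server   # check that 9.1.2 (or newer) is offered
apt full-upgrade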
I have had the same problem since I updated today. You need to set manual maintenance mode before rebooting; for some reason, migrating with this works fine. When it's stuck in the faulty state, it helps to shut down the VM (not in the Proxmox...
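For reference, the manual maintenance mode mentioned here can be set from the CLI with ha-manager before the reboot (replace the node name with your own):
ha-manager crm-command node-maintenance enable <nodename>
# ... reboot, then afterwards:
ha-manager crm-command node-maintenance disable <nodename>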
Thanks Chris for the info,
however, we're in the "feature freeze" phase leading up to the Christmas holidays, and I don't feel like testing a kernel that might work well for a few days and then, on December 25th, crash my system just as I'm...
It looks like I managed to implement this successfully with the following hookscript:
#!/bin/bash
if [ "$2" == "pre-start" ]
then
    echo "Move the disk from NAS to SSD"
    nohup /usr/sbin/qm disk move "$1" sata0 local-zfs --delete true...
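For completeness, a sketch of what the full hookscript could look like - the original post is cut off, so the synchronous call, the local-zfs / nas-storage names and the idea of moving the disk back in the post-stop phase are my assumptions, not the poster's exact script:
#!/bin/bash
# $1 = VM ID, $2 = hook phase (pre-start, post-start, pre-stop, post-stop)
vmid="$1"
phase="$2"

if [ "$phase" == "pre-start" ]; then
    echo "Move the disk from NAS to SSD"
    /usr/sbin/qm disk move "$vmid" sata0 local-zfs --delete true
elif [ "$phase" == "post-stop" ]; then
    echo "Move the disk back to the NAS"
    /usr/sbin/qm disk move "$vmid" sata0 nas-storage --delete true
fi

exit 0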
Ok, I have some news:
My pfSense/OpenBSD VM and a Windows XP VM seem to have triggered ballooning - their memory usage went down significantly - despite showing
"actual=1024 max_mem=1024" when using "info balloon" in the monitor.
The mentioned Windows...
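For anyone who wants to reproduce that check, the balloon state can be queried per VM from the host (VM ID 100 is just an example):
qm monitor 100
# then, at the qm> prompt:
info balloon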
OMG!! 40 LXCs, each with 4 vCPUs and 4 GB RAM? Running on a 16/32-core CPU combined with consumer NAS drives? Massive overcommit, no wonder you're running into I/O stalls. I don't believe the system ever ran normally. Otherwise I'd throw my...
I do have 2 sockets in the VM configuration:
For some reason the VM thinks it has 1 socket with 360 CPUs, but the socket and core count in the VM configuration is correct.
With no ballooning and no KSM it also crashed...
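The topology the guest sees versus what is configured can be compared quickly on the host and inside a Linux guest (VM ID 100 and the guest-side tool are just examples):
# on the host: what PVE is configured to present
qm config 100 | grep -E '^(sockets|cores|cpu|numa):'
# inside a Linux guest: what the OS actually detected
lscpu | grep -iE 'socket|core|^cpu\(s\)'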
I will try (exceptionally) disabling swap on the host and see how it performs.
But as was stated earlier in this thread and elsewhere, that is not a desirable running configuration.
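If you do try it, swap can be toggled on the host without a reboot, which makes it easy to revert the experiment:
# temporarily disable all swap on the host
swapoff -a
# observe the VMs for a while, then re-enable it
swapon -a
# making it permanent would also mean commenting out the swap entry in /etc/fstab,
# which, as said above, is not something to keep as a long-term configuration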