I solved this issue by adding virtio_console to /etc/modules-load.d/modules.conf and rebuilding the initrd. After a reboot the device /dev/vport1p1 is created and the QEMU Guest Agent works as expected.
cat /etc/modules-load.d/modules.conf
# /etc/modules: kernel modules to load at boot time.
#...
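For reference, the change can be sketched like this. I show it on a scratch file so the snippet is safe to run as-is; the real path is /etc/modules-load.d/modules.conf, and editing it plus rebuilding the initrd needs root:

```shell
# Sketch on a scratch copy; the real file is /etc/modules-load.d/modules.conf.
conf=$(mktemp)
printf '%s\n' '# /etc/modules: kernel modules to load at boot time.' > "$conf"
# Append the module only if it is not listed yet (idempotent):
grep -qx 'virtio_console' "$conf" || echo 'virtio_console' >> "$conf"
cat "$conf"
# On the real system, rebuild the initrd afterwards (Debian):
#   update-initramfs -u
```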
It looks to me like the VM where the guest agent is not working has its PCI region below 4 GB (32-bit), while the machine where the guest agent is working has its PCI region at the end of memory (48-bit).
Does anyone have an idea how to move the PCI region with SeaBIOS?
Walter
I tried "echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan" on the VM where the guest agent does not work:
root@epaper-srv:~# echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan
root@epaper-srv:~# Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:05.0: PCI bridge to [bus 01]
Apr 24 20:43:38...
I dug into the PCI device. For a VM with a working guest agent, I see many more entries under the PCI device:
# ls /sys/devices/pci0000\:00/0000\:00\:08.0/virtio1/
total 0
-r--r--r-- 1 root root 4096 Apr 24 19:08 device
lrwxrwxrwx 1 root root 0 Apr 24 19:08 driver ->...
I have now updated to Proxmox 8.2 but am staying on kernel 6.5; I guess the LINSTOR / DRBD modules are not ready for kernel 6.8 yet. Unfortunately the update did not solve this issue.
I verified the boot log for this VM again. For PCI device 0000:00:08 I only found these 4 entries:
Apr 24 18:28:15 epaper-srv...
In the release notes, I found the answer in the section Known Issues &amp; Breaking Changes. I pinned the kernel to the current 6.5 kernel and the installation continued. Unfortunately I had to remove the 6.8 kernel manually:
apt remove proxmox-kernel-6.8 proxmox-kernel-6.8.4-2-pve-signed
apt dist-upgrade
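The pin itself can be sketched like this (assuming Proxmox VE 8, where proxmox-boot-tool supports "kernel pin"; the version string below is only an example, list yours first with "proxmox-boot-tool kernel list"):

```shell
# Pin the running 6.5 kernel so the host keeps booting it.
# The version string is an example; adjust it to the output of
# 'proxmox-boot-tool kernel list' on your node.
pin_target="6.5.13-5-pve"
if command -v proxmox-boot-tool >/dev/null 2>&1; then
  proxmox-boot-tool kernel pin "$pin_target"
else
  echo "proxmox-boot-tool not found; run this on the Proxmox host"
fi
```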
Now...
I get this error while upgrading:
Building module:
Cleaning build area...
make -j32 KERNELRELEASE=6.8.4-2-pve -C src/drbd KDIR=/lib/modules/6.8.4-2-pve/build.......(bad exit status: 2)
Error! Bad return status for module build on kernel: 6.8.4-2-pve (x86_64)
Consult...
The only difference between these two machines is that on the working machine I see the device /dev/vport2p1 and the symlink org.qemu.guest_agent.0 -> ../vport2p1. On the machine where the guest agent does not work, the device is missing.
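A quick in-guest check for exactly this, as a sketch (the /dev/virtio-ports path is where the agent expects its channel; the script only reports whether it exists):

```shell
# Report whether the guest-agent virtio-serial port exists in this guest.
port=/dev/virtio-ports/org.qemu.guest_agent.0
if [ -e "$port" ]; then
  echo "present: $port -> $(readlink -f "$port")"
else
  echo "missing: $port (qemu-guest-agent has no channel to the host)"
fi
```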
Walter
Hi Fiona,
dmesg from this machine and dmesg from a working machine are very similar. There are no errors about kvm or the qemu-agent in dmesg.
Output as requested:
# journalctl -b -u qemu-guest-agent.service
Apr 24 11:37:51 epaper-srv systemd[1]: qemu-guest-agent.service: Bound to unit...
Hi Fiona,
I verified that all VMs have the same kernel (current Debian 12) and the same kernel command line (with the exception of the disk UUID).
Output as requested:
# sed 's/\x0/ /g' /proc/$(cat /var/run/qemu-server/407.pid)/cmdline
/usr/bin/kvm -id 407 -name epaper-srv,debug-threads=on -no-shutdown...
I have the same problem.
I installed the qemu-guest-agent on about 20 Debian 12 VMs and also enabled the QEMU Guest Agent in the Options menu of every VM. Finally I shut down and started all VMs. On most VMs the qemu-guest-agent now works as expected.
Unfortunately, on only 4 VMs the qemu-guest-agent...
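From the host side, each VM can be probed like this (a sketch: "qm agent <vmid> ping" returns 0 when the agent answers; the VMID 407 is just the example from this thread):

```shell
# Probe the guest agent of one VM from the Proxmox host (run as root).
vmid=407   # example VMID; substitute your own
if command -v qm >/dev/null 2>&1; then
  if qm agent "$vmid" ping; then
    echo "VM $vmid: guest agent responds"
  else
    echo "VM $vmid: no response from guest agent"
  fi
else
  echo "qm not found; run this on the Proxmox host"
fi
```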
I want to update my Proxmox 7.4 to 8. We use LINBIT DRBD 9 with Proxmox 7.4. Question: will the DRBD 9 kernel driver work with the new Proxmox kernel 6.2? Thank you.
To answer my own question.
drbd-dkms needs internet access on port 2020 to https://drbd.io:2020.
I added a port 2020 exception to the firewall and reinstalled drbd-dkms. Now the DRBD kernel module is active and DRBD works again in the cluster.
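A connectivity check along these lines (assuming curl is installed; it only reports reachability) would have shown the problem up front:

```shell
# Check whether the LINBIT package host is reachable on TCP 2020.
url="https://drbd.io:2020"
if curl -sS --connect-timeout 5 -o /dev/null "$url"; then
  echo "reachable: $url"
else
  echo "blocked or unreachable: $url (check outbound TCP 2020 in the firewall)"
fi
```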
I have a 3-node cluster and updated one node from 6.4 to 7.0. For the LINBIT controller and DRBD I use this repository: "deb http://packages.linbit.com/proxmox/ proxmox-7 drbd-9.0". I followed all the steps in https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 and had no visible errors during...
I simply want to manage all VMs with pct ....; unfortunately, QEMU VMs have to be managed with qm ... Also, QEMU VMs are slower than KVM VMs. That is why I want to convert my QEMU VMs to KVM VMs.
Hi,
I have a QEMU VM; unfortunately, all my other VMs are KVM VMs. QEMU uses different commands to manage VMs. Is it possible to convert a QEMU VM to a KVM VM?
Walter
Hi,
unfortunately Avira discontinued AV support on Linux, so I had to remove Avira from the server and replace it with G-Data Linux AV V 13.2.
After installing the G-Data AV, "apt-get update" reports the nasty 'pve-enterprise/binary-i386/Packages' error.
apt-get update
.....
W: Failed to fetch...