Dependency failed for QEMU Guest Agent

Hi,
In my experience you have to enable the agent while the VM is off; if you enable it while the VM is running, the "Enabled" label stays red even if you stop and start the VM. I had the same issue, and stopping the VM, disabling the option, saving, and enabling it again fixed it.
As far as I saw, modifying ANY parameter while the VM is active does not take effect, even if you stop and start it. (Using the "free" version.)
If you shut down and start the VM (or reboot, but only when done from the web UI/CLI, not from within the guest), pending changes will be applied during the start. Some parameters can also be hot-plugged/changed live.
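For reference, the stop-then-enable sequence can also be done from the host with the standard qm CLI. This is only a sketch: VM ID 407 and the agent options are examples, and RUN=echo keeps it a dry run that just prints the commands; drop the RUN prefix on a real Proxmox VE host.

```shell
# Dry-run sketch of: stop the VM, enable the agent, start it again.
# RUN=echo prints the commands instead of executing them; remove it
# (or set RUN=) on an actual PVE host. VM ID 407 is an example.
RUN=echo
$RUN qm stop 407
$RUN qm set 407 --agent 1,fstrim_cloned_disks=1
$RUN qm start 407
```

Changes made while the VM is stopped are written straight to the config, so no pending state is involved.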
 
I have the same problem.

I installed qemu-guest-agent on about 20 Debian 12 VMs and enabled the QEMU Guest Agent in the Options menu of every VM. Finally, I shut down and started all VMs. Now the qemu-guest-agent works as expected on most VMs.
Unfortunately, on 4 VMs the qemu-guest-agent cannot start.
These 4 VMs are also Debian 12 with current updates. When trying to start, qemu-guest-agent complains about the missing device /dev/virtio-ports/org.qemu.guest_agent.0; the device /dev/vport1p1 is not created either.
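A quick way to probe for exactly that path is a one-line existence check. The snippet below is only a sketch: it defaults to a scratch directory so it can be tried anywhere; on an affected guest set DEV_DIR=/dev instead.

```shell
# Sketch: check for the virtio serial port the guest agent depends on.
# DEV_DIR defaults to an empty scratch directory so the check runs
# anywhere; on a real guest use DEV_DIR=/dev.
DEV_DIR="${DEV_DIR:-$(mktemp -d)}"
PORT="$DEV_DIR/virtio-ports/org.qemu.guest_agent.0"
if [ -e "$PORT" ]; then
    echo "agent port present"
else
    echo "agent port missing"
fi
```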

An lspci -k shows the virtio controllers:

Code:
~# lspci -k
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
        Subsystem: Red Hat, Inc. Qemu virtual machine
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
        Subsystem: Red Hat, Inc. Qemu virtual machine
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
        Subsystem: Red Hat, Inc. Qemu virtual machine
        Kernel driver in use: ata_piix
        Kernel modules: ata_piix, ata_generic
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
        Subsystem: Red Hat, Inc. QEMU Virtual Machine
        Kernel driver in use: uhci_hcd
        Kernel modules: uhci_hcd
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
        Subsystem: Red Hat, Inc. Qemu virtual machine
        Kernel driver in use: piix4_smbus
        Kernel modules: i2c_piix4
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
        Subsystem: Red Hat, Inc. Device 1100
        Kernel modules: bochs
00:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon
        Subsystem: Red Hat, Inc. Virtio memory balloon
        Kernel driver in use: virtio-pci
        Kernel modules: virtio_pci
00:05.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:08.0 Communication controller: Red Hat, Inc. Virtio console
        Subsystem: Red Hat, Inc. Virtio console
        Kernel driver in use: virtio-pci
        Kernel modules: virtio_pci
00:0a.0 SCSI storage controller: Red Hat, Inc. Virtio block device
        Subsystem: Red Hat, Inc. Virtio block device
        Kernel driver in use: virtio-pci
        Kernel modules: virtio_pci
00:12.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Subsystem: Red Hat, Inc. Virtio network device
        Kernel driver in use: virtio-pci
        Kernel modules: virtio_pci
00:1e.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:1f.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

I also reinstalled udev and systemd but this did not help.

Code:
apt install --reinstall udev systemd

I am a bit lost, without any idea how to solve this issue.
Does someone have an idea how to solve it?
 
Please share the output of pveversion -v and cat /etc/pve/qemu-server/<ID>.conf for a problematic VM.
 
Hi,

as requested:
Code:
# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.1.10 (running version: 8.1.10/4b06efb5db453f29)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-9
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-3-pve-signed: 6.5.13-3
pve-kernel-5.15.131-2-pve: 5.15.131-3
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.3
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.6
libpve-network-perl: 0.9.6
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.5-1
proxmox-backup-file-restore: 3.1.5-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.5
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.5
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.11-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

and

Code:
# cat /etc/pve/qemu-server/407.conf
acpi: 1
agent: 1,fstrim_cloned_disks=1
boot: order=virtio0;net0
cores: 2
cpu: host
kvm: 1
memory: 2048
meta: creation-qemu=8.1.2,ctime=1702296729
name: epaper-srv
net0: virtio=BC:24:11:35:46:C5,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=2135d2a4-fc72-41b7-b9c6-6d291a0b7ac1
sockets: 1
virtio0: drbd_ssd1:vm-407-disk-2,iothread=1,size=20975192K
vmgenid: bdaaa1b6-202b-43f3-af7f-ca0c6dcb
 
Is there anything different between the problematic VMs and the ones where it works (are kernel and kernel parameters the same)?

Please share the output of
Code:
sed 's/\x0/ /g' /proc/$(cat /var/run/qemu-server/407.pid)/cmdline
 
Hi Fiona,

I verified that all VMs have the same kernel (current Debian 12) and the same kernel command line (with the exception of the disk UUID).

Output as requested:

Code:
# sed 's/\x0/ /g' /proc/$(cat /var/run/qemu-server/407.pid)/cmdline
/usr/bin/kvm -id 407 -name epaper-srv,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/407.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/407.pid -daemonize -smbios type=1,uuid=2135d2a4-fc72-41b7-b9c6-6d291a0b7ac1 -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/407.vnc,password=on -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 2048 -object iothread,id=iothread-virtio0 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5 -device vmgenid,guid=bdaaa1b6-202b-43f3-af7f-ca0c6dcba75e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/407.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on -iscsi initiator-name=iqn.1993-08.org.debian:01:c9168f29e817 -drive file=/dev/drbd/by-res/vm-407-disk-2/0,if=none,id=drive-virtio0,format=raw,cache=none,aio=io_uring,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100 -netdev type=tap,id=net0,ifname=tap407i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=BC:24:11:35:46:C5,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=101 -machine type=pc+pve0
 
What is the output of journalctl -b -u qemu-guest-agent.service in the guest? You might want to check the boot log for any related errors/warnings too.

What is the output of stat /run/qemu-server/407.qga on the host?
 
Hi Fiona,

dmesg from this machine and dmesg from a working machine are very similar. There are no errors about kvm or the qemu-guest-agent in dmesg.


Output as requested:

Code:
# journalctl -b -u qemu-guest-agent.service
Apr 24 11:37:51 epaper-srv systemd[1]: qemu-guest-agent.service: Bound to unit dev-virtio\x2dports-org.qemu.guest_agent.0.device, but unit isn't active.
Apr 24 11:37:51 epaper-srv systemd[1]: Dependency failed for qemu-guest-agent.service - QEMU Guest Agent.
Apr 24 11:37:51 epaper-srv systemd[1]: qemu-guest-agent.service: Job qemu-guest-agent.service/start failed with result 'dependency'.
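A note on the odd-looking unit name: systemd derives device unit names by escaping the device path — the leading /dev/ maps to dev-, each / becomes -, and literal - characters are escaped as \x2d. The authoritative tool is systemd-escape; the snippet below is only a rough approximation of that mapping, shown because it explains where the unit name in the log comes from:

```shell
# Rough approximation of systemd's device-unit name escaping
# (the real rules are in systemd.unit(5) / systemd-escape):
# escape literal '-' as \x2d first, then map '/' to '-'.
path='virtio-ports/org.qemu.guest_agent.0'   # path below /dev
unit="dev-$(printf '%s' "$path" | sed 's/-/\\x2d/g; s,/,-,g').device"
printf '%s\n' "$unit"
```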

and

Code:
# stat /run/qemu-server/407.qga
  File: /run/qemu-server/407.qga
  Size: 0               Blocks: 0          IO Block: 4096   socket
Device: 0,25    Inode: 26928       Links: 1
Access: (0750/srwxr-x---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-24 11:37:30.712584311 +0200
Modify: 2024-04-24 11:37:16.736748112 +0200
Change: 2024-04-24 11:37:16.736748112 +0200
 Birth: 2024-04-24 11:37:16.736748112 +0200


Walter
 
What do the following show?
Code:
systemctl status dev-virtio\\x2dports-org.qemu.guest_agent.0.device
grep '' /sys/devices/pci0000:00/0000:00:08.0/virtio*/status
ls /sys/devices/pci0000:00/0000:00:08.0/virtio*/virtio-ports
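For context, the status file exposes the virtio device status bitmask from the VirtIO specification; on a fully initialized device ACKNOWLEDGE (1), DRIVER (2), DRIVER_OK (4) and FEATURES_OK (8) should all be set. A small sketch decoding such a value — 0xf here is an example; substitute the value read from sysfs:

```shell
# Decode a virtio status bitmask as read from
# /sys/bus/virtio/devices/*/status (0xf is an example value).
# Bit values are from the VirtIO spec (virtio_config.h).
status=0xf
for entry in 1:ACKNOWLEDGE 2:DRIVER 4:DRIVER_OK 8:FEATURES_OK 64:NEEDS_RESET 128:FAILED; do
    bit=${entry%%:*}
    name=${entry#*:}
    if [ $(( status & bit )) -ne 0 ]; then
        echo "$name"
    fi
done
```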

Just a guess, but you could try echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan and see if you can start the guest agent afterwards.

Does the boot log not give any hint at all, i.e. journalctl -b?
 
Hi,

Code:
# systemctl status dev-virtio\\x2dports-org.qemu.guest_agent.0.device
○ dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0
     Loaded: loaded
     Active: inactive (dead)

Apr 24 11:37:51 epaper-srv systemd[1]: Unnecessary job was removed for dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0.

Code:
# ls /sys/devices/pci0000:00/0000:00:08.0/virtio*/virtio-ports
ls: cannot access '/sys/devices/pci0000:00/0000:00:08.0/virtio*/virtio-ports': No such file or directory

These did not help:

Code:
# echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan
# systemctl restart qemu-guest-agent.service
A dependency job for qemu-guest-agent.service failed. See 'journalctl -xe' for details.
# ls /sys/devices/pci0000:00/0000:00:08.0/virtio*/virtio-ports
ls: cannot access '/sys/devices/pci0000:00/0000:00:08.0/virtio*/virtio-ports': No such file or directory
 
Now I updated to Proxmox 8.2 but am stuck with kernel 6.5; I guess the LINSTOR/DRBD modules are not kernel 6.8 ready. Unfortunately, the update did not solve this issue.

I again checked the boot log for this VM. For PCI device 0000:00:08 I only found these 4 entries:
Code:
Apr 24 18:28:15 epaper-srv kernel: pci 0000:00:08.0: [1af4:1003] type 00 class 0x078000
Apr 24 18:28:15 epaper-srv kernel: pci 0000:00:08.0: reg 0x10: [io  0xf0c0-0xf0ff]
Apr 24 18:28:15 epaper-srv kernel: pci 0000:00:08.0: reg 0x14: [mem 0xfea52000-0xfea52fff]
Apr 24 18:28:15 epaper-srv kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfd604000-0xfd607fff 64bit pref]

For QEMU I only found this line:
Code:
Apr 24 18:28:15 epaper-srv kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014

The only possible error message:
Code:
Apr 24 18:28:16 epaper-srv acpid[368]: cannot open input layer

BTW: I also found one Windows 2022 VM where the guest agent is not working, but the event log does not provide any useful information.

So no idea how to solve this issue.
 
I dug into the PCI device. For a VM with a working guest agent, I see many more entries in the PCI device directory:
Code:
# ls -l /sys/devices/pci0000\:00/0000\:00\:08.0/virtio1/
total 0
-r--r--r-- 1 root root 4096 Apr 24 19:08 device
lrwxrwxrwx 1 root root    0 Apr 24 19:08 driver -> ../../../../bus/virtio/drivers/virtio_console
-r--r--r-- 1 root root 4096 Apr 24 19:08 features
-r--r--r-- 1 root root 4096 Apr 24 19:08 modalias
drwxr-xr-x 2 root root    0 Apr 24 19:08 power
-r--r--r-- 1 root root 4096 Apr 24 19:08 status
lrwxrwxrwx 1 root root    0 Apr 24 17:51 subsystem -> ../../../../bus/virtio
-rw-r--r-- 1 root root 4096 Apr 24 17:51 uevent
-r--r--r-- 1 root root 4096 Apr 24 19:08 vendor
drwxr-xr-x 3 root root    0 Apr 24 17:51 virtio-ports

And below virtio-ports we see vport1p1:
Code:
# ls -l /sys/devices/pci0000\:00/0000\:00\:08.0/virtio1/virtio-ports/
total 0
drwxr-xr-x 3 root root 0 Apr 24 17:51 vport1p1


On the VM where the guest agent does not work and the devices are missing, I see fewer entries in the PCI device directory:
Code:
# ls -l /sys/devices/pci0000\:00/0000\:00\:08.0/virtio1/
total 0
-r--r--r-- 1 root root 4096 Apr 24 19:09 device
-r--r--r-- 1 root root 4096 Apr 24 19:09 features
-r--r--r-- 1 root root 4096 Apr 24 18:28 modalias
drwxr-xr-x 2 root root    0 Apr 24 18:28 power/
-r--r--r-- 1 root root 4096 Apr 24 19:09 status
lrwxrwxrwx 1 root root    0 Apr 24 19:09 subsystem -> ../../../../bus/virtio/
-rw-r--r-- 1 root root 4096 Apr 24 18:28 uevent
-r--r--r-- 1 root root 4096 Apr 24 19:09 vendor
The driver and virtio-ports entries are missing.

The big question is: why do the devices behave so differently?
 
I tried "echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan" on the VM where the guest agent does not work:

Code:
root@epaper-srv:~# echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan
root@epaper-srv:~# Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:05.0: PCI bridge to [bus 01]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:05.0:   bridge window [io  0xe000-0xefff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:05.0:   bridge window [mem 0xfe800000-0xfe9fffff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:05.0:   bridge window [mem 0xfd400000-0xfd5fffff 64bit pref]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1e.0: PCI bridge to [bus 02]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1e.0:   bridge window [mem 0xfe600000-0xfe7fffff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1e.0:   bridge window [mem 0xfd200000-0xfd3fffff 64bit pref]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1f.0: PCI bridge to [bus 03]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1f.0:   bridge window [io  0xc000-0xcfff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1f.0:   bridge window [mem 0xfe400000-0xfe5fffff]
Apr 24 20:43:38 epaper-srv kernel: pci 0000:00:1f.0:   bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]


Running the same command on a VM where the guest agent works, I see slightly different output:
Code:
root@icc-web:~# echo 1 > /sys/devices/pci0000:00/0000:00:08.0/rescan
root@icc-web:~# Apr 24 20:43:54 icc-web kernel: pci 0000:00:05.0: PCI bridge to [bus 01]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:05.0:   bridge window [mem 0xc1400000-0xc15fffff]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:05.0:   bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1e.0: PCI bridge to [bus 02]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1e.0:   bridge window [mem 0xc1200000-0xc13fffff]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1e.0:   bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1f.0: PCI bridge to [bus 03]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1f.0:   bridge window [mem 0xc1000000-0xc11fffff]
Apr 24 20:43:54 icc-web kernel: pci 0000:00:1f.0:   bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Apr 24 20:43:57 icc-web qemu-ga[468]: info: guest-ping called

The working VM does not have the I/O window lines, and its memory regions are different.
 
It looks to me like the PCI region below 4 GB (32-bit) is used on the VM where the guest agent is not working, while the PCI region at the end of the address space (48-bit) is used on the machine where the guest agent is working.
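To make the 32-bit vs. 48-bit observation concrete, here is a tiny sketch that classifies two of the bridge-window addresses quoted above against the 4 GiB boundary:

```shell
# Classify example BAR window addresses (taken from the dmesg output
# above) as below or above the 32-bit (4 GiB) boundary.
for addr in 0xfd604000 0x380000000000; do
    if [ $(( addr < 0x100000000 )) -eq 1 ]; then
        echo "$addr: below 4 GiB (32-bit window)"
    else
        echo "$addr: above 4 GiB (64-bit window)"
    fi
done
```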

Is there any way to move the PCI region with SeaBIOS?
Walter
 
