Can't wake up VM after suspending through client OS

8192K

I am running a Manjaro VM on Proxmox 8.3.
If I suspend the VM via "Sleep" in Manjaro's main menu, the VM goes to sleep, but there is no way to wake it up again.

In Proxmox it is still listed as "online" and all I can do is call "stop" on it.

Something else that might be related: in the display settings I set "screen off" after 10 minutes of inactivity, but the screen only turns black instead of actually powering off.

I am passing through an NVIDIA GPU, but the same happens with an AMD GPU. Manjaro runs KDE on Wayland; it happens with X as well.

How can I fix these issues?
 
After some investigation this feels like it's not really possible. One solution could be for the QEMU guest agent to hook into the suspend call and then trigger qm suspend on the host, if that's even possible. Can somebody please confirm?

Another way could be to send a WOL packet to the VM's MAC address. Does that work? How could this be achieved?
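For what it's worth, the WOL packet itself is easy to construct, so that part is cheap to test. A minimal Python sketch (the function names are mine, and whether the emulated NIC actually raises a wake event for a suspended guest is a separate, open question):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet is 6 x 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (ports 7 and 9 are conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# Example (substitute the VM's net0 MAC address):
# send_wol("BC:24:11:D0:87:2E")
```

Even if the packet arrives on the bridge, the guest will only wake if QEMU's virtio/e1000 NIC model supports wake events in its current power state.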
 
Hi,
what is the output of qm status <ID> --verbose when the VM is in this state? Please also share the VM configuration. Is ACPI enabled? Is the issue also present if you don't use GPU passthrough?
 
Yes, ACPI is on.
This is the config:

Code:
agent: 1
balloon: 16384
bios: ovmf
boot: order=scsi0;net0
cores: 24
cpu: host
efidisk0: consumer-pool:vm-400-disk-0,efitype=4m,size=4M
hostpci0: mapping=GeForce3060_1,pcie=1,x-vga=1
hostpci1: mapping=USB_Linux
machine: q35
memory: 65536
meta: creation-qemu=8.1.5,ctime=1717683002
name: Manjaro
net0: virtio=BC:24:11:D0:87:2E,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: consumer-pool:vm-400-disk-1,cache=writeback,discard=on,iothread=1,size=256G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=62a5244a-a695-4a69-b4fb-0a90af31a7cf
sockets: 1
vga: none
vmgenid: 271e5f61-3c8c-4853-b83d-97aff0cf9422

Here's the output of qm status --verbose after the VM was suspended from within:

Code:
balloon: 68719476736
balloon_min: 17179869184
ballooninfo:
    actual: 68719476736
    max_mem: 68719476736
blockstat:
    efidisk0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        invalid_zone_append_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 219648
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
        zone_append_bytes: 0
        zone_append_merged: 0
        zone_append_operations: 0
        zone_append_total_time_ns: 0
    pflash0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        invalid_zone_append_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 0
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
        zone_append_bytes: 0
        zone_append_merged: 0
        zone_append_operations: 0
        zone_append_total_time_ns: 0
    scsi0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 105923
        flush_total_time_ns: 47039929102
        idle_time_ns: 311564201551
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        invalid_zone_append_operations: 0
        rd_bytes: 7173836800
        rd_merged: 0
        rd_operations: 197157
        rd_total_time_ns: 23023411977
        timed_stats:
        unmap_bytes: 121295339520
        unmap_merged: 0
        unmap_operations: 53984
        unmap_total_time_ns: 18176241447
        wr_bytes: 14958768640
        wr_highest_offset: 259780104192
        wr_merged: 0
        wr_operations: 444345
        wr_total_time_ns: 80786611126
        zone_append_bytes: 0
        zone_append_merged: 0
        zone_append_operations: 0
        zone_append_total_time_ns: 0
cpus: 24
disk: 0
diskread: 7173836800
diskwrite: 14958768640
maxdisk: 274877906944
maxmem: 68719476736
mem: 64998134668
name: Manjaro
netin: 2505006688
netout: 64684844
nics:
    tap400i0:
        netin: 2505006688
        netout: 64684844
pid: 2946
proxmox-support:
    backup-fleecing: 1
    backup-max-workers: 1
    pbs-dirty-bitmap: 1
    pbs-dirty-bitmap-migration: 1
    pbs-dirty-bitmap-savevm: 1
    pbs-library-version: 1.5.1 (UNKNOWN)
    pbs-masterkey: 1
    query-bitmap-info: 1
qmpstatus: running
running-machine: pc-q35-9.0+pve0
running-qemu: 9.0.2
shares: 1000
status: running
uptime: 9157
vmid: 400
I'd rather not test this without GPU passthrough; the last time I did, the KDE session wouldn't start any more.
 
Code:
qmpstatus: running
So the VM did not actually enter suspend (the S3 state) from QEMU's perspective. I'd either configure the sleep mechanism in the guest differently or just disable it.
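One way to check what the guest is actually doing before changing anything: see which suspend variants its kernel exposes and which one is active. A hedged sketch (the sysfs path and mem_sleep_default parameter are standard Linux; the GRUB line is only an example, adjust for your bootloader):

```shell
# Inside the guest: list supported suspend variants; the bracketed one is active.
cat /sys/power/mem_sleep 2>/dev/null || echo "mem_sleep not available here"

# Select S3 ("deep") for the current boot (requires root):
# echo deep > /sys/power/mem_sleep

# Make it the default across reboots via a kernel parameter, e.g. in GRUB:
# GRUB_CMDLINE_LINUX_DEFAULT="... mem_sleep_default=deep"

# After a suspend attempt, check which state the kernel actually entered:
# journalctl -b | grep -i "PM: suspend"
```

If the desktop's "Sleep" action ends up in s2idle instead of S3, QEMU will keep reporting the VM as running, which matches the qmpstatus above.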
 
That's what I said in the first post: it is still in the running state.
I am unable to set it to S3. cat /sys/power/mem_sleep returns "s2idle [deep]", and I can't change it to only "deep" with echo deep > /sys/power/mem_sleep. This does not seem to be a Proxmox issue then.