Live Migration of VM with heavy RAM usage fails

Hello!

We are running a PVE 7.1-4 cluster with 2 nodes (plus a third corosync QDevice) and a dedicated 10 Gbit/s cluster link (192.168.0.0/24).
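
For reference, the dedicated link is selected as the migration network in /etc/pve/datacenter.cfg, roughly like this (a sketch - the type value is an assumption, adjust to your setup):

Code:
# /etc/pve/datacenter.cfg
# send migration traffic over the dedicated 10 Gbit/s link
migration: secure,network=192.168.0.0/24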

When we try to migrate a VM with heavy RAM usage (a video surveillance server), the migration fails every time: the VM is stopped abruptly and has to be restarted again ...

We are using the latest updates from the enterprise repo and also had the problem on 6.4 (we just completed the pve6to7 upgrade successfully).

Any ideas?

Code:
task started by HA resource agent
2021-11-19 14:31:53 use dedicated network address for sending migration traffic (192.168.0.1)
2021-11-19 14:31:53 starting migration of VM 115 to node 'node1' (192.168.0.1)
2021-11-19 14:31:53 starting VM 115 on remote node 'node1'
2021-11-19 14:31:55 start remote tunnel
2021-11-19 14:31:55 ssh tunnel ver 1
2021-11-19 14:31:55 starting online/live migration on unix:/run/qemu-server/115.migrate
2021-11-19 14:31:55 set migration capabilities
2021-11-19 14:31:55 migration downtime limit: 100 ms
2021-11-19 14:31:55 migration cachesize: 2.0 GiB
2021-11-19 14:31:55 set migration parameters
2021-11-19 14:31:55 start migrate command to unix:/run/qemu-server/115.migrate
2021-11-19 14:31:56 migration active, transferred 178.7 MiB of 16.0 GiB VM-state, 293.3 MiB/s
2021-11-19 14:31:57 migration active, transferred 497.0 MiB of 16.0 GiB VM-state, 307.8 MiB/s
2021-11-19 14:31:58 migration active, transferred 797.6 MiB of 16.0 GiB VM-state, 319.8 MiB/s
2021-11-19 14:31:59 migration active, transferred 1.1 GiB of 16.0 GiB VM-state, 424.8 MiB/s
2021-11-19 14:32:06 migration active, transferred 2.8 GiB of 16.0 GiB VM-state, 315.6 MiB/s
2021-11-19 14:32:07 migration active, transferred 3.1 GiB of 16.0 GiB VM-state, 319.1 MiB/s
  ....
2021-11-19 14:32:20 migration active, transferred 6.6 GiB of 16.0 GiB VM-state, 364.4 MiB/s
2021-11-19 14:32:21 migration active, transferred 6.9 GiB of 16.0 GiB VM-state, 307.2 MiB/s
2021-11-19 14:32:42 migration active, transferred 12.2 GiB of 16.0 GiB VM-state, 274.3 MiB/s
2021-11-19 14:32:44 migration active, transferred 12.6 GiB of 16.0 GiB VM-state, 237.9 MiB/s
2021-11-19 14:32:45 migration active, transferred 12.9 GiB of 16.0 GiB VM-state, 250.1 MiB/s, VM dirties lots of memory: 262.4 MiB/s
2021-11-19 14:32:46 migration active, transferred 13.2 GiB of 16.0 GiB VM-state, 305.9 MiB/s
2021-11-19 14:32:47 migration active, transferred 13.4 GiB of 16.0 GiB VM-state, 356.3 MiB/s
2021-11-19 14:32:47 xbzrle: send updates to 63987 pages in 89.1 MiB encoded memory, cache-miss 93.40%, overflow 10581
2021-11-19 14:32:48 migration active, transferred 13.7 GiB of 16.0 GiB VM-state, 413.2 MiB/s
2021-11-19 14:32:48 xbzrle: send updates to 116032 pages in 225.8 MiB encoded memory, cache-miss 93.40%, overflow 35605
2021-11-19 14:32:59 xbzrle: send updates to 347783 pages in 790.5 MiB encoded memory, cache-miss 65.61%, overflow 121242
2021-11-19 14:33:00 migration active, transferred 16.5 GiB of 16.0 GiB VM-state, 294.0 MiB/s
2021-11-19 14:33:00 xbzrle: send updates to 388479 pages in 894.4 MiB encoded memory, cache-miss 65.61%, overflow 137694
2021-11-19 14:33:01 migration active, transferred 16.7 GiB of 16.0 GiB VM-state, 445.3 MiB/s
2021-11-19 14:33:01 xbzrle: send updates to 428831 pages in 1011.8 MiB encoded memory, cache-miss 65.61%, overflow 157000
2021-11-19 14:33:03 migration active, transferred 17.0 GiB of 16.0 GiB VM-state, 284.6 MiB/s, VM dirties lots of memory: 291.2 MiB/s
2021-11-19 14:33:03 xbzrle: send updates to 460003 pages in 1.1 GiB encoded memory, cache-miss 65.61%, overflow 172434
2021-11-19 14:33:04 migration active, transferred 17.2 GiB of 16.0 GiB VM-state, 327.5 MiB/s
2021-11-19 14:33:04 xbzrle: send updates to 498630 pages in 1.2 GiB encoded memory, cache-miss 65.61%, overflow 192016
... ...
2021-11-19 14:34:42 migration active, transferred 39.0 GiB of 16.0 GiB VM-state, 751.4 MiB/s
2021-11-19 14:34:42 xbzrle: send updates to 3977696 pages in 10.6 GiB encoded memory, cache-miss 45.54%, overflow 1766700
2021-11-19 14:34:43 migration active, transferred 39.2 GiB of 16.0 GiB VM-state, 322.4 MiB/s
2021-11-19 14:34:43 xbzrle: send updates to 4038051 pages in 10.7 GiB encoded memory, cache-miss 29.98%, overflow 1787119
2021-11-19 14:34:45 migration active, transferred 39.4 GiB of 16.0 GiB VM-state, 354.9 MiB/s, VM dirties lots of memory: 438.6 MiB/s
2021-11-19 14:34:45 xbzrle: send updates to 4099325 pages in 10.8 GiB encoded memory, cache-miss 14.73%, overflow 1811303
2021-11-19 14:34:46 migration active, transferred 39.7 GiB of 16.0 GiB VM-state, 543.4 MiB/s
2021-11-19 14:34:46 xbzrle: send updates to 4170823 pages in 10.9 GiB encoded memory, cache-miss 10.64%, overflow 1835116
2021-11-19 14:34:46 auto-increased downtime to continue migration: 800 ms
2021-11-19 14:34:47 migration active, transferred 39.9 GiB of 16.0 GiB VM-state, 337.8 MiB/s, VM dirties lots of memory: 374.8 MiB/s
2021-11-19 14:34:47 xbzrle: send updates to 4244688 pages in 11.1 GiB encoded memory, cache-miss 11.97%, overflow 1856522
2021-11-19 14:34:49 migration active, transferred 40.1 GiB of 16.0 GiB VM-state, 359.7 MiB/s, VM dirties lots of memory: 385.5 MiB/s
2021-11-19 14:34:49 xbzrle: send updates to 4316458 pages in 11.2 GiB encoded memory, cache-miss 16.51%, overflow 1881573
2021-11-19 14:34:49 auto-increased downtime to continue migration: 1600 ms
2021-11-19 14:34:50 migration active, transferred 40.3 GiB of 16.0 GiB VM-state, 1.3 GiB/s
2021-11-19 14:34:50 xbzrle: send updates to 4384104 pages in 11.4 GiB encoded memory, cache-miss 14.18%, overflow 1903430
query migrate failed: VM 115 qmp command 'query-migrate' failed - client closed connection

2021-11-19 14:34:53 query migrate failed: VM 115 qmp command 'query-migrate' failed - client closed connection
query migrate failed: VM 115 not running

2021-11-19 14:34:54 query migrate failed: VM 115 not running
query migrate failed: VM 115 not running

2021-11-19 14:34:55 query migrate failed: VM 115 not running
query migrate failed: VM 115 not running

2021-11-19 14:34:56 query migrate failed: VM 115 not running
query migrate failed: VM 115 not running

2021-11-19 14:34:57 query migrate failed: VM 115 not running
query migrate failed: VM 115 not running

2021-11-19 14:34:58 query migrate failed: VM 115 not running
2021-11-19 14:34:58 ERROR: online migrate failure - too many query migrate failures - aborting
2021-11-19 14:34:58 aborting phase 2 - cleanup resources
2021-11-19 14:34:58 migrate_cancel
2021-11-19 14:34:58 migrate_cancel error: VM 115 not running
2021-11-19 14:35:00 ERROR: migration finished with problems (duration 00:03:07)
TASK ERROR: migration problems

full log attached ...
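
Side note on the numbers in the log: the guest dirties memory at roughly the rate the link can transfer (~290 vs. ~300 MiB/s), so the remaining dirty set barely shrinks - presumably why ~40 GiB were sent for a 16 GiB VM before QEMU auto-raised the downtime limit. If I read man qm.conf correctly, the tolerated downtime can also be raised per VM (a sketch; the value is just an example):

Code:
# allow up to 1 s of downtime instead of the 0.1 s default
qm set 115 --migrate_downtime 1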
 

Attachments

  • logs.txt
    28.8 KB
Please provide the following:
- the output of pveversion -v on both nodes
- the journalctl contents from around the time the issue occurs, from both nodes (i.e., ±2 minutes around the "query migrate failed: VM 115 qmp command 'query-migrate' failed - client closed connection" message) - e.g. with the command below
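
for example, something like this on each node (timestamps taken from your log; a sketch, adjust as needed):

Code:
journalctl --since "2021-11-19 14:32:00" --until "2021-11-19 14:37:00" > journal-$(hostname).txt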
 
Syslog attached; what I have found on node2 is:

Code:
Nov 19 14:34:51 node2 QEMU[44361]: kvm: ../block/io.c:1989: bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.


Versions:

Code:
root@node1:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.4: 6.4-7
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
root@node2:~# pveversion
pve-manager/7.1-4/ca457116 (running kernel: 5.13.19-1-pve)
root@node2:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.4: 6.4-7
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
root@node2:~#
 

Attachments

  • journalctl.txt
    27.5 KB
It seems you can reproduce the issue (we had sporadic reports, but no clear reproducer yet). It would be great if you could install the pve-qemu-dbg package and then collect a backtrace by doing the following on the source node, inside a tmux or screen session:

Code:
VMID=105 # change this if needed!
VM_PID=$(cat /var/run/qemu-server/${VMID}.pid)

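# attach to the running QEMU process: let SIGUSR1/SIGPIPE pass through to
# the guest process, log everything to ./gdb.txt, and continue execution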
gdb -p $VM_PID -ex='handle SIGUSR1 nostop noprint pass' -ex='handle SIGPIPE nostop print pass' -ex='set logging on' -ex='set pagination off' -ex='cont'

Leave the gdb process running, then start the migration from a second shell or via the GUI. Once the migration fails, you should be able to enter commands at the gdb prompt in the first shell: enter thread apply all bt, followed by quit, as shown below. You should then have a gdb.txt file containing all of gdb's output, including the backtraces of the crash - please attach that here (you can start the VM normally again after collecting the traces).
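
i.e., at the prompt:

Code:
(gdb) thread apply all bt
(gdb) quit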

Please also provide the VM config together with the backtraces!
 
The package pve-qemu-dbg was not found, so I installed pve-qemu-kvm-dbg instead, and gdb manually via apt install gdb ...
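
For reference, what I ran:

Code:
apt install pve-qemu-kvm-dbg gdb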

I started the migration of VM 115, but I think something did not work - I got an early abort:

Code:
Attaching to process 52890
[New LWP 52891]
[New LWP 52893]
[New LWP 52933]
[New LWP 52935]
[New LWP 52936]
[New LWP 52937]
[New LWP 52938]
[New LWP 52939]
[New LWP 52940]
[New LWP 52941]
[New LWP 52942]
[New LWP 52943]
[New LWP 52944]
[New LWP 52945]
[New LWP 52946]
[New LWP 52949]
[New LWP 60287]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f8dc228d4f6 in __ppoll (fds=0x562bf8c1ebc0, nfds=76,
    timeout=<optimized out>, timeout@entry=0x7ffcdbb721b0,
    sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
44    ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.
Signal        Stop    Print    Pass to program    Description
SIGUSR1       No    No    Yes        User defined signal 1
Signal        Stop    Print    Pass to program    Description
SIGPIPE       No    Yes    Yes        Broken pipe
Copying output to gdb.txt.
Copying debug output to gdb.txt.
Continuing.
[New Thread 0x7f895b7f7700 (LWP 96806)]
[Thread 0x7f895b7f7700 (LWP 96806) exited]
[New Thread 0x7f895aff6700 (LWP 96807)]
[Thread 0x7f895aff6700 (LWP 96807) exited]

Thread 1 "kvm" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb)

However, the VM was stopped afterwards.

EDIT: the config of VM 115:

Code:
root@node1:~# cat /etc/pve/qemu-server/115.conf
agent: 1,fstrim_cloned_disks=1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 12
efidisk0: nas-ssd:115/vm-115-disk-0.qcow2,size=128K
hotplug: disk,network,usb,memory,cpu
lock: migrate
memory: 16384
name: server6
net0: virtio=46:D0:B4:77:09:62,bridge=vmbr0,tag=10
net1: virtio=32:C5:BF:A3:0C:A9,bridge=vmbr0,tag=200
numa: 1
onboot: 1
ostype: win10
protection: 1
scsi0: nas-ssd:115/vm-115-disk-1.qcow2,discard=on,size=128G,ssd=1
scsi1: nas-hdd:115/vm-115-disk-0.qcow2,backup=0,discard=on,size=11T,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=578c6004-5b43-4bae-b48c-433f7af36401
sockets: 1
startup: order=10
vmgenid: b1402872-9713-45a3-999d-e5f40a1f7ec8
 
Yes, that is correct - the assert in QEMU triggers the abort. You can then enter thread apply all bt at the (gdb) prompt:

Code:
GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 1089127
[New LWP 1089128]
[New LWP 1089130]
[New LWP 1089247]
[New LWP 1089248]
[New LWP 1089249]
[New LWP 1089250]
[New LWP 1089251]
[New LWP 1089252]
[New LWP 1089253]
[New LWP 1089254]
[New LWP 1089255]
[New LWP 1089256]
[New LWP 1089257]
[New LWP 1089258]
[New LWP 1089259]
[New LWP 1089260]
[New LWP 1089261]
[New LWP 1089262]
[New LWP 1089263]
[New LWP 1089264]
[New LWP 1089265]
[New LWP 1089266]
[New LWP 1089272]
[New LWP 1090466]
[New LWP 1092726]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fafb63fb4f6 in __ppoll (fds=0x5597fd9ff5c0, nfds=31, timeout=<optimized out>, timeout@entry=0x7ffd55566ae0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:44
44      ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.
Signal        Stop      Print   Pass to program Description
SIGUSR1       No        No      Yes             User defined signal 1
Signal        Stop      Print   Pass to program Description
SIGPIPE       No        Yes     Yes             Broken pipe
Copying output to gdb.txt.
Copying debug output to gdb.txt.
Continuing.

Thread 1 "kvm" received signal SIGABRT, Aborted.
0x00007fafb63fb4f6 in __ppoll (fds=0x5597fd9ff5c0, nfds=31, timeout=<optimized out>, timeout@entry=0x7ffd55566ae0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
44      in ../sysdeps/unix/sysv/linux/ppoll.c
(gdb) thread apply all bt

Thread 26 (Thread 0x7fafa90f2700 (LWP 1092726) "iou-wrk-1089251"):
#0  0x0000000000000000 in  ()

Thread 25 (Thread 0x7fad4adff700 (LWP 1090466) "kvm"):
#0  0x00007fafb64df388 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fad4adfa490, clockid=0, expected=0, futex_word=0x5597fd65ae78) at ../sysdeps/nptl/futex-internal.h:323
#1  do_futex_wait (sem=sem@entry=0x5597fd65ae78, abstime=abstime@entry=0x7fad4adfa490, clockid=0) at sem_waitcommon.c:112
#2  0x00007fafb64df4b3 in __new_sem_wait_slow (sem=sem@entry=0x5597fd65ae78, abstime=abstime@entry=0x7fad4adfa490, clockid=0) at sem_waitcommon.c:184
#3  0x00007fafb64df542 in sem_timedwait (sem=sem@entry=0x5597fd65ae78, abstime=abstime@entry=0x7fad4adfa490) at sem_timedwait.c:40
#4  0x00005597fcc37f8f in qemu_sem_timedwait (sem=sem@entry=0x5597fd65ae78, ms=ms@entry=10000) at ../util/qemu-thread-posix.c:327
#5  0x00005597fcc31e74 in worker_thread (opaque=opaque@entry=0x5597fd65ae00) at ../util/thread-pool.c:91
#6  0x00005597fcc36ff9 in qemu_thread_start (args=0x7fad4adfa530) at ../util/qemu-thread-posix.c:541
#7  0x00007fafb64d5ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#8  0x00007fafb6405def in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 24 (Thread 0x7fad499bf700 (LWP 1089272) "kvm"):
....

....

Thread 1 (Thread 0x7fafabeac080 (LWP 1089127) "kvm"):
#0  0x00007fafb63fb4f6 in __ppoll (fds=0x5597fd9ff5c0, nfds=31, timeout=<optimized out>, timeout@entry=0x7ffd55566ae0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
#1  0x00005597fcc34591 in ppoll (__ss=0x0, __timeout=0x7ffd55566ae0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=1002008185) at ../util/qemu-timer.c:348
#3  0x00005597fcc3b625 in os_host_main_loop_wait (timeout=1002008185) at ../util/main-loop.c:250
#4  main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:531
#5  0x00005597fc9ce531 in qemu_main_loop () at ../softmmu/runstate.c:726
#6  0x00005597fc71bbae in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50
(gdb)

(and then quit to properly end the VM process, so you can start the VM again)
 
That's a pity. If you do manage, please post the traces here!
 
After live migration, do I get non-released memory, or is it just wrong data?

Bash:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph-fuse: 15.2.13-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-network-perl: 0.6.2
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 

Attachments

  • Selection_052.png
    337 KB
This is not related to this thread at all - please open a new one if you have unrelated questions/issues/...
 
