LXC Backups OK - VM Backups Fail

supertevran

Hello. Trying to back up a VM running openmediavault to the PBS keeps failing, no matter whether snapshot, suspend, or stop mode. I have attached the log. Does anyone have tips on what the cause might be and what to do so that VM backups run as smoothly as the LXC ones? Thanks. Proxmox Backup Server 3.1.2. Proxmox VE 8.1.3.
 


Please show your backup job, the VM config, the free local storage, and the size of your used storage areas.
 
Hello. I hope these are the right details ;-)
 

Attachments

  • Bildschirmfoto_2023-12-03_14-07-52.png
  • Bildschirmfoto_2023-12-03_14-09-15.png
  • Bildschirmfoto_2023-12-03_14-10-41.png
Hello supertevran, there are two storage areas active in your VM. Do you even have at least 50 TB of PBS storage space?
Are those 2+ TB also available locally in the root directory?
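For example, both can be checked quickly on the PVE node with the standard tools (just a generic sketch, not tied to this particular setup):
Code:
df -h /          # free space in the root filesystem
pvesm status     # usage of all storages configured in PVE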
 
Hi, I am seeing similar problems. I have 2 Proxmox hosts, both hosting VMs and containers; I currently back up to a PBS running 3.0 (latest of that line). Because I wanted to test something, I set up a virtual PBS. It runs the latest 3.1 and cannot back up any VMs. I tested several VMs with stop and snapshot mode, from 32 GB to 2 TB. No matter which VM, it aborts at "issuing fs-freeze". Both Proxmox hosts are affected. LXCs work without any problems.

Host 1 - VM 1
Code:
INFO: starting new backup job: vzdump 10002005 --mode snapshot --remove 0 --storage TEST --notification-mode auto --node pve --notes-template '{{guestname}}'
INFO: Starting Backup of VM 10002005 (qemu)
INFO: Backup started at 2023-12-04 12:35:13
INFO: status = running
INFO: VM Name: PVE-Templates
INFO: include disk 'scsi0' 'VM-Storage:vm-10002005-disk-1' 64G
INFO: include disk 'scsi1' 'VM-Storage:vm-10002005-disk-2' 64G
INFO: include disk 'efidisk0' 'VM-Storage:vm-10002005-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/10002005/2023-12-04T11:35:13Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 10002005 qmp command 'backup' failed - got timeout
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 10002005 failed - VM 10002005 qmp command 'backup' failed - got timeout
INFO: Failed at 2023-12-04 12:37:19
INFO: Backup job finished with errors
INFO: notified via target `Critical-Push-Notification`
INFO: notified via target `Default-Mailserver`
TASK ERROR: job errors

Host 2 - VM 1
Code:
INFO: starting new backup job: vzdump 10002025 --mode snapshot --notification-mode auto --notes-template '{{guestname}}' --storage TEST --node pve-net --remove 0
INFO: Starting Backup of VM 10002025 (qemu)
INFO: Backup started at 2023-12-04 12:20:55
INFO: status = running
INFO: VM Name: Univention-DC01
INFO: include disk 'scsi0' 'local-lvm:vm-10002025-disk-1' 32G
INFO: include disk 'efidisk0' 'local-lvm:vm-10002025-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/10002025/2023-12-04T11:20:55Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 10002025 qmp command 'backup' failed - got timeout
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 10002025 failed - VM 10002025 qmp command 'backup' failed - got timeout
INFO: Failed at 2023-12-04 12:23:00
INFO: Backup job finished with errors
INFO: notified via target `Critical-Push-Notifications`
INFO: notified via target `Default-Mailserver`
TASK ERROR: job errors
 
Please always post VM configs with qm config VMID; then, for example, we can also see at first glance whether the guest agent is set up or not.

But as @Falk R. already wrote, it looks very much like it is enabled but not installed.
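For example, for the two VMs from the logs above:
Code:
qm config 10002005   # on node pve
qm config 10002025   # on node pve-net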
 
Hi,
Hello. Trying to back up a VM running openmediavault to the PBS keeps failing, no matter whether snapshot, suspend, or stop mode. I have attached the log. Does anyone have tips on what the cause might be and what to do so that VM backups run as smoothly as the LXC ones? Thanks. Proxmox Backup Server 3.1.2. Proxmox VE 8.1.3.
what does the task log look like on the Proxmox VE side? Which versions are installed, pveversion -v?

Hi,
Hi, I am seeing similar problems. I have 2 Proxmox hosts, both hosting VMs and containers; I currently back up to a PBS running 3.0 (latest of that line). Because I wanted to test something, I set up a virtual PBS. It runs the latest 3.1 and cannot back up any VMs. I tested several VMs with stop and snapshot mode, from 32 GB to 2 TB. No matter which VM, it aborts at "issuing fs-freeze". Both Proxmox hosts are affected. LXCs work without any problems.

it does not abort at the freeze, though, but while initializing the backup. What does the load on the virtual PBS look like? What do the logs on the PBS side say?
 
it does not abort at the freeze, though, but while initializing the backup. What does the load on the virtual PBS look like? What do the logs on the PBS side say?

I assumed it aborts at the freeze because it hangs at fs-freeze for 2-3 minutes and then dumps the rest of the output.

There is no trace of a backup on the virtual PBS. The qemu-guest-agent is installed and active on both the PBS and the VM being backed up. Do you mean the load during the backup or at idle?

During the backup the load on the PBS is only 1-2 percent higher, disk I/O is at 0.
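If it helps, whether anything arrives on the PBS side at all can also be checked on its CLI around the time of the attempt (standard PBS tools; adjust the time window as needed):
Code:
proxmox-backup-manager task list                                      # recent and active tasks on the PBS
journalctl -u proxmox-backup-proxy.service --since "10 minutes ago"   # proxy log around the backup attempt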
 
Please always post VM configs with qm config VMID; then, for example, we can also see at first glance whether the guest agent is set up or not.

But as @Falk R. already wrote, it looks very much like it is enabled but not installed.
In my opinion this has nothing to do with the guest agent, since a backup to the physical PBS runs without problems.

Host 1 VM 1 -- Guest agent enabled and installed

Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: VM-Storage:vm-10002005-disk-0,efitype=4m,size=4M
ide2: none,media=cdrom
machine: q35
memory: 16384
meta: creation-qemu=8.0.2,ctime=1696703952
name: PVE-Templates
net0: virtio=00:15:5D:14:35:5F,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: VM-Storage:vm-10002005-disk-1,discard=on,iothread=1,size=64G
scsi1: VM-Storage:vm-10002005-disk-2,discard=on,iothread=1,size=64G
scsihw: virtio-scsi-single
smbios1: uuid=28ae727e-****-****-****-1a4079e62641
sockets: 1
startup: order=10002004,up=60,down=180
tags: Homelab
vmgenid: 6170159d-****-****-****-fcf1cd05be7a

Host 2 VM 1 -- Guest agent enabled and installed

Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
efidisk0: local-lvm:vm-10002025-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.0.2,ctime=1699549871
name: Univention-DC01
net0: virtio=BC:24:11:D4:28:62,bridge=vmbr10,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-10002025-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=064e5910-****-****-****-538a6e2e3f55
sockets: 1
startup: order=10002025,up=30,down=60
vmgenid: 2f3670a4-****-****-****-e9f90b74009f
 
I assumed it aborts at the freeze because it hangs at fs-freeze for 2-3 minutes and then dumps the rest of the output.
Okay. The error is with the backup command, though. That command is issued between freeze and thaw to initialize the backup and has a timeout of roughly 2 minutes. The freeze command itself seems to go through without errors.
ERROR: VM 10002005 qmp command 'backup' failed - got timeout

There is no trace of a backup on the virtual PBS. The qemu-guest-agent is installed and active on both the PBS and the VM being backed up. Do you mean the load during the backup or at idle?

During the backup the load on the PBS is only 1-2 percent higher, disk I/O is at 0.
Hmm, then the error already seems to happen while connecting to the server. Does the VM being backed up have anything to do with the network, e.g. a firewall or similar? What happens if you create an "empty" test VM without any disks, does the backup work there?
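A minimal sketch of such a test (hypothetical free VMID 10099; bridge and backup storage names have to be adapted to the actual setup):
Code:
qm create 10099 --name backup-test --memory 512 --net0 virtio,bridge=vmbr0   # VM without any disks
vzdump 10099 --storage TEST --mode stop                                      # back it up to the virtual PBS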
 
Hmm, then the error already seems to happen while connecting to the server. Does the VM being backed up have anything to do with the network, e.g. a firewall or similar? What happens if you create an "empty" test VM without any disks, does the backup work there?

Not directly; one is a domain controller running Univention, the other is a virtual Proxmox that automatically updates the container templates for me once a week.

Backup of a VM without disks
Code:
INFO: starting new backup job: vzdump 10000 --node pve-net --storage TEST-LAN-Backups --remove 0 --notes-template '{{guestname}}' --notification-mode auto --mode snapshot
INFO: Starting Backup of VM 10000 (qemu)
INFO: Backup started at 2023-12-07 13:38:28
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: Test-VM
INFO: include disk 'efidisk0' 'local-lvm:vm-10000-disk-0' 4M
INFO: creating Proxmox Backup Server archive 'vm/10000/2023-12-07T12:38:28Z'
INFO: starting kvm to execute backup task
ERROR: VM 10000 qmp command 'backup' failed - got timeout
INFO: aborting backup job
INFO: stopping kvm after backup task
ERROR: Backup of VM 10000 failed - VM 10000 qmp command 'backup' failed - got timeout
INFO: Failed at 2023-12-07 13:40:36
INFO: Backup job finished with errors
INFO: notified via target `Default-Mailserver`
INFO: notified via target `Critical-Push-Notifications`
TASK ERROR: job errors

qm config 10000
Code:
bios: ovmf
boot: order=ide2;net0
cores: 4
cpu: host
efidisk0: local-lvm:vm-10000-disk-0,efitype=4m,size=4M
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.1.2,ctime=1701952679
name: Test-VM
net0: virtio=BC:24:11:04:2F:2D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=82ca216d-****-****-****-0545696d9b38
sockets: 1
vmgenid: 4d428bbe-****-****-****-bcb1d22b4e1e

In the two graphs on the virtual PBS you can see that a short communication takes place at the time of the backup, but no log entry exists. There is nothing to be seen in the syslogs either:
Code:
Dec 07 13:15:21 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: write rrd data back to disk
Dec 07 13:15:22 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: starting rrd data sync
Dec 07 13:15:22 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: rrd journal successfully committed (25 files in 0.015 seconds)
Dec 07 13:17:01 pbs-test CRON[29214]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 07 13:17:01 pbs-test CRON[29215]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Dec 07 13:17:01 pbs-test CRON[29214]: pam_unix(cron:session): session closed for user root
Dec 07 13:45:22 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: write rrd data back to disk
Dec 07 13:45:22 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: starting rrd data sync
Dec 07 13:45:22 pbs-test proxmox-backup-[644]: pbs-test proxmox-backup-proxy[644]: rrd journal successfully committed (25 files in 0.015 seconds)
(Attachment: Screenshot 2023-12-07 134506.png, the two PBS graphs mentioned above)

Here are the syslogs from the Proxmox host at the time of the backup:
Code:
Dec 07 13:38:28 pve-net pvedaemon[1311]: <root@pam> starting task UPID:pve-net:00314A08:00FD1FEF:6571BCC4:vzdump:10000:root@pam:
Dec 07 13:38:28 pve-net pvedaemon[3230216]: INFO: starting new backup job: vzdump 10000 --node pve-net --storage TEST-LAN-Backups --remove 0 --notes-template '{{guestname}}' --notification-mode auto --mode snapshot
Dec 07 13:38:28 pve-net pvedaemon[3230216]: INFO: Starting Backup of VM 10000 (qemu)
Dec 07 13:38:29 pve-net systemd[1]: Started 10000.scope.
Dec 07 13:38:29 pve-net kernel: tap10000i0: entered promiscuous mode
Dec 07 13:38:30 pve-net kernel: vmbr0: port 10(fwpr10000p0) entered blocking state
Dec 07 13:38:30 pve-net kernel: vmbr0: port 10(fwpr10000p0) entered disabled state
Dec 07 13:38:30 pve-net kernel: fwpr10000p0: entered allmulticast mode
Dec 07 13:38:30 pve-net kernel: fwpr10000p0: entered promiscuous mode
Dec 07 13:38:30 pve-net kernel: vmbr0: port 10(fwpr10000p0) entered blocking state
Dec 07 13:38:30 pve-net kernel: vmbr0: port 10(fwpr10000p0) entered forwarding state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 1(fwln10000i0) entered blocking state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 1(fwln10000i0) entered disabled state
Dec 07 13:38:30 pve-net kernel: fwln10000i0: entered allmulticast mode
Dec 07 13:38:30 pve-net kernel: fwln10000i0: entered promiscuous mode
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 1(fwln10000i0) entered blocking state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 1(fwln10000i0) entered forwarding state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 2(tap10000i0) entered blocking state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 2(tap10000i0) entered disabled state
Dec 07 13:38:30 pve-net kernel: tap10000i0: entered allmulticast mode
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 2(tap10000i0) entered blocking state
Dec 07 13:38:30 pve-net kernel: fwbr10000i0: port 2(tap10000i0) entered forwarding state
Dec 07 13:38:40 pve-net pvedaemon[1311]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - got timeout
Dec 07 13:38:44 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:38:49 pve-net pvestatd[1284]: status update time (13.380 seconds)
Dec 07 13:38:57 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:01 pve-net pvestatd[1284]: status update time (12.402 seconds)
Dec 07 13:39:05 pve-net pvedaemon[1311]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:09 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:14 pve-net pvestatd[1284]: status update time (13.097 seconds)
Dec 07 13:39:22 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:30 pve-net pvestatd[1284]: status update time (15.196 seconds)
Dec 07 13:39:31 pve-net pvedaemon[1311]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:38 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:44 pve-net pvestatd[1284]: status update time (14.752 seconds)
Dec 07 13:39:52 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:57 pve-net pvedaemon[1311]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:39:59 pve-net pvestatd[1284]: status update time (14.655 seconds)
Dec 07 13:40:07 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:40:14 pve-net pvestatd[1284]: status update time (15.207 seconds)
Dec 07 13:40:22 pve-net pvedaemon[1313]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:40:22 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:40:26 pve-net pvestatd[1284]: status update time (11.924 seconds)
Dec 07 13:40:34 pve-net pvestatd[1284]: VM 10000 qmp command failed - VM 10000 qmp command 'query-proxmox-support' failed - unable to connect to VM 10000 qmp socket - timeout after 51 retries
Dec 07 13:40:35 pve-net pvedaemon[3230216]: VM 10000 qmp command failed - VM 10000 qmp command 'backup' failed - got timeout
 
Could you install the debugger and debug symbols with
Code:
apt install pve-qemu-kvm-dbgsym gdb libproxmox-backup-qemu0-dbgsym
and then, as soon as it gets stuck, run
Code:
gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/10000.pid)
to generate debug traces?

If we are lucky, we will see more precisely where it hangs.
 

Output during a backup attempt:
Code:
[New LWP 3280145]
[New LWP 3280214]
[New LWP 3280215]
[New LWP 3280216]
[New LWP 3280217]
[New LWP 3280218]
[New LWP 3280221]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f25af155156 in __ppoll (fds=0x55c99bb7ee00, nfds=9, timeout=<optimized out>, timeout@entry=0x7ffc74612aa0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
42      ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.

Thread 8 (Thread 0x7f24993bf6c0 (LWP 3280221) "vnc_worker"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c99c395f98) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c99c395f98, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f25af0dee0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c99c395f98, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f25af0e1468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c99c395fa8, cond=0x55c99c395f70) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c99c395f70, mutex=mutex@entry=0x55c99c395fa8) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c99a81561b in qemu_cond_wait_impl (cond=0x55c99c395f70, mutex=0x55c99c395fa8, file=0x55c99a8d9cf4 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
#6  0x000055c99a2a2eeb in vnc_worker_thread_loop (queue=queue@entry=0x55c99c395f70) at ../ui/vnc-jobs.c:248
#7  0x000055c99a2a3b88 in vnc_worker_thread (arg=arg@entry=0x55c99c395f70) at ../ui/vnc-jobs.c:362
#8  0x000055c99a814b08 in qemu_thread_start (args=0x55c99c698a50) at ../util/qemu-thread-posix.c:541
#9  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7f25a987d6c0 (LWP 3280218) "CPU 3/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c99bf2c82c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c99bf2c82c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f25af0dee0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c99bf2c82c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f25af0e1468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c99b150b00 <qemu_global_mutex>, cond=0x55c99bf2c800) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c99bf2c800, mutex=mutex@entry=0x55c99b150b00 <qemu_global_mutex>) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c99a81561b in qemu_cond_wait_impl (cond=0x55c99bf2c800, mutex=0x55c99b150b00 <qemu_global_mutex>, file=0x55c99a98dd33 "../softmmu/cpus.c", line=424) at ../util/qemu-thread-posix.c:225
#6  0x000055c99a47befe in qemu_wait_io_event (cpu=cpu@entry=0x55c99bf235c0) at ../softmmu/cpus.c:424
#7  0x000055c99a67d310 in kvm_vcpu_thread_fn (arg=arg@entry=0x55c99bf235c0) at ../accel/kvm/kvm-accel-ops.c:56
#8  0x000055c99a814b08 in qemu_thread_start (args=0x55c99bf2c840) at ../util/qemu-thread-posix.c:541
#9  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7f25aa07e6c0 (LWP 3280217) "CPU 2/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c99bf22bdc) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c99bf22bdc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f25af0dee0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c99bf22bdc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f25af0e1468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c99b150b00 <qemu_global_mutex>, cond=0x55c99bf22bb0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c99bf22bb0, mutex=mutex@entry=0x55c99b150b00 <qemu_global_mutex>) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c99a81561b in qemu_cond_wait_impl (cond=0x55c99bf22bb0, mutex=0x55c99b150b00 <qemu_global_mutex>, file=0x55c99a98dd33 "../softmmu/cpus.c", line=424) at ../util/qemu-thread-posix.c:225
#6  0x000055c99a47befe in qemu_wait_io_event (cpu=cpu@entry=0x55c99bf19ab0) at ../softmmu/cpus.c:424
#7  0x000055c99a67d310 in kvm_vcpu_thread_fn (arg=arg@entry=0x55c99bf19ab0) at ../accel/kvm/kvm-accel-ops.c:56
#8  0x000055c99a814b08 in qemu_thread_start (args=0x55c99bf22bf0) at ../util/qemu-thread-posix.c:541
#9  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 5 (Thread 0x7f25aa87f6c0 (LWP 3280216) "CPU 1/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c99bf190cc) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c99bf190cc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f25af0dee0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c99bf190cc, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f25af0e1468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c99b150b00 <qemu_global_mutex>, cond=0x55c99bf190a0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c99bf190a0, mutex=mutex@entry=0x55c99b150b00 <qemu_global_mutex>) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c99a81561b in qemu_cond_wait_impl (cond=0x55c99bf190a0, mutex=0x55c99b150b00 <qemu_global_mutex>, file=0x55c99a98dd33 "../softmmu/cpus.c", line=424) at ../util/qemu-thread-posix.c:225
#6  0x000055c99a47befe in qemu_wait_io_event (cpu=cpu@entry=0x55c99bf100a0) at ../softmmu/cpus.c:424
#7  0x000055c99a67d310 in kvm_vcpu_thread_fn (arg=arg@entry=0x55c99bf100a0) at ../accel/kvm/kvm-accel-ops.c:56
#8  0x000055c99a814b08 in qemu_thread_start (args=0x55c99bf190e0) at ../util/qemu-thread-posix.c:541
#9  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 4 (Thread 0x7f25ab8c66c0 (LWP 3280215) "CPU 0/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c99bee576c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c99bee576c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f25af0dee0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c99bee576c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f25af0e1468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c99b150b00 <qemu_global_mutex>, cond=0x55c99bee5740) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c99bee5740, mutex=mutex@entry=0x55c99b150b00 <qemu_global_mutex>) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c99a81561b in qemu_cond_wait_impl (cond=0x55c99bee5740, mutex=0x55c99b150b00 <qemu_global_mutex>, file=0x55c99a98dd33 "../softmmu/cpus.c", line=424) at ../util/qemu-thread-posix.c:225
#6  0x000055c99a47befe in qemu_wait_io_event (cpu=cpu@entry=0x55c99bedeba0) at ../softmmu/cpus.c:424
#7  0x000055c99a67d310 in kvm_vcpu_thread_fn (arg=arg@entry=0x55c99bedeba0) at ../accel/kvm/kvm-accel-ops.c:56
#8  0x000055c99a814b08 in qemu_thread_start (args=0x55c99bee5780) at ../util/qemu-thread-posix.c:541
#9  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 3 (Thread 0x7f25ac42c4c0 (LWP 3280214) "vhost-3280144"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 2 (Thread 0x7f25ac1c86c0 (LWP 3280145) "call_rcu"):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000055c99a815c8a in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ./include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x55c99b168928 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464
#3  0x000055c99a81f592 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:278
#4  0x000055c99a814b08 in qemu_thread_start (args=0x55c99bb78ba0) at ../util/qemu-thread-posix.c:541
#5  0x00007f25af0e2044 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007f25af16261c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7f25ac42c4c0 (LWP 3280144) "kvm"):
#0  0x00007f25af155156 in __ppoll (fds=0x55c99bb7ee00, nfds=9, timeout=<optimized out>, timeout@entry=0x7ffc74612aa0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
#1  0x000055c99a82ad8e in ppoll (__ss=0x0, __timeout=0x7ffc74612aa0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:64
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=2999037447) at ../util/qemu-timer.c:351
#3  0x000055c99a82867e in os_host_main_loop_wait (timeout=2999037447) at ../util/main-loop.c:308
#4  main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:592
#5  0x000055c99a486127 in qemu_main_loop () at ../softmmu/runstate.c:732
#6  0x000055c99a6861e6 in qemu_default_main () at ../softmmu/main.c:37
#7  0x00007f25af0801ca in __libc_start_call_main (main=main@entry=0x55c99a277460 <main>, argc=argc@entry=57, argv=argv@entry=0x7ffc74612cb8) at ../sysdeps/nptl/libc_start_call_main.h:58
#8  0x00007f25af080285 in __libc_start_main_impl (main=0x55c99a277460 <main>, argc=57, argv=0x7ffc74612cb8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc74612ca8) at ../csu/libc-start.c:360
#9  0x000055c99a279081 in _start ()
[Inferior 1 (process 3280144) detached]
 
Code:
efidisk0: local-lvm:vm-10000-disk-0,efitype=4m,size=4M
...
net0: virtio=BC:24:11:04:2F:2D,bridge=vmbr0,firewall=1
What if you leave out that disk as well? What if you leave out net0?

Unfortunately there is nothing really conspicuous in there (except possibly that Thread 3 is in vhost, hence my suggestion to try leaving out net0).
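A rough sketch of how that could be tried with the test VM from above (VMID 10000; the devices are only detached for this test):
Code:
qm set 10000 --delete net0       # remove the network device
qm set 10000 --delete efidisk0   # detach the EFI disk as well
vzdump 10000 --storage TEST-LAN-Backups --mode stop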
 
If there are no disks at all, the backup does not go through the libproxmox-backup-qemu0 library but directly through the client. The message starting new backup on datastore on the PBS side already appears when the backup protocol is initialized, so there already seem to be problems right at the start.

How exactly is the PBS storage configured? Which versions are installed, i.e. pveversion -v and proxmox-backup-manager versions --verbose? I cannot reproduce the problem here.
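One way to rule out basic connectivity or permission problems would be to talk to the virtual PBS directly from the PVE node with the backup client (repository and namespace taken from the storage config posted below; the client prompts for the password of backup@pbs):
Code:
proxmox-backup-client list --repository backup@pbs@10.10.2.15:Backup --ns Network/LAN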
 
If there are no disks at all, the backup does not go through the libproxmox-backup-qemu0 library but directly through the client. The message starting new backup on datastore on the PBS side already appears when the backup protocol is initialized, so there already seem to be problems right at the start.

How exactly is the PBS storage configured? Which versions are installed, i.e. pveversion -v and proxmox-backup-manager versions --verbose? I cannot reproduce the problem here.
Storage config
Code:
pbs: Y-LAN-Backups
datastore Backup
server 10.10.2.15
content backup
fingerprint *****************************************************************
namespace Network/LAN
prune-backups keep-all=1
username backup@pbs

pveversion -v
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-6
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve4

proxmox-backup-manager versions --verbose
Code:
proxmox-backup                     3.0.1        running kernel: 6.5.11-6-pve
proxmox-backup-server              3.1.2-1      running version: 3.1.2     
proxmox-kernel-helper              8.1.0                                   
pve-kernel-6.2                     8.0.5                                   
proxmox-kernel-6.5                 6.5.11-7                                
proxmox-kernel-6.5.11-6-pve-signed 6.5.11-6                                
proxmox-kernel-6.2.16-19-pve       6.2.16-19                               
proxmox-kernel-6.2                 6.2.16-20                               
pve-kernel-6.2.16-3-pve            6.2.16-3                                
ifupdown2                          3.2.0-1+pmx7                            
libjs-extjs                        7.0.0-4                                 
proxmox-backup-docs                3.1.2-1                                 
proxmox-backup-client              3.1.2-1                                 
proxmox-mail-forward               0.2.2                                   
proxmox-mini-journalreader         1.4.0                                   
proxmox-offline-mirror-helper      unknown                                 
proxmox-widget-toolkit             4.1.3                                   
pve-xtermjs                        5.3.0-2                                 
smartmontools                      7.3-pve1                                
zfsutils-linux                     2.2.2-pve1
 
For info, to help reproduce this, the setup is as follows:

PVE-1 (HP DL360 G9, 2x Xeon E5-2650L v3, 128 GB RAM, 2x 240 GB HW-R1 Teamgroup with EXT4, 6x 1.92 TB HW-R10 Samsung PM893 as VM store, LVM-Thin)
- VM 10002005 (PVE-Templates)
- VM 10002015 (PBS-V)


PVE-2 (HP DL360 G9, 1x Xeon E5-2630L v3, 16 GB RAM, 2x 240 GB HW-R1 870 EVO with EXT4 for system and VM storage)
- VM 10002025
- VM 10000

PBS-P (HP DL360 G9, 2x Xeon E5-2630L v3, 64 GB RAM, 2x 240 GB Kingston ZFS R1 for the system, 4x 2 TB 970 EVO Plus ZFS R10)

Server networking: 10G DAC cables.
Backup of VMs from PVE-1 to PBS-P: possible
Backup of LXCs from PVE-1 to PBS-P: possible

Backup of VMs from PVE-1 to PBS-V: NOT possible
Backup of LXCs from PVE-1 to PBS-V: possible

Backup of VMs from PVE-2 to PBS-P: possible
Backup of LXCs from PVE-2 to PBS-P: possible

Backup of VMs from PVE-2 to PBS-V: NOT possible
Backup of LXCs from PVE-2 to PBS-V: possible
 
