Backup kills VM disk

abader

Member
Hi everyone,

I ran into a strange problem yesterday evening: while backing up a VM to the PBS, the backup shredded the VM disk.

Quick background on the setup:

The VM is a Windows Server 2019 VM. Its disk is a QCOW2 image on a GlusterFS storage backed by SSDs and ZFS.
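For context, the relevant part of the VM config looks roughly like this (a sketch from memory; the storage name and disk options are approximated, while the VMID and the virtio0 drive match the log below):

# excerpt from /etc/pve/qemu-server/904.conf (storage name "gluster-ssd" and options approximated)
agent: 1
ostype: win10
virtio0: gluster-ssd:904/vm-904-disk-0.qcow2,cache=none,size=100G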

Overall performance is very good, and the other backups also ran without any problems. I'd be grateful for any ideas :)

@EDIT: Small addition: after about 10 minutes of the VM hanging, I requested a shutdown through the HA status; the service then went to ERROR. Afterwards I removed the lock from the config. By that point, though, the VM disk was apparently already toast.
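I did this through the GUI; the CLI equivalents would be roughly the following (a sketch, assuming VMID 904 from the log):

# request a stop of the HA-managed VM (corresponds to the 'hastop' tasks in the log)
ha-manager set vm:904 --state stopped
# remove the stale 'lock: backup' entry from the VM config
qm unlock 904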

PS: I was able to repair the disk with a qemu-img repair; however, directories etc. are damaged.
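The repair was a qemu-img check with repair enabled, something like this (the image path is hypothetical, adapt it to your Gluster mount):

# check the qcow2 image and repair all fixable errors (leaks and corruptions)
qemu-img check -r all /mnt/pve/<gluster-storage>/images/904/vm-904-disk-0.qcow2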


Snip---

Aug 10 03:05:00 pve-fal-01 pvescheduler[543219]: <root@pam> starting task UPID:pve-fal-01:000849FF:0569BB0B:62F3043C:vzdump::root@pam:
Aug 10 03:05:00 pve-fal-01 pvescheduler[543231]: INFO: starting new backup job: vzdump 904 906 903 905 902 --storage bkp-fal-01 --quiet 1 --mode snapshot --mailnotification failure --mailto support@anba-it.de
Aug 10 03:05:00 pve-fal-01 pvescheduler[543231]: INFO: Starting Backup of VM 902 (qemu)
Aug 10 03:05:01 pve-fal-01 CRON[543292]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 10 03:05:01 pve-fal-01 CRON[543293]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 10 03:05:01 pve-fal-01 CRON[543292]: pam_unix(cron:session): session closed for user root
Aug 10 03:05:03 pve-fal-01 pvescheduler[543231]: VM 902 qmp command failed - VM 902 qmp command 'guest-ping' failed - got timeout
Aug 10 03:05:07 pve-fal-01 pvescheduler[543231]: INFO: Finished Backup of VM 902 (00:00:07)
Aug 10 03:05:07 pve-fal-01 pvescheduler[543231]: INFO: Starting Backup of VM 903 (qemu)
Aug 10 03:05:10 pve-fal-01 pvescheduler[543231]: INFO: Finished Backup of VM 903 (00:00:03)
Aug 10 03:05:10 pve-fal-01 pvescheduler[543231]: INFO: Starting Backup of VM 904 (qemu)
Aug 10 03:09:57 pve-fal-01 pvedaemon[323884]: <root@pam> starting task UPID:pve-fal-01:0008FBA1:056A2F35:62F30565:qmdestroy:101:root@pam:
Aug 10 03:09:57 pve-fal-01 pvedaemon[588705]: destroy VM 101: UPID:pve-fal-01:0008FBA1:056A2F35:62F30565:qmdestroy:101:root@pam:
Aug 10 03:10:00 pve-fal-01 pvedaemon[323884]: <root@pam> end task UPID:pve-fal-01:0008FBA1:056A2F35:62F30565:qmdestroy:101:root@pam: OK
Aug 10 03:10:01 pve-fal-01 cron[5609]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Aug 10 03:10:01 pve-fal-01 CRON[589942]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 10 03:10:01 pve-fal-01 CRON[589943]: (root) CMD (test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r)
Aug 10 03:10:01 pve-fal-01 CRON[589942]: pam_unix(cron:session): session closed for user root
Aug 10 03:10:21 pve-fal-01 pvedaemon[323883]: VM 904 qmp command failed - VM 904 qmp command 'guest-ping' failed - got timeout
Aug 10 03:10:43 pve-fal-01 pvedaemon[323884]: VM 904 qmp command failed - VM 904 qmp command 'guest-ping' failed - got timeout
Aug 10 03:11:04 pve-fal-01 pvedaemon[323883]: VM 904 qmp command failed - VM 904 qmp command 'guest-ping' failed - unable to connect to VM 904 qga socket - timeout after 31 retries
Aug 10 03:11:23 pve-fal-01 pvedaemon[323884]: VM 904 qmp command failed - VM 904 qmp command 'guest-ping' failed - unable to connect to VM 904 qga socket - timeout after 31 retries
Aug 10 03:11:27 pve-fal-01 pvedaemon[607582]: starting vnc proxy UPID:pve-fal-01:0009455E:056A5227:62F305BF:vncproxy:904:root@pam:
Aug 10 03:11:27 pve-fal-01 pvedaemon[323883]: <root@pam> starting task UPID:pve-fal-01:0009455E:056A5227:62F305BF:vncproxy:904:root@pam:
Aug 10 03:11:42 pve-fal-01 pvedaemon[323884]: <root@pam> successful auth for user 'root@pam'
Aug 10 03:11:46 pve-fal-01 pvedaemon[323883]: <root@pam> end task UPID:pve-fal-01:0009455E:056A5227:62F305BF:vncproxy:904:root@pam: OK
----------
Aug 10 03:17:34 pve-fal-01 pvedaemon[669363]: starting vnc proxy UPID:pve-fal-01:000A36B3:056AE1BB:62F3072E:vncproxy:904:root@pam:
Aug 10 03:17:48 pve-fal-01 pvedaemon[323884]: <root@pam> end task UPID:pve-fal-01:000A36B3:056AE1BB:62F3072E:vncproxy:904:root@pam: OK
Aug 10 03:18:05 pve-fal-01 pvescheduler[543231]: closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927.
Aug 10 03:18:05 pve-fal-01 pvescheduler[543231]: VM 904 qmp command failed - interrupted by signal
Aug 10 03:18:05 pve-fal-01 pvescheduler[543231]: VM 904 qmp command failed - VM 904 qmp command 'backup' failed - Device 'drive-virtio0' has no medium
Aug 10 03:18:57 pve-fal-01 pvedaemon[323884]: <root@pam> starting task UPID:pve-fal-01:000A6A1C:056B0229:62F30781:vncproxy:904:root@pam:
Aug 10 03:18:57 pve-fal-01 pvedaemon[682524]: starting vnc proxy UPID:pve-fal-01:000A6A1C:056B0229:62F30781:vncproxy:904:root@pam:
Aug 10 03:19:25 pve-fal-01 pvedaemon[323884]: <root@pam> end task UPID:pve-fal-01:000A6A1C:056B0229:62F30781:vncproxy:904:root@pam: OK
Aug 10 03:19:36 pve-fal-01 pvedaemon[687778]: starting vnc proxy UPID:pve-fal-01:000A7EA2:056B115C:62F307A8:vncproxy:904:root@pam:
Aug 10 03:19:36 pve-fal-01 pvedaemon[323883]: <root@pam> starting task UPID:pve-fal-01:000A7EA2:056B115C:62F307A8:vncproxy:904:root@pam:
Aug 10 03:19:56 pve-fal-01 pvedaemon[323884]: <root@pam> starting task UPID:pve-fal-01:000A8909:056B1910:62F307BC:hastop:904:root@pam:
Aug 10 03:19:57 pve-fal-01 pvedaemon[323884]: <root@pam> end task UPID:pve-fal-01:000A8909:056B1910:62F307BC:hastop:904:root@pam: OK
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692313]: stopping service vm:904 (timeout=60)
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692315]: shutdown VM 904: UPID:pve-fal-01:000A905B:056B1C83:62F307C5:qmshutdown:904:root@pam:
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692313]: <root@pam> starting task UPID:pve-fal-01:000A905B:056B1C83:62F307C5:qmshutdown:904:root@pam:
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692315]: VM is locked (backup)
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692313]: <root@pam> end task UPID:pve-fal-01:000A905B:056B1C83:62F307C5:qmshutdown:904:root@pam: VM is locked (backup)
Aug 10 03:20:05 pve-fal-01 pve-ha-lrm[692313]: unable to stop stop service vm:904 (still running)
Aug 10 03:20:05 pve-fal-01 pvedaemon[323883]: <root@pam> end task UPID:pve-fal-01:000A7EA2:056B115C:62F307A8:vncproxy:904:root@pam: OK
Aug 10 03:20:14 pve-fal-01 pvedaemon[693613]: starting termproxy UPID:pve-fal-01:000A956D:056B2010:62F307CE:vncshell::root@pam:
Aug 10 03:20:14 pve-fal-01 pvedaemon[323883]: <root@pam> starting task UPID:pve-fal-01:000A956D:056B2010:62F307CE:vncshell::root@pam:
Aug 10 03:20:14 pve-fal-01 pvedaemon[323884]: <root@pam> successful auth for user 'root@pam'
Aug 10 03:20:14 pve-fal-01 login[693623]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Aug 10 03:20:14 pve-fal-01 systemd-logind[4015]: New session 5719 of user root.
Aug 10 03:20:14 pve-fal-01 systemd[1]: Started Session 5719 of user root.
Aug 10 03:20:14 pve-fal-01 login[693630]: ROOT LOGIN on '/dev/pts/2'
Aug 10 03:20:15 pve-fal-01 pve-ha-lrm[693743]: service vm:904 is in an error state and needs manual intervention. Look up 'ERROR RECOVERY' in the documentation.
Aug 10 03:20:57 pve-fal-01 systemd-logind[4015]: Session 5719 logged out. Waiting for processes to exit.
Aug 10 03:20:57 pve-fal-01 systemd[1]: session-5719.scope: Succeeded.
Aug 10 03:20:57 pve-fal-01 systemd-logind[4015]: Removed session 5719.
Aug 10 03:20:57 pve-fal-01 pvedaemon[323883]: <root@pam> end task UPID:pve-fal-01:000A956D:056B2010:62F307CE:vncshell::root@pam: OK
Aug 10 03:21:16 pve-fal-01 pve-ha-lrm[702770]: stopping service vm:904
Aug 10 03:21:16 pve-fal-01 pve-ha-lrm[702773]: shutdown VM 904: UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:
Aug 10 03:21:16 pve-fal-01 pve-ha-lrm[702770]: <root@pam> starting task UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:
Aug 10 03:21:21 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:24 pve-fal-01 pvedaemon[703749]: starting vnc proxy UPID:pve-fal-01:000ABD05:056B3B49:62F30814:vncproxy:904:root@pam:
Aug 10 03:21:24 pve-fal-01 pvedaemon[323883]: <root@pam> starting task UPID:pve-fal-01:000ABD05:056B3B49:62F30814:vncproxy:904:root@pam:
Aug 10 03:21:26 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:31 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:32 pve-fal-01 pveproxy[323967]: worker exit
Aug 10 03:21:32 pve-fal-01 pveproxy[6007]: worker 323967 finished
Aug 10 03:21:32 pve-fal-01 pveproxy[6007]: starting 1 worker(s)
Aug 10 03:21:32 pve-fal-01 pveproxy[6007]: worker 705102 started
Aug 10 03:21:36 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:38 pve-fal-01 pvedaemon[323883]: <root@pam> starting task UPID:pve-fal-01:000AC4DB:056B40E9:62F30822:hastop:904:root@pam:
Aug 10 03:21:39 pve-fal-01 pvedaemon[323883]: <root@pam> end task UPID:pve-fal-01:000AC4DB:056B40E9:62F30822:hastop:904:root@pam: OK
Aug 10 03:21:41 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:46 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:51 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:56 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:21:59 pve-fal-01 pve-ha-lrm[702773]: VM 904 qmp command failed - received interrupt
Aug 10 03:21:59 pve-fal-01 pve-ha-lrm[702773]: VM quit/powerdown failed - terminating now with SIGTERM
Aug 10 03:21:59 pve-fal-01 QEMU[359585]: kvm: terminating on signal 15 from pid 702773 (task UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:)
Aug 10 03:22:01 pve-fal-01 pve-ha-lrm[702770]: Task 'UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam:' still active, waiting
Aug 10 03:22:04 pve-fal-01 pve-ha-lrm[702770]: <root@pam> end task UPID:pve-fal-01:000AB935:056B381B:62F3080C:qmshutdown:904:root@pam: unexpected status
Aug 10 03:22:04 pve-fal-01 pve-ha-lrm[702770]: unable to stop stop service vm:904 (still running)
Aug 10 03:22:06 pve-fal-01 pvedaemon[323882]: VM 904 qmp command failed - VM 904 qmp command 'query-proxmox-support' failed - unable to connect to VM 904 qmp socket - timeout after 31 retries
Aug 10 03:22:06 pve-fal-01 kernel: fwbr904i0: port 2(tap904i0) entered disabled state
Aug 10 03:22:06 pve-fal-01 kernel: fwbr904i0: port 1(fwln904i0) entered disabled state
Aug 10 03:22:06 pve-fal-01 kernel: vmbr0: port 20(fwpr904p0) entered disabled state
Aug 10 03:22:06 pve-fal-01 kernel: device fwln904i0 left promiscuous mode
Aug 10 03:22:06 pve-fal-01 kernel: fwbr904i0: port 1(fwln904i0) entered disabled state
Aug 10 03:22:06 pve-fal-01 kernel: device fwpr904p0 left promiscuous mode
Aug 10 03:22:06 pve-fal-01 kernel: vmbr0: port 20(fwpr904p0) entered disabled state
Aug 10 03:22:06 pve-fal-01 qmeventd[323050]: read: Connection reset by peer
Aug 10 03:22:06 pve-fal-01 pvestatd[5956]: VM 904 qmp command failed - VM 904 qmp command 'query-proxmox-support' failed - unable to connect to VM 904 qmp socket - No such file or directory
Aug 10 03:22:06 pve-fal-01 systemd[1]: 904.scope: Succeeded.
Aug 10 03:22:06 pve-fal-01 systemd[1]: 904.scope: Consumed 7min 13.916s CPU time.
Aug 10 03:22:06 pve-fal-01 pvedaemon[323883]: <root@pam> end task UPID:pve-fal-01:000ABD05:056B3B49:62F30814:vncproxy:904:root@pam: OK
Aug 10 03:22:07 pve-fal-01 pvestatd[5956]: status update time (6.416 seconds)
Aug 10 03:22:07 pve-fal-01 qmeventd[709935]: Starting cleanup for 904
 