Proxmox - Servers off after backup

dima1002
Member
May 23, 2021
Hi everyone,

We are running Proxmox 6.4-1 and back up to a NAS share via NFS.
When the backup fails, e.g. because the HDD is full or for some other reason, the servers are shut down or crashed the next morning. Does anyone have an idea why this happens and how to prevent it?

When the backup completes without errors, the servers keep running as well.
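Since a full backup target is one suspected trigger, here is a minimal sketch of a pre-backup free-space check one could run on the host before vzdump starts. The mount point name and the 10 GiB threshold are assumptions for this example, not taken from this setup:

```shell
#!/bin/sh
# Hypothetical pre-backup check: warn when the filesystem behind the
# backup storage runs low on free space.

free_mb() {
    # Print the free megabytes on the filesystem containing $1
    df -Pm "$1" | awk 'NR==2 {print $4}'
}

# Pass your NFS mount point as the first argument,
# e.g. /mnt/pve/backup-nas (name assumed for this example).
MOUNT="${1:-/}"
THRESHOLD_MB=10240   # example threshold: 10 GiB

if [ "$(free_mb "$MOUNT")" -lt "$THRESHOLD_MB" ]; then
    echo "WARNING: less than ${THRESHOLD_MB} MB free on ${MOUNT}" >&2
fi
```

Hooking something like this into a cron job shortly before the backup window would at least surface a full target before vzdump trips over it.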

Code:
root@pve01:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 15.2.13-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1
 
Hi,
Please post the syslog (/var/log/syslog.X) covering the time of the crash, as well as the task log of a failed backup.
 
Here is an example from VM 107.
Syslog:

Code:
Aug 17 05:00:03 pve01 pvesr[3852]: Qemu Guest Agent is not running - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 05:30:03 pve01 pvesr[8532]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 05:30:03 pve01 pvesr[8532]: Qemu Guest Agent is not running - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 06:04:23 pve01 pvesr[32489]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 06:04:23 pve01 pvesr[32489]: Qemu Guest Agent is not running - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 06:30:03 pve01 pvesr[1147]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 06:30:03 pve01 pvesr[1147]: Qemu Guest Agent is not running - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 07:00:03 pve01 pvesr[28117]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 07:00:03 pve01 pvesr[28117]: Qemu Guest Agent is not running - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 07:05:01 pve01 CRON[9107]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 17 07:09:45 pve01 vzdump[15928]: INFO: Starting Backup of VM 107 (qemu)
Aug 17 07:09:48 pve01 vzdump[15928]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Aug 17 07:12:28 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:12:38 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:12:48 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:12:58 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:13:10 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:13:20 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:13:31 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:13:41 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:13:51 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:14:01 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:14:12 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:14:24 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:14:35 pve01 pvestatd[13297]: VM 107 qmp command failed - VM 107 qmp command 'query-proxmox-support' failed - unable to connect to VM 107 qmp socket - timeout after 31 retries
Aug 17 07:14:35 pve01 kernel: [507316.689326] fwbr107i1: port 2(tap107i1) entered disabled state
Aug 17 07:14:35 pve01 kernel: [507316.689521] fwbr107i1: port 2(tap107i1) entered disabled state
Aug 17 07:14:35 pve01 NetworkManager[4597]: <info> [1629177275.6379] device (tap107i1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Aug 17 07:14:35 pve01 NetworkManager[4597]: <info> [1629177275.6380] device (fwbr107i1): bridge port tap107i1 was detached
Aug 17 07:14:35 pve01 NetworkManager[4597]: <info> [1629177275.6380] device (tap107i1): released from master device fwbr107i1
Aug 17 07:14:35 pve01 nm-dispatcher: req:1 'down' [tap107i1]: new request (1 scripts)
Aug 17 07:14:35 pve01 nm-dispatcher: req:1 'down' [tap107i1]: start running ordered scripts...
Aug 17 07:14:36 pve01 vzdump[15928]: VM 107 qmp command failed - VM 107 qmp command 'query-backup' failed - client closed connection
Aug 17 07:14:36 pve01 kernel: [507317.672235] vmbr0: port 7(tap107i0) entered disabled state
Aug 17 07:14:36 pve01 kernel: [507317.672624] vmbr0: port 7(tap107i0) entered disabled state
Aug 17 07:14:36 pve01 vzdump[15928]: VM 107 qmp command failed - VM 107 not running
Aug 17 07:14:36 pve01 vzdump[15928]: VM 107 qmp command failed - VM 107 not running
Aug 17 07:14:36 pve01 NetworkManager[4597]: <info> [1629177276.6049] device (tap107i0): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Aug 17 07:14:36 pve01 NetworkManager[4597]: <info> [1629177276.6050] device (tap107i0): released from master device vmbr0
Aug 17 07:14:36 pve01 systemd[1]: 107.scope: Succeeded.
Aug 17 07:14:36 pve01 vzdump[15928]: ERROR: Backup of VM 107 failed - VM 107 not running
Aug 17 07:14:37 pve01 qmeventd[4609]: Starting cleanup for 107
Aug 17 07:14:37 pve01 kernel: [507318.281501] fwbr107i1: port 1(fwln107i1) entered disabled state
Aug 17 07:14:37 pve01 kernel: [507318.281548] vmbr1: port 2(fwpr107p1) entered disabled state
Aug 17 07:14:37 pve01 kernel: [507318.281668] device fwln107i1 left promiscuous mode
Aug 17 07:14:37 pve01 kernel: [507318.281671] fwbr107i1: port 1(fwln107i1) entered disabled state
Aug 17 07:14:37 pve01 kernel: [507318.320109] device fwpr107p1 left promiscuous mode
Aug 17 07:14:37 pve01 kernel: [507318.320113] vmbr1: port 2(fwpr107p1) entered disabled state
Aug 17 07:14:37 pve01 NetworkManager[4597]: <info> [1629177277.2578] device (fwln107i1): released from master device fwbr107i1
Aug 17 07:14:37 pve01 NetworkManager[4597]: <info> [1629177277.2583] device (fwpr107p1): released from master device vmbr1
Aug 17 07:14:37 pve01 NetworkManager[4597]: <info> [1629177277.3927] device (fwbr107i1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Aug 17 07:14:37 pve01 nm-dispatcher: req:2 'down' [fwbr107i1]: new request (1 scripts)
Aug 17 07:14:37 pve01 nm-dispatcher: req:2 'down' [fwbr107i1]: start running ordered scripts...
Aug 17 07:14:37 pve01 qmeventd[4609]: Finished cleanup for 107
Aug 17 07:30:07 pve01 pvesr[8074]: 107-0: got unexpected replication job error - command 'zfs snapshot data1/vm-107-disk-0@__replicate_107-0_1629178200__' failed: got timeout
Aug 17 07:31:44 pve01 zed: eid=11107 class=delay pool='data0' vdev=ata-INTEL_SSDSC2KB960G8_BTYF114407Z2960CGN-part1 size=4096 offset=103806021632 priority=3 err=0 flags=0x180880 delay=34439ms bookmark=0:0:0:8
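For excerpts like the above, a small helper can cut a syslog down to just the QMP failures of one VM before posting. This is a sketch; the function name is made up for this example and the input is simply read from stdin:

```shell
#!/bin/sh
# Sketch: reduce a syslog stream to the QMP failures of a single VM.
# Usage example (path assumed):  qmp_failures 107 < /var/log/syslog.1
qmp_failures() {
    # $1 is the VMID; matches lines such as
    # "... VM 107 qmp command failed - ..."
    grep "VM $1 qmp command failed"
}
```

This keeps the interesting lines (the vzdump ERROR and the pvestatd socket timeouts) while dropping the NetworkManager and kernel bridge noise.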
 
Could you post:
- the task log of the failed backup (i.e. the output of the vzdump/backup task)
- the output of: journalctl -b -1 (after the host has been rebooted)
- the output of: qm config 107

Could you also check whether qemu-guest-agent is enabled on VM 107? (On VM 107: systemctl status qemu-guest-agent)
Is pve01 part of a cluster?
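If the agent turns out not to be running, getting it working usually takes a step on each side. A sketch, assuming a Debian-based guest and using VMID 107 from this thread:

```shell
# Inside the guest (Debian/Ubuntu assumed):
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host, the agent option must also be enabled for the VM;
# the VM then needs a full stop/start (not just a reboot) to pick it up:
qm set 107 --agent enabled=1
```

Without the agent, vzdump cannot ask the guest to freeze/thaw its filesystems, which is consistent with the repeated 'guest-ping' timeouts in the syslog above.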
 