PBS under PVE "crashes"?

DasMoritz

Member
Jun 6, 2022
Hi everyone,

I recently set up a PVE host on an HP MicroServer Gen8; PBS also runs on that PVE environment.
My setup is as follows:

pve1 environment on an HP EliteDesk with various VMs (Ubuntu, XPenology, Windows, etc.)
pve2 environment on the HP MicroServer Gen8, which runs only PBS.

I wanted to take a first backup of VMs from pve1 towards pve2 (PBS), but after a few minutes the backup aborts and PBS is stopped.

The backup log on pve1 looks like this:
Code:
INFO: starting new backup job: vzdump 101 --notes-template '{{guestname}}' --node pve --remove 0 --mode snapshot --storage PBS
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2023-09-14 08:12:58
INFO: status = running
INFO: VM Name: Synology
INFO: include disk 'sata0' 'ZFS-Pool:vm-101-disk-0' 1G
INFO: include disk 'sata1' 'ZFS-Pool:vm-101-disk-1' 11G
INFO: include disk 'sata2' 'ZFS-Pool:vm-101-disk-2' 6000G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2023-09-14T06:12:58Z'
INFO: started backup task 'f9cd17df-c353-4aa7-b36c-32b0bd818754'
INFO: resuming VM again
INFO: sata0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: sata1: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: sata2: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO:   0% (800.0 MiB of 5.9 TiB) in 3s, read: 266.7 MiB/s, write: 124.0 MiB/s
INFO:   1% (60.1 GiB of 5.9 TiB) in 7m 54s, read: 129.0 MiB/s, write: 120.0 MiB/s
INFO:   2% (120.3 GiB of 5.9 TiB) in 17m 9s, read: 111.0 MiB/s, write: 110.9 MiB/s
INFO:   3% (180.4 GiB of 5.9 TiB) in 26m 31s, read: 109.6 MiB/s, write: 109.1 MiB/s
INFO:   3% (221.5 GiB of 5.9 TiB) in 48m 29s, read: 31.9 MiB/s, write: 31.9 MiB/s
ERROR: backup write data failed: command error: protocol canceled
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 101 failed - backup write data failed: command error: protocol canceled
INFO: Failed at 2023-09-14 09:01:27
INFO: Backup job finished with errors
TASK ERROR: job errors

Unfortunately, I can't find a log on pve2 / PBS :-(

Does anyone have a good tip?

Thanks,
Moritz
 
there should be a corresponding task in the PBS GUI that might give more information. otherwise, the exact versions, the storage on both sides, and the system log for the duration of the backup ("journalctl --since .. --until ..", from both the PVE and the PBS host) would also be of interest.
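As a minimal sketch of collecting that system log window, a tiny helper that builds the journalctl command for a given time range (the timestamps below are taken from this thread's backup window and are only example values; run the resulting command on both the PVE and the PBS host):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the journalctl command for a given time window,
# to be run on both the PVE host and the PBS host.
journal_window() {
  printf 'journalctl --since "%s" --until "%s"\n' "$1" "$2"
}

# Example window around the failed backup in this thread - adjust as needed.
journal_window "2023-09-14 08:00:00" "2023-09-14 09:10:00"
```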
 
Morning,

Thank you!

On the PBS I see the following:
Code:
2023-09-14T08:12:58+02:00: starting new backup on datastore 'Datastore': "vm/101/2023-09-14T06:12:58Z"
2023-09-14T08:12:58+02:00: GET /previous: 400 Bad Request: no valid previous backup
2023-09-14T08:12:58+02:00: created new fixed index 1 ("vm/101/2023-09-14T06:12:58Z/drive-sata0.img.fidx")
2023-09-14T08:12:58+02:00: created new fixed index 2 ("vm/101/2023-09-14T06:12:58Z/drive-sata1.img.fidx")
2023-09-14T08:12:58+02:00: created new fixed index 3 ("vm/101/2023-09-14T06:12:58Z/drive-sata2.img.fidx")
2023-09-14T08:12:58+02:00: add blob "/vm/101/2023-09-14T06:12:58Z/qemu-server.conf.blob" (407 bytes, comp: 407)

pve1: Virtual Environment 7.3-3
Code:
root@pve:~# journalctl --since 09:00:00 --until 10:00:00
-- Journal begins at Sun 2023-02-05 08:01:37 CET, ends at Thu 2023-09-14 10:53:28 CEST. --
Sep 14 09:00:01 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:07 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:10 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:10 pve pvestatd[3478]: status update time (11.237 seconds)
Sep 14 09:00:18 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:21 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:22 pve pvestatd[3478]: status update time (11.229 seconds)
Sep 14 09:00:27 pve pvedaemon[2201620]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:30 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:32 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:32 pve pvestatd[3478]: status update time (10.556 seconds)
Sep 14 09:00:40 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:43 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:43 pve pvestatd[3478]: status update time (11.230 seconds)
Sep 14 09:00:51 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:53 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:54 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:55 pve pvestatd[3478]: status update time (11.208 seconds)
Sep 14 09:01:03 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:06 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:01:06 pve pvestatd[3478]: status update time (11.227 seconds)
Sep 14 09:01:14 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:17 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:01:17 pve pvestatd[3478]: status update time (11.222 seconds)
Sep 14 09:01:19 pve pvedaemon[2201620]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:25 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection refused)
Sep 14 09:01:27 pve pvestatd[3478]: status update time (10.366 seconds)
Sep 14 09:01:27 pve QEMU[3636]: HTTP/2.0 connection failed
Sep 14 09:01:27 pve pvedaemon[2396121]: ERROR: Backup of VM 101 failed - backup write data failed: command error: protocol canceled
Sep 14 09:01:27 pve pvedaemon[2396121]: INFO: Backup job finished with errors
Sep 14 09:01:27 pve pvedaemon[2396121]: job errors
Sep 14 09:01:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection refused)
Sep 14 09:01:31 pve pvedaemon[2326339]: <root@pam> successful auth for user 'root@pam'
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sda [SAT], CHECK POWER STATUS spins up disk (0x81 -> 0xff)

pve2: Virtual Environment 8.0.3
Code:
root@pve:~# journalctl --since 09:00:00 --until 10:00:00
Sep 14 09:00:46 pve pvedaemon[1806967]: <root@pam> starting task UPID:pve:001B95C5:003F12CD:6502AF9E:qmstart:100:root@pam:
Sep 14 09:00:46 pve pvedaemon[1807813]: start VM 100: UPID:pve:001B95C5:003F12CD:6502AF9E:qmstart:100:root@pam:
Sep 14 09:00:48 pve systemd[1]: Started 100.scope.
Sep 14 09:00:49 pve kernel: device tap100i0 entered promiscuous mode
Sep 14 09:00:49 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Sep 14 09:00:49 pve kernel: vmbr0: port 2(fwpr100p0) entered disabled state
Sep 14 09:00:49 pve kernel: device fwpr100p0 entered promiscuous mode
Sep 14 09:00:49 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Sep 14 09:00:49 pve kernel: vmbr0: port 2(fwpr100p0) entered forwarding state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 1(fwln100i0) entered disabled state
Sep 14 09:00:49 pve kernel: device fwln100i0 entered promiscuous mode
Sep 14 09:00:49 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 1(fwln100i0) entered forwarding state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 2(tap100i0) entered disabled state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Sep 14 09:00:49 pve kernel: fwbr100i0: port 2(tap100i0) entered forwarding state
Sep 14 09:00:49 pve pvedaemon[1806967]: <root@pam> end task UPID:pve:001B95C5:003F12CD:6502AF9E:qmstart:100:root@pam: OK
Sep 14 09:01:08 pve pvedaemon[1679]: <root@pam> successful auth for user 'root@pam'
Sep 14 09:02:09 pve smartd[1052]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 113 to 111
Sep 14 09:02:09 pve smartd[1052]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
Sep 14 09:02:09 pve smartd[1052]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 113 to 112
Sep 14 09:02:09 pve smartd[1052]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 162 to 157
Sep 14 09:02:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 82 to 83
Sep 14 09:02:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 68 to 67
Sep 14 09:02:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 32 to 33
Sep 14 09:08:48 pve pveproxy[1459592]: worker exit
Sep 14 09:08:48 pve pveproxy[1754]: worker 1459592 finished
Sep 14 09:08:48 pve pveproxy[1754]: starting 1 worker(s)
Sep 14 09:08:48 pve pveproxy[1754]: worker 1822496 started
Sep 14 09:11:16 pve pvedaemon[1679]: worker exit
Sep 14 09:11:16 pve pvedaemon[1678]: worker 1679 finished
Sep 14 09:11:16 pve pvedaemon[1678]: starting 1 worker(s)
Sep 14 09:11:16 pve pvedaemon[1678]: worker 1824449 started
Sep 14 09:15:26 pve pveproxy[1466996]: worker exit
Sep 14 09:15:26 pve pveproxy[1754]: worker 1466996 finished
Sep 14 09:15:26 pve pveproxy[1754]: starting 1 worker(s)
Sep 14 09:15:26 pve pveproxy[1754]: worker 1827615 started
Sep 14 09:17:01 pve CRON[1828989]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Sep 14 09:17:01 pve CRON[1828990]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Sep 14 09:17:01 pve CRON[1828989]: pam_unix(cron:session): session closed for user root
Sep 14 09:17:08 pve pvedaemon[1680]: <root@pam> successful auth for user 'root@pam'
Sep 14 09:18:50 pve pvedaemon[1680]: worker exit
Sep 14 09:18:50 pve pvedaemon[1678]: worker 1680 finished
Sep 14 09:18:50 pve pvedaemon[1678]: starting 1 worker(s)
Sep 14 09:18:50 pve pvedaemon[1678]: worker 1830373 started
Sep 14 09:32:09 pve smartd[1052]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 111 to 113
Sep 14 09:32:09 pve smartd[1052]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
Sep 14 09:32:09 pve smartd[1052]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 112 to 114
Sep 14 09:32:09 pve smartd[1052]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 157 to 162
Sep 14 09:32:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 83 to 84
Sep 14 09:32:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 67 to 69
Sep 14 09:32:10 pve smartd[1052]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 33 to 31
Sep 14 09:33:08 pve pvedaemon[1824449]: <root@pam> successful auth for user 'root@pam'
Sep 14 09:34:27 pve pveproxy[1806303]: worker exit
Sep 14 09:34:27 pve pveproxy[1754]: worker 1806303 finished
Sep 14 09:34:27 pve pveproxy[1754]: starting 1 worker(s)
Sep 14 09:34:27 pve pveproxy[1754]: worker 1842525 started

What strikes me:
Sep 14 09:02:09 pve smartd[1052]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors

I think I should check whether the disks are all still okay ;-)
I suspect that's already the cause, right?
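To check the disks, smartmontools can print the full SMART report per device (look at Offline_Uncorrectable and Reallocated_Sector_Ct). A dry-run sketch that only prints the invocations to run, with the device names assumed from the log above:

```shell
#!/usr/bin/env bash
# Dry-run: print the smartctl invocations for the disks seen in the smartd log.
# Device names are assumptions from this thread - adjust to your system.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  printf 'smartctl -a %s\n' "$dev"
done
```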

Regards,
Moritz
 
could you add logs from before that? because according to the log, the problem already started around 9:00 ;)

Code:
Sep 14 09:00:01 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:07 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:00:10 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:00:10 pve pvestatd[3478]: status update time (11.237 seconds)

also doesn't look too good..
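The progression in the journal can be made visible by tallying the connection-error variants: "No route to host" while the PBS host is unreachable, then "Connection refused" once the host is back but the service isn't listening yet. A quick sketch over a saved journal excerpt (the filename and the two embedded sample lines are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Tally the connection-error variants in a saved journal excerpt.
# 'journal.log' is a hypothetical file; two sample lines stand in for the paste.
cat > journal.log <<'EOF'
Sep 14 09:00:10 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:01:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection refused)
EOF

# Extract the error reason after the port and count each variant.
grep -o '8007 (.*)' journal.log | sort | uniq -c
```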
 
Hi @fabian,

attached you'll find the log

Code:
root@pve:~# journalctl --since 08:45:00 --until 09:10:00
-- Journal begins at Sun 2023-02-05 08:01:37 CET, ends at Thu 2023-09-14 11:33:58 CEST. --
Sep 14 08:48:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:27 pve pvestatd[3478]: status update time (7.168 seconds)
Sep 14 08:48:37 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:37 pve pvestatd[3478]: status update time (7.139 seconds)
Sep 14 08:48:44 pve pvedaemon[2326339]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:44 pve pvedaemon[2201620]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:47 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:47 pve pvestatd[3478]: status update time (7.133 seconds)
Sep 14 08:48:51 pve pvedaemon[2201620]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:57 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection timed out)
Sep 14 08:48:57 pve pvestatd[3478]: status update time (7.112 seconds)
Sep 14 08:49:03 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:49:10 pve pvedaemon[2201620]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:49:10 pve pvedaemon[2407833]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:49:13 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:49:13 pve pvedaemon[2201620]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:49:19 pve pveproxy[2448623]: worker exit
Sep 14 08:49:19 pve pveproxy[3541]: worker 2448623 finished
Sep 14 08:49:19 pve pveproxy[3541]: starting 1 worker(s)
Sep 14 08:49:19 pve pveproxy[3541]: worker 3351242 started

############ 

Sep 14 08:51:43 pve pvedaemon[2326339]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:51:46 pve pvedaemon[2326339]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:51:58 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:01 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:02 pve pvestatd[3478]: status update time (11.259 seconds)
Sep 14 08:52:10 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:13 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:13 pve pvestatd[3478]: status update time (11.237 seconds)
Sep 14 08:52:13 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:21 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:24 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:24 pve pvestatd[3478]: status update time (11.221 seconds)
Sep 14 08:52:32 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:35 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:35 pve pvestatd[3478]: status update time (11.223 seconds)
Sep 14 08:52:39 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:43 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:46 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:46 pve pvestatd[3478]: status update time (11.232 seconds)
Sep 14 08:52:51 pve pveproxy[2493317]: worker exit
Sep 14 08:52:51 pve pveproxy[3541]: worker 2493317 finished
Sep 14 08:52:51 pve pveproxy[3541]: starting 1 worker(s)
Sep 14 08:52:51 pve pveproxy[3541]: worker 3361183 started
Sep 14 08:52:55 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:52:58 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:52:58 pve pvestatd[3478]: status update time (11.290 seconds)
Sep 14 08:53:05 pve pvedaemon[2326339]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:53:06 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:53:09 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:53:09 pve pvestatd[3478]: status update time (11.248 seconds)
Sep 14 08:53:17 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:53:18 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:53:18 pve pvestatd[3478]: status update time (8.893 seconds)
Sep 14 08:53:27 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:53:30 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:53:30 pve pvestatd[3478]: status update time (11.189 seconds)
Sep 14 08:53:31 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:53:38 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
###
 
Code:
Sep 14 08:56:06 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:56:06 pve pvestatd[3478]: status update time (11.235 seconds)
Sep 14 08:56:07 pve pvedaemon[2326339]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:08 pve pvedaemon[2326339]: <root@pam> successful auth for user 'root@pam'
Sep 14 08:56:14 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:17 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:56:17 pve pvestatd[3478]: status update time (11.218 seconds)
Sep 14 08:56:25 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:28 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:56:28 pve pvestatd[3478]: status update time (11.174 seconds)
Sep 14 08:56:33 pve pvedaemon[2326339]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:33 pve kernel: pcieport 0000:00:1b.4: AER: Corrected error received: 0000:02:00.0
Sep 14 08:56:33 pve kernel: nvme 0000:02:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Sep 14 08:56:33 pve kernel: nvme 0000:02:00.0:   device [15b7:5006] error status/mask=00000001/0000e000
Sep 14 08:56:33 pve kernel: nvme 0000:02:00.0:    [ 0] RxErr                 
Sep 14 08:56:36 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:39 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:56:39 pve pvestatd[3478]: status update time (11.202 seconds)
Sep 14 08:56:47 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:50 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:56:50 pve pvestatd[3478]: status update time (11.244 seconds)
Sep 14 08:56:59 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:56:59 pve pvedaemon[2201620]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:02 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:57:02 pve pvestatd[3478]: status update time (11.301 seconds)
Sep 14 08:57:10 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:13 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:57:13 pve pvestatd[3478]: status update time (11.259 seconds)
Sep 14 08:57:21 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:24 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:57:24 pve pvestatd[3478]: status update time (10.572 seconds)
Sep 14 08:57:25 pve pvedaemon[2326339]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:32 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:35 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:57:35 pve pvestatd[3478]: status update time (11.233 seconds)
Sep 14 08:57:43 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:57:46 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:57:46 pve pvestatd[3478]: status update time (11.209 seconds)

Sep 14 08:59:22 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:59:25 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:59:25 pve pvestatd[3478]: status update time (11.215 seconds)
Sep 14 08:59:33 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:59:35 pve pvedaemon[2326339]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:59:37 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:59:37 pve pvestatd[3478]: status update time (11.224 seconds)
Sep 14 08:59:45 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:59:45 pve pveproxy[2513158]: worker exit
Sep 14 08:59:45 pve pveproxy[3541]: worker 2513158 finished
Sep 14 08:59:45 pve pveproxy[3541]: starting 1 worker(s)
Sep 14 08:59:45 pve pveproxy[3541]: worker 3373623 started
Sep 14 08:59:48 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:59:48 pve pvestatd[3478]: status update time (11.233 seconds)
Sep 14 08:59:56 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 08:59:59 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 08:59:59 pve pvestatd[3478]: status update time (11.195 seconds)
Sep 14 09:00:01 pve pvedaemon[2407833]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries

Sep 14 09:01:06 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:01:06 pve pvestatd[3478]: status update time (11.227 seconds)
Sep 14 09:01:14 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:17 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (No route to host)
Sep 14 09:01:17 pve pvestatd[3478]: status update time (11.222 seconds)
Sep 14 09:01:19 pve pvedaemon[2201620]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:25 pve pvestatd[3478]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
Sep 14 09:01:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection refused)
Sep 14 09:01:27 pve pvestatd[3478]: status update time (10.366 seconds)
Sep 14 09:01:27 pve QEMU[3636]: HTTP/2.0 connection failed
Sep 14 09:01:27 pve pvedaemon[2396121]: ERROR: Backup of VM 101 failed - backup write data failed: command error: protocol canceled
Sep 14 09:01:27 pve pvedaemon[2396121]: INFO: Backup job finished with errors
Sep 14 09:01:27 pve pvedaemon[2396121]: job errors
Sep 14 09:01:27 pve pvestatd[3478]: PBS: error fetching datastores - 500 Can't connect to 192.168.178.128:8007 (Connection refused)
Sep 14 09:01:31 pve pvedaemon[2326339]: <root@pam> successful auth for user 'root@pam'
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sda [SAT], CHECK POWER STATUS spins up disk (0x81 -> 0xff)
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 83 to 77
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sdb [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 82 to 65
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 52 to 53
Sep 14 09:02:43 pve smartd[2936]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 48 to 47
Sep 14 09:05:45 pve pvedaemon[2201620]: worker exit
Sep 14 09:05:45 pve pvedaemon[3503]: worker 2201620 finished
Sep 14 09:05:45 pve pvedaemon[3503]: starting 1 worker(s)
Sep 14 09:05:45 pve pvedaemon[3503]: worker 3395579 started
 
The logs seem to indicate something is badly overloaded - the link to PBS, or the storage on PBS? Anything visible in metrics/monitoring, if you have that in place?
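To narrow that down without a full monitoring stack, one option is to measure the PVE-to-PBS link and the PBS disk load while a backup runs. A rough sketch (the IP is taken from the logs above; `iperf3` and `sysstat` may need to be installed first):

```shell
# On the PBS host: start a throughput test server (package "iperf3")
iperf3 -s

# On the PVE host: measure sustained throughput to PBS for 60 seconds
iperf3 -c 192.168.178.128 -t 60

# On the PBS host: watch disk utilization while a backup runs
# (package "sysstat"; %util close to 100 means the disks are saturated)
iostat -x 5
```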
 
Hello,

I just tried via iLO to reinstall PVE, in order to then reinstall PBS.
During the installation I get the following:

1694695992831.png

Could it be that something is wrong with my disks?
 
quite possible.. maybe inspect the SMART values more closely and kick off a self-test? the installer also has a debug mode, which may reveal more about the error..
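A minimal sketch of that SMART inspection and self-test, assuming the disks show up as /dev/sda and /dev/sdb (package smartmontools):

```shell
# Full SMART report: health summary plus all attributes
smartctl -a /dev/sda

# Start an extended (long) self-test in the background
smartctl -t long /dev/sda

# Check progress and results later in the self-test log
smartctl -l selftest /dev/sda
```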
 
The disk(s) really do have to be empty. Without a $ cfdisk -z /dev/disk/by-id/<name> it doesn't work for me.
OK, my systems all use ZFS as the file system, except for 2 servers.
 
Hello,

I don't really understand it yet.
Yesterday I rebuilt the pve2 and PBS environments once more.
A backup of a VM with 100GB completed without errors.

Code:
cfdisk -z /dev/disk/by-id/dev/sda
leads to
Code:
cfdisk: cannot open /dev/disk/by-id/dev/sda3: No such file or directory

With the larger VM of about 6TB, the error messages above show up again. The SMART values all look fine, though - I ran a test overnight.

I haven't installed PVE all that often yet - is there anything special to watch out for?
I have four hard disks in the system, 2x 3TB and 2x 4TB; what would be the most sensible approach?

Thanks and regards,
Moritz
 
OK, I see you are not familiar with this notation: /dev/disk/by-id/dev/

Please enter the following in the root shell:
ls -la /dev/disk/by-id/
Then you see the unambiguous name of the disk - an HDD, NVMe, or SSD - that you want to overwrite.

This procedure is meant to prevent mistakes, so that you don't wipe the wrong disk!

On my system, for example, the SSDs show up with their full name and serial number.
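Put together, that workflow might look like this (the disk ID below is a made-up example - pick the real one from the ls output, and point cfdisk at the whole disk, not at a partition like ...-part3):

```shell
# List stable device names including model and serial number
ls -la /dev/disk/by-id/

# Then address the disk unambiguously, e.g.:
cfdisk -z /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-EXAMPLE123
```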
 
Hi news,

that's "difficult" right now, because I want to reinstall PVE first.
I'm at the step where the HDDs are assigned, i.e. here:
https://pve.proxmox.com/pve-docs/images/screenshot/pve-select-target-disk.png

I would now try to install PVE with ZFS (RAID1) on the two 3TB HDDs.
So at the installation step shown in the image above, I select my 2x 3TB HDDs.
The remaining 2x 4TB HDDs I would then use as storage.

Is that correct?
 
Basically yes, but I would attach an SSD to the still-free fifth SATA port (or eSATA, if need be).
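For the storage half of that plan, the two 4TB disks could later be turned into a mirrored ZFS pool from the shell, roughly like this (pool name and disk IDs are placeholders, not your actual devices):

```shell
# Create a mirrored pool on the two 4TB disks
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-4TB_DISK1_SERIAL \
  /dev/disk/by-id/ata-4TB_DISK2_SERIAL

# Register it as VM storage in PVE
pvesm add zfspool tank-storage --pool tank --content images,rootdir
```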
 
If ZFS runs on HDDs only, then please add a ZFS special device made of SSDs to each pool, with the same redundancy level as the pool.
That can be done by mixing the devices onto an external SATA II PCIe HDD controller.
Such an adapter should not have only the same kind of SSDs or HDDs attached, so that the load is spread more evenly.
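As a sketch of that suggestion (pool name and SSD IDs are placeholders; the special vdev should be a mirror when the pool is a mirror, since losing the special device loses the pool):

```shell
# Add a mirrored special device (metadata on SSDs) to the HDD pool
zpool add tank special mirror \
  /dev/disk/by-id/ata-SSD1_SERIAL \
  /dev/disk/by-id/ata-SSD2_SERIAL

# Optional: also store small data blocks on the SSDs
zfs set special_small_blocks=16K tank
```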
 
