[SOLVED] Backup failed: blob too small (0 bytes)

devit.systems

New Member
Nov 30, 2021
Hello everyone,

we are currently evaluating PBS here at our company. So far everything looked very good, but now we have run into a problem.

PVE: 6.4-4
PBS: 2.1-2

One of our VMs holds 2 TB of data. The backup to the local PBS instance worked without any problems, but the sync to the PBS instance at Hetzner now fails with the following error:

Code:
2021-11-30T09:26:15+01:00: sync group vm/202 failed - blob too small (0 bytes).

So far we have tried the following:
* verify jobs / prune jobs
* running GC several times and waiting 24 h
* deleting the backup completely, running GC, and creating it again

Unfortunately the error persists. What else can we do? What could be the cause?
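
For reference, a minimal sketch of how those steps can also be triggered on the CLI, assuming the affected datastore is the 'backup1' store that shows up in the logs:

Code:
# re-run garbage collection and a full verify for the datastore (here: 'backup1')
proxmox-backup-manager garbage-collection start backup1
proxmox-backup-manager verify backup1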

Full log:

Code:
2021-11-30T09:26:12+01:00: Starting datastore sync job 'pieles:backup1:backup_pieles:s-a73b3cc3-985c'
2021-11-30T09:26:12+01:00: sync datastore 'backup_pieles' from 'pieles/backup1'
2021-11-30T09:26:12+01:00: found 5 groups to sync
2021-11-30T09:26:12+01:00: re-sync snapshot "ct/301/2021-11-28T03:00:06Z"
2021-11-30T09:26:12+01:00: no data changes
2021-11-30T09:26:12+01:00: re-sync snapshot "ct/301/2021-11-28T03:00:06Z" done
2021-11-30T09:26:12+01:00: percentage done: 20.00% (1/5 groups)
2021-11-30T09:26:12+01:00: skipped: 24 snapshot(s) (2021-11-03T10:19:23Z .. 2021-11-27T03:00:05Z) older than the newest local snapshot
2021-11-30T09:26:12+01:00: re-sync snapshot "vm/201/2021-11-29T15:39:53Z"
2021-11-30T09:26:12+01:00: no data changes
2021-11-30T09:26:12+01:00: re-sync snapshot "vm/201/2021-11-29T15:39:53Z" done
2021-11-30T09:26:12+01:00: percentage done: 39.29% (1/5 groups, 27/28 snapshots in group #2)
2021-11-30T09:26:12+01:00: sync snapshot "vm/201/2021-11-30T03:00:01Z"
2021-11-30T09:26:12+01:00: sync archive qemu-server.conf.blob
2021-11-30T09:26:12+01:00: sync archive drive-scsi0.img.fidx
2021-11-30T09:26:12+01:00: downloaded 4065541 bytes (20.15 MiB/s)
2021-11-30T09:26:12+01:00: got backup log file "client.log.blob"
2021-11-30T09:26:12+01:00: sync snapshot "vm/201/2021-11-30T03:00:01Z" done
2021-11-30T09:26:12+01:00: percentage done: 40.00% (2/5 groups)
2021-11-30T09:26:12+01:00: skipped: 26 snapshot(s) (2021-11-03T10:14:19Z .. 2021-11-29T03:00:01Z) older than the newest local snapshot
2021-11-30T09:26:12+01:00: sync snapshot "vm/202/2021-11-29T15:39:53Z"
2021-11-30T09:26:12+01:00: sync archive qemu-server.conf.blob
2021-11-30T09:26:12+01:00: sync archive drive-scsi1.img.fidx
2021-11-30T09:26:15+01:00: percentage done: 50.00% (2/5 groups, 1/2 snapshots in group #3)
2021-11-30T09:26:15+01:00: sync group vm/202 failed - blob too small (0 bytes).
2021-11-30T09:26:15+01:00: re-sync snapshot "vm/203/2021-11-29T23:43:42Z"
2021-11-30T09:26:15+01:00: no data changes
2021-11-30T09:26:15+01:00: re-sync snapshot "vm/203/2021-11-29T23:43:42Z" done
2021-11-30T09:26:15+01:00: percentage done: 79.17% (3/5 groups, 23/24 snapshots in group #4)
2021-11-30T09:26:15+01:00: sync snapshot "vm/203/2021-11-30T03:00:06Z"
2021-11-30T09:26:15+01:00: sync archive qemu-server.conf.blob
2021-11-30T09:26:15+01:00: sync archive drive-sata0.img.fidx
2021-11-30T09:26:21+01:00: downloaded 119600781 bytes (20.03 MiB/s)
2021-11-30T09:26:21+01:00: sync archive drive-efidisk0.img.fidx
2021-11-30T09:26:21+01:00: downloaded 0 bytes (0.00 MiB/s)
2021-11-30T09:26:21+01:00: got backup log file "client.log.blob"
2021-11-30T09:26:21+01:00: sync snapshot "vm/203/2021-11-30T03:00:06Z" done
2021-11-30T09:26:21+01:00: percentage done: 80.00% (4/5 groups)
2021-11-30T09:26:21+01:00: skipped: 22 snapshot(s) (2021-11-04T15:44:59Z .. 2021-11-29T03:00:19Z) older than the newest local snapshot
2021-11-30T09:26:21+01:00: re-sync snapshot "vm/301/2021-11-29T15:39:56Z"
2021-11-30T09:26:21+01:00: no data changes
2021-11-30T09:26:21+01:00: re-sync snapshot "vm/301/2021-11-29T15:39:56Z" done
2021-11-30T09:26:21+01:00: percentage done: 90.00% (4/5 groups, 1/2 snapshots in group #5)
2021-11-30T09:26:21+01:00: sync snapshot "vm/301/2021-11-30T03:00:04Z"
2021-11-30T09:26:21+01:00: sync archive qemu-server.conf.blob
2021-11-30T09:26:21+01:00: sync archive drive-scsi0.img.fidx
2021-11-30T09:26:22+01:00: downloaded 24388288 bytes (22.64 MiB/s)
2021-11-30T09:26:22+01:00: got backup log file "client.log.blob"
2021-11-30T09:26:22+01:00: sync snapshot "vm/301/2021-11-30T03:00:04Z" done
2021-11-30T09:26:22+01:00: percentage done: 100.00% (5/5 groups)
2021-11-30T09:26:23+01:00: TASK ERROR: sync failed with some errors.
 
There should be corresponding entries on the other PBS side (in the read task and/or in the access log).
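
A minimal sketch for finding those entries on the source PBS, assuming the stock CLI and the default log location (the access log path is an assumption, adjust if yours differs):

Code:
# list recent tasks on this PBS and show the log of a specific one
proxmox-backup-manager task list
proxmox-backup-manager task log <UPID>
# API/HTTP access log of the backup proxy (default location)
tail -n 100 /var/log/proxmox-backup/api/access.log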
 
Ah! Yes, there it says in turn that it cannot find some of the files.

About the setup: the PBS runs on a Proxmox cluster with Ceph storage. Which brings me to my question: why does it not find some of the files?

Code:
2021-11-30T11:30:02+01:00: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
2021-11-30T11:30:02+01:00: protocol upgrade done
2021-11-30T11:30:02+01:00: GET /download
2021-11-30T11:30:02+01:00: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/index.json.blob"
2021-11-30T11:30:02+01:00: GET /download
2021-11-30T11:30:02+01:00: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/qemu-server.conf.blob"
2021-11-30T11:30:02+01:00: GET /download
2021-11-30T11:30:02+01:00: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/drive-scsi1.img.fidx"
2021-11-30T11:30:02+01:00: register chunks in 'drive-scsi1.img.fidx' as downloadable.
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/8a84/8a846371363e2a9f9c57b82b2dfa386df4acfd6ebd189270cb39305bb860ef21"
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/36ad/36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1"
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/8f02/8f0203f1c53f74458773e547ac573d34000274558c4c6dae4af875f811b5e50c"
2021-11-30T11:30:08+01:00: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/36ad/36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1" failed: No such file or directory (os error 2)
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: GET /chunk
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/f6a5/f6a55e272100cd01b2e1ecb643afdce448d7a4efb78b99add9228bd92fd9b1a7"
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/2518/2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82"
2021-11-30T11:30:08+01:00: download chunk "/mnt/datastore/backup1/.chunks/7cdf/7cdf4589527bdb634fb6b32dc8e1474dd38a5707cb67d1b9a360a14af7b3dcf2"
2021-11-30T11:30:09+01:00: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/2518/2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82" failed: No such file or directory (os error 2)
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/6cc0/6cc0ebbe1b5aaaedcaa8fe3ba1d460755db6a100b640a7f9339efe31868442fb"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/2c88/2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/756f/756f40588e1a040384d049804c5f32dd5eff6129bd8dbb3ff7b605898b124828"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/ca6c/ca6caf5e670828738b47fea367f1d6775f9e38e6175941be5e9cbe2bd3897e32"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/0738/07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/958a/958a012be3ab23e8807b6b38d9a8977c93c0b2eea9c108621276b422f04629d9"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/23a9/23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/58ea/58ea1f64527573146397ce82b56a2cfbdc3b606c97de93cd1c24aee8da096224"
2021-11-30T11:30:09+01:00: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/0738/07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3" failed: No such file or directory (os error 2)
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/6e5f/6e5fdba14f1acfe0e2b9f80210deedf4c4f35d1b11f52cf76279765b45fe4f7c"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/c529/c529b2de8e018f9fd1dad85e24e5a16bb6da64d0555c0a2490d9133e216243ca"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/ee16/ee16be1a0b79fd47abc5b8953e43305b0db15fafcc88d3c71b6e63a318b614af"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/66df/66df17e5f8d22de261a0077d5cfe743e8156972c0dba0b215cb743e148abcc93"
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/495e/495ed4f316e44bfe850b4552b60ce90225f54fa35485f407ddcdc82595369cdd"
2021-11-30T11:30:09+01:00: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/2c88/2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90" failed: No such file or directory (os error 2)
2021-11-30T11:30:09+01:00: download chunk "/mnt/datastore/backup1/.chunks/9fff/9fffa41cb42846409237495fe3bc2e0393c5d98988db4aa6bc0217c6ada16fe7"
2021-11-30T11:30:09+01:00: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/23a9/23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43" failed: No such file or directory (os error 2)
2021-11-30T11:30:09+01:00: TASK ERROR: connection error: Broken pipe (os error 32)
 
I am currently running the verify task on the local instance. It has found the defective chunks as well.


Code:
2021-11-30T09:28:37+01:00: verify backup1:vm/202/2021-11-30T03:00:02Z
2021-11-30T09:28:37+01:00:   check qemu-server.conf.blob
2021-11-30T09:28:37+01:00:   check drive-scsi1.img.fidx
2021-11-30T09:48:41+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3' - blob too small (0 bytes).
2021-11-30T09:48:42+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/0738/07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3.0.bad"
2021-11-30T10:31:05+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '1ba110f35012b6e365ae6abfdeb45731764e9574dad9077176a0525ac9443a00' - blob too small (0 bytes).
2021-11-30T10:31:05+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/1ba1/1ba110f35012b6e365ae6abfdeb45731764e9574dad9077176a0525ac9443a00.0.bad"
2021-11-30T10:46:45+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43' - blob too small (0 bytes).
2021-11-30T10:46:45+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/23a9/23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43.0.bad"
2021-11-30T10:48:37+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82' - blob too small (0 bytes).
2021-11-30T10:48:37+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/2518/2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82.0.bad"
2021-11-30T10:58:24+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '29c50f050d018d2756a241c71de1002e5241f03966ba8172e4560e3b65845dd0' - blob too small (0 bytes).
2021-11-30T10:58:24+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/29c5/29c50f050d018d2756a241c71de1002e5241f03966ba8172e4560e3b65845dd0.0.bad"
2021-11-30T11:04:12+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90' - blob too small (0 bytes).
2021-11-30T11:04:12+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/2c88/2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90.0.bad"
2021-11-30T11:06:05+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '2db525cc760d1e938efa284a0b8abdf5259426eee8dbaacc8a07e30730defbd8' - blob too small (0 bytes).
2021-11-30T11:06:05+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/2db5/2db525cc760d1e938efa284a0b8abdf5259426eee8dbaacc8a07e30730defbd8.0.bad"
2021-11-30T11:19:33+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '34835cac03a885b245d516929d3a7546448fbedfeaa199777bb883a3c27c0f82' - blob too small (0 bytes).
2021-11-30T11:19:33+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/3483/34835cac03a885b245d516929d3a7546448fbedfeaa199777bb883a3c27c0f82.0.bad"
2021-11-30T11:23:35+01:00: can't verify chunk, load failed - store 'backup1', unable to load chunk '36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1' - blob too small (0 bytes).
2021-11-30T11:23:35+01:00: corrupted chunk renamed to "/mnt/datastore/backup1/.chunks/36ad/36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1.0.bad"
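
The chunks that verify quarantines get the ".bad" suffix, so they can be listed directly on the datastore afterwards - a minimal sketch, assuming the /mnt/datastore/backup1 path from the log above:

Code:
# list chunks that verify has renamed to *.bad
find /mnt/datastore/backup1/.chunks -name '*.bad'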
 
"On Ceph" meaning in a VM with an RBD volume (with which filesystem inside the VM?), or on a CephFS that the PBS accesses?
 
That is strange - was the filesystem repaired at some point after a crash or something similar? When writing, a chunk is first written to a tmp file and then renamed (so that only a complete chunk can ever sit under the final name), and a chunk always has a header, so PBS itself writing a 0-byte file is rather unlikely:

https://git.proxmox.com/?p=proxmox-...564f7e029dc48304b3aa1293a0f815c9;hb=HEAD#l379
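
To check whether further truncated chunk files are lying around on disk (as noted above, a valid chunk always carries a header, so it can never legitimately be 0 bytes), a minimal sketch using the datastore path from the earlier logs:

Code:
# any hit here is an empty, hence corrupt, chunk file
find /mnt/datastore/backup1/.chunks -type f -size 0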
 
The filesystem should be fine - I just checked it again. We have not had a crash either; the whole setup is still quite a young project (2 months).
Code:
root@pbs:~# fsck -f -n /dev/sda1
fsck from util-linux 2.36.1
e2fsck 1.46.2 (28-Feb-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sda1: 419550/268435456 files (0.2% non-contiguous), 351207851/2147483387 blocks

Code:
root@pbs:~# fdisk -l
Disk /dev/sda: 8 TiB, 8796093022208 bytes, 17179869184 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8DB18FFA-C132-465E-84A3-059D4737F15D

Device     Start         End     Sectors Size Type
/dev/sda1   2048 17179869150 17179867103   8T Linux filesystem


Disk /dev/sdb: 32 GiB, 34359738368 bytes, 67108864 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 11A3819B-8D29-4FDD-9854-4790136C8DFF

Device       Start      End  Sectors  Size Type
/dev/sdb1       34     2047     2014 1007K BIOS boot
/dev/sdb2     2048  1050623  1048576  512M EFI System
/dev/sdb3  1050624 67108830 66058207 31.5G Linux LVM


Disk /dev/mapper/pbs-swap: 3.88 GiB, 4160749568 bytes, 8126464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pbs-root: 23.75 GiB, 25501368320 bytes, 49807360 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
Oh dear. The verify job now makes the whole system crash - but only when I let it verify the 2 TB VM; all other jobs run without problems. I have deleted the snapshot completely again and am creating a new one that immediately runs a verify afterwards. I am curious.
 
Any hint at the time of the crash? In the guest/host log?
 
Unfortunately not really - at least I don't see anything, though maybe I am overlooking something. At 11:30:10 there is a short pause and then the box boots up again; in between, unfortunately, silence. I have tried this three times and the result stays the same. Are there perhaps other logs somewhere I could check?

Excerpt from /var/log/syslog:
Code:
Nov 30 09:49:38 pbs proxmox-backup-proxy[668]: write rrd data back to disk
Nov 30 09:49:38 pbs proxmox-backup-proxy[668]: starting rrd data sync
Nov 30 09:49:38 pbs proxmox-backup-proxy[668]: rrd journal successfully committed (23 files in 0.609 seconds)
Nov 30 10:17:02 pbs CRON[7319]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 30 10:19:38 pbs proxmox-backup-proxy[668]: write rrd data back to disk
Nov 30 10:19:39 pbs proxmox-backup-proxy[668]: starting rrd data sync
Nov 30 10:19:39 pbs proxmox-backup-proxy[668]: rrd journal successfully committed (23 files in 0.798 seconds)
Nov 30 10:49:38 pbs proxmox-backup-proxy[668]: write rrd data back to disk
Nov 30 10:49:39 pbs proxmox-backup-proxy[668]: starting rrd data sync
Nov 30 10:49:39 pbs proxmox-backup-proxy[668]: rrd journal successfully committed (23 files in 0.474 seconds)
Nov 30 11:17:02 pbs CRON[7500]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 30 11:19:38 pbs proxmox-backup-proxy[668]: write rrd data back to disk
Nov 30 11:19:39 pbs proxmox-backup-proxy[668]: starting rrd data sync
Nov 30 11:19:39 pbs proxmox-backup-proxy[668]: rrd journal successfully committed (23 files in 0.594 seconds)
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: protocol upgrade done
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/ct/301/2021-11-28T03:00:06Z/index.json.blob"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: reader finished successfully
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: TASK OK
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: protocol upgrade done
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/201/2021-11-30T03:00:01Z/index.json.blob"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: reader finished successfully
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: TASK OK
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: protocol upgrade done
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/index.json.blob"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/qemu-server.conf.blob"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/202/2021-11-29T15:39:53Z/drive-scsi1.img.fidx"
Nov 30 11:30:02 pbs proxmox-backup-proxy[668]: register chunks in 'drive-scsi1.img.fidx' as downloadable.
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/8a84/8a846371363e2a9f9c57b82b2dfa386df4acfd6ebd189270cb39305bb860ef21"
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/36ad/36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1"
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/8f02/8f0203f1c53f74458773e547ac573d34000274558c4c6dae4af875f811b5e50c"
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/36ad/36ad9db300512b079dfbabda4703dfac53a72405c3f206f80955621dcd9388c1" failed: No such file or directory (os error 2)
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: GET /chunk
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/f6a5/f6a55e272100cd01b2e1ecb643afdce448d7a4efb78b99add9228bd92fd9b1a7"
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/2518/2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82"
Nov 30 11:30:08 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/7cdf/7cdf4589527bdb634fb6b32dc8e1474dd38a5707cb67d1b9a360a14af7b3dcf2"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/2518/2518b8a7968ece87e82552ba67f9b0310a460e0ab68579bdf5f7e928ddcc5b82" failed: No such file or directory (os error 2)
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/6cc0/6cc0ebbe1b5aaaedcaa8fe3ba1d460755db6a100b640a7f9339efe31868442fb"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/2c88/2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/756f/756f40588e1a040384d049804c5f32dd5eff6129bd8dbb3ff7b605898b124828"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/ca6c/ca6caf5e670828738b47fea367f1d6775f9e38e6175941be5e9cbe2bd3897e32"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/0738/07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/958a/958a012be3ab23e8807b6b38d9a8977c93c0b2eea9c108621276b422f04629d9"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/23a9/23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/58ea/58ea1f64527573146397ce82b56a2cfbdc3b606c97de93cd1c24aee8da096224"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/0738/07384ad3860a0cde06d2e9aedc06a61f9c2698d50af32aed0fd4fea25184dca3" failed: No such file or directory (os error 2)
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/6e5f/6e5fdba14f1acfe0e2b9f80210deedf4c4f35d1b11f52cf76279765b45fe4f7c"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/c529/c529b2de8e018f9fd1dad85e24e5a16bb6da64d0555c0a2490d9133e216243ca"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/ee16/ee16be1a0b79fd47abc5b8953e43305b0db15fafcc88d3c71b6e63a318b614af"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/66df/66df17e5f8d22de261a0077d5cfe743e8156972c0dba0b215cb743e148abcc93"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/495e/495ed4f316e44bfe850b4552b60ce90225f54fa35485f407ddcdc82595369cdd"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/2c88/2c88f85ffb2a2ae01d4924351451ffce66940e9c4a222d9d59f47e359fdb0f90" failed: No such file or directory (os error 2)
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download chunk "/mnt/datastore/backup1/.chunks/9fff/9fffa41cb42846409237495fe3bc2e0393c5d98988db4aa6bc0217c6ada16fe7"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: GET /chunk: 400 Bad Request: reading file "/mnt/datastore/backup1/.chunks/23a9/23a92c176876941a2b873f55ddf4b3ac8c6c51d31067767346c597e2d8141b43" failed: No such file or directory (os error 2)
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: TASK ERROR: connection error: Broken pipe (os error 32)
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: protocol upgrade done
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/203/2021-11-30T03:00:06Z/index.json.blob"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: reader finished successfully
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: TASK OK
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: starting new backup reader datastore 'backup1': "/mnt/datastore/backup1"
Nov 30 11:30:09 pbs proxmox-backup-proxy[668]: protocol upgrade done
Nov 30 11:30:10 pbs proxmox-backup-proxy[668]: GET /download
Nov 30 11:30:10 pbs proxmox-backup-proxy[668]: download "/mnt/datastore/backup1/vm/301/2021-11-30T03:00:04Z/index.json.blob"
Nov 30 11:30:10 pbs proxmox-backup-proxy[668]: reader finished successfully
Nov 30 11:30:10 pbs proxmox-backup-proxy[668]: TASK OK
Nov 30 11:32:15 pbs systemd[1]: Starting Cleanup of Temporary Directories...
Nov 30 11:32:16 pbs systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Nov 30 11:32:16 pbs systemd[1]: Finished Cleanup of Temporary Directories.
Nov 30 11:40:27 pbs systemd[1]: Created slice User Slice of UID 0.
Nov 30 11:40:27 pbs systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 30 11:40:27 pbs systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 30 11:40:27 pbs systemd[1]: Starting User Manager for UID 0...
Nov 30 11:40:28 pbs systemd[7607]: gpgconf: error running '/usr/lib/gnupg/scdaemon': probably not installed
Nov 30 11:40:28 pbs systemd[7602]: Queued start job for default target Main User Target.
Nov 30 11:40:28 pbs systemd[7602]: Created slice User Application Slice.
Nov 30 11:40:28 pbs systemd[7602]: Reached target Paths.
Nov 30 11:40:28 pbs systemd[7602]: Reached target Timers.
Nov 30 11:40:28 pbs systemd[7602]: Listening on GnuPG network certificate management daemon.
Nov 30 11:40:28 pbs systemd[7602]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Nov 30 11:40:28 pbs systemd[7602]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Nov 30 11:40:28 pbs systemd[7602]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Nov 30 11:40:28 pbs systemd[7602]: Listening on GnuPG cryptographic agent and passphrase cache.
Nov 30 11:40:28 pbs systemd[7602]: Reached target Sockets.
Nov 30 11:40:28 pbs systemd[7602]: Reached target Basic System.
 
The log looks somewhat incomplete. Is there anything in the host journal? When a VM terminates, either a shutdown/reboot has to be visible (qmeventd log messages), or the process crashes (which the process should log), or it gets killed (OOM for example, which the kernel should then write to the journal). If all else fails, configure a serial console in the VM and attach to it ;)
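
A minimal sketch of the serial-console approach, run on the PVE host; the VMID 100 is only a placeholder for the PBS VM's actual ID:

Code:
# add a serial port to the VM config (takes effect after the VM is restarted)
qm set 100 --serial0 socket
# attach to the serial console from the host
qm terminal 100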
 
I have not found anything pointing to an error with journalctl either. Yesterday I created the backup again from scratch and it ran through without problems. Now I am running another verify before it goes to Hetzner, and will report back. Many thanks for the quick support so far - that builds confidence :).
 
It crashes again. At least it is reproducible.

Here are the entries from journalctl -b -1. If that is not enough, feel free to send me a command; I would not know where else to look.

Code:
Dec 01 05:03:38 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 05:03:38 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 05:03:38 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.224 seconds)
Dec 01 05:17:01 pbs CRON[1283]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 05:17:01 pbs CRON[1284]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 05:17:01 pbs CRON[1283]: pam_unix(cron:session): session closed for user root
Dec 01 05:33:38 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 05:33:39 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 05:33:39 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.354 seconds)
Dec 01 06:02:12 pbs systemd[1]: Starting Daily apt upgrade and clean activities...
Dec 01 06:02:13 pbs systemd[1]: apt-daily-upgrade.service: Succeeded.
Dec 01 06:02:13 pbs systemd[1]: Finished Daily apt upgrade and clean activities.
Dec 01 06:03:38 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 06:03:38 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 06:03:39 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.394 seconds)
Dec 01 06:17:01 pbs CRON[1627]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 06:17:01 pbs CRON[1628]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 06:17:01 pbs CRON[1627]: pam_unix(cron:session): session closed for user root
Dec 01 06:25:01 pbs CRON[1671]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 06:25:01 pbs CRON[1672]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Dec 01 06:25:01 pbs CRON[1671]: pam_unix(cron:session): session closed for user root
Dec 01 06:33:39 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 06:33:39 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 06:33:39 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.312 seconds)
Dec 01 06:52:01 pbs CRON[1814]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 06:52:01 pbs CRON[1815]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ))
Dec 01 06:52:01 pbs CRON[1814]: pam_unix(cron:session): session closed for user root
Dec 01 07:03:39 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 07:03:39 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 07:03:39 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.202 seconds)
Dec 01 07:17:01 pbs CRON[1946]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 07:17:01 pbs CRON[1947]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 07:17:01 pbs CRON[1946]: pam_unix(cron:session): session closed for user root
Dec 01 07:33:39 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 07:33:39 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 07:33:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.562 seconds)
Dec 01 08:03:39 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 08:03:39 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 08:03:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.286 seconds)
Dec 01 08:17:01 pbs CRON[2243]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 08:17:01 pbs CRON[2244]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 08:17:01 pbs CRON[2243]: pam_unix(cron:session): session closed for user root
Dec 01 08:33:39 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 08:33:40 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 08:33:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.366 seconds)
Dec 01 09:03:40 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 09:03:40 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 09:03:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.267 seconds)
Dec 01 09:17:01 pbs CRON[2543]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 09:17:01 pbs CRON[2544]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 09:17:01 pbs CRON[2543]: pam_unix(cron:session): session closed for user root
Dec 01 09:33:40 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 09:33:40 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 09:33:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.170 seconds)
Dec 01 10:03:40 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 10:03:40 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 10:03:40 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.185 seconds)
Dec 01 10:17:01 pbs CRON[2834]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 01 10:17:01 pbs CRON[2835]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 01 10:17:01 pbs CRON[2834]: pam_unix(cron:session): session closed for user root
Dec 01 10:24:14 pbs systemd[1]: Starting Daily apt download activities...
Dec 01 10:24:14 pbs systemd[1]: apt-daily.service: Succeeded.
Dec 01 10:24:14 pbs systemd[1]: Finished Daily apt download activities.
Dec 01 10:33:40 pbs proxmox-backup-proxy[838]: write rrd data back to disk
Dec 01 10:33:41 pbs proxmox-backup-proxy[838]: starting rrd data sync
Dec 01 10:33:41 pbs proxmox-backup-proxy[838]: rrd journal successfully committed (23 files in 0.318 seconds)
Dec 01 10:51:24 pbs proxmox-backup-proxy[838]: error during snapshot file listing: 'unable to load blob '"/mnt/datastore/backup1/vm/203/2021-12-01T03:00:01Z/index.json.blo>
 
The journal of the PVE host that the PBS VM runs on, for the relevant time frame (e.g. journalctl --since "2021-12-01 10:33"), plus the info which VMID the PBS VM has ;)
 
Ah! Yes, that makes sense. Found the error right away:

Code:
Dec 01 15:15:01 vm2 systemd[1]: Started Proxmox VE replication runner.
Dec 01 15:15:21 vm2 kernel: tp_osd_tp invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Dec 01 15:15:21 vm2 kernel: CPU: 6 PID: 2795187 Comm: tp_osd_tp Tainted: P           O      5.4.140-1-pve #1
Dec 01 15:15:21 vm2 kernel: Hardware name: Micro-Star International Co., Ltd MS-7C02/B450 TOMAHAWK MAX (MS-7C02), BIOS 3.60 04/22/2020
Dec 01 15:15:21 vm2 kernel: Call Trace:
Dec 01 15:15:21 vm2 kernel:  dump_stack+0x6d/0x8b
Dec 01 15:15:21 vm2 kernel:  dump_header+0x4f/0x1e1
Dec 01 15:15:21 vm2 kernel:  oom_kill_process.cold.33+0xb/0x10
Dec 01 15:15:21 vm2 kernel:  out_of_memory+0x1ad/0x490
Dec 01 15:15:21 vm2 kernel:  __alloc_pages_slowpath+0xd40/0xe30
Dec 01 15:15:21 vm2 kernel:  __alloc_pages_nodemask+0x2df/0x330
Dec 01 15:15:21 vm2 kernel:  alloc_pages_current+0x81/0xe0
Dec 01 15:15:21 vm2 kernel:  __page_cache_alloc+0x6a/0xa0
Dec 01 15:15:21 vm2 kernel:  pagecache_get_page+0xbe/0x2e0
Dec 01 15:15:21 vm2 kernel:  ? zpl_readpage+0x9f/0xe0 [zfs]
Dec 01 15:15:21 vm2 kernel:  filemap_fault+0x887/0xa70
Dec 01 15:15:21 vm2 kernel:  ? page_add_file_rmap+0x131/0x190
Dec 01 15:15:21 vm2 kernel:  ? alloc_set_pte+0x4e9/0x5c0
Dec 01 15:15:21 vm2 kernel:  ? get_futex_key+0x316/0x3f0
Dec 01 15:15:21 vm2 kernel:  ? filemap_map_pages+0x28d/0x3b0
Dec 01 15:15:21 vm2 kernel:  __do_fault+0x3c/0x130
Dec 01 15:15:21 vm2 kernel:  __handle_mm_fault+0xe75/0x12a0
Dec 01 15:15:21 vm2 kernel:  handle_mm_fault+0xc9/0x1f0
Dec 01 15:15:21 vm2 kernel:  __do_page_fault+0x233/0x4c0
Dec 01 15:15:21 vm2 kernel:  do_page_fault+0x2c/0xe0
Dec 01 15:15:21 vm2 kernel:  page_fault+0x34/0x40
Dec 01 15:15:21 vm2 kernel: RIP: 0033:0x55af90d2d363
Dec 01 15:15:21 vm2 kernel: Code: 1f 84 00 00 00 00 00 b8 01 00 00 00 c3 66 2e 0f 1f 84 00 00 00 00 00 66 81 7e 28 97 00 77 28 0f b7 46 28 48 8d 15 c9 e5 c6 00 <48> 63 04 8
Dec 01 15:15:21 vm2 kernel: RSP: 002b:00007f7aff1befe8 EFLAGS: 00010293
Dec 01 15:15:21 vm2 kernel: RAX: 0000000000000070 RBX: 000055afb06dc000 RCX: 0000000000000000
Dec 01 15:15:21 vm2 kernel: RDX: 000055af9199b92c RSI: 000055afe4d30000 RDI: 000055af9d0d6000
Dec 01 15:15:21 vm2 kernel: RBP: 000055afe4d30000 R08: 000055afb0591628 R09: 0000000000002362
Dec 01 15:15:21 vm2 kernel: R10: 00000002ad810000 R11: 000055afb06aef00 R12: 000055afabcf79f8
Dec 01 15:15:21 vm2 kernel: R13: 000055af9c359d98 R14: 000055afabcf7a00 R15: 00007f7aff1bf020
Dec 01 15:15:21 vm2 kernel: Mem-Info:
Dec 01 15:15:21 vm2 kernel: active_anon:3640713 inactive_anon:41328 isolated_anon:0
                             active_file:238 inactive_file:441 isolated_file:0
                             unevictable:40309 dirty:0 writeback:3 unstable:0
                             slab_reclaimable:28013 slab_unreclaimable:106013
                             mapped:25599 shmem:58460 pagetables:10458 bounce:0
                             free:33256 free_pcp:908 free_cma:0
Dec 01 15:15:21 vm2 kernel: Node 0 active_anon:14562852kB inactive_anon:165312kB active_file:952kB inactive_file:1200kB unevictable:161236kB isolated(anon):0kB isolated(fil
Dec 01 15:15:21 vm2 kernel: Node 0 DMA free:15884kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepen
Dec 01 15:15:21 vm2 kernel: lowmem_reserve[]: 0 3421 15888 15888 15888
Dec 01 15:15:21 vm2 kernel: Node 0 DMA32 free:64224kB min:14536kB low:18168kB high:21800kB active_anon:3336156kB inactive_anon:23804kB active_file:420kB inactive_file:188kB
Dec 01 15:15:21 vm2 kernel: lowmem_reserve[]: 0 0 12467 12467 12467
Dec 01 15:15:21 vm2 kernel: Node 0 Normal free:52916kB min:52980kB low:66224kB high:79468kB active_anon:11226696kB inactive_anon:141508kB active_file:564kB inactive_file:28
Dec 01 15:15:21 vm2 kernel: lowmem_reserve[]: 0 0 0 0 0
Dec 01 15:15:21 vm2 kernel: Node 0 DMA: 1*4kB (U) 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15884kB
Dec 01 15:15:21 vm2 kernel: Node 0 DMA32: 2607*4kB (UE) 1992*8kB (UME) 1314*16kB (UME) 479*32kB (UME) 26*64kB (UM) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 6438
Dec 01 15:15:21 vm2 kernel: Node 0 Normal: 279*4kB (U) 44*8kB (U) 1562*16kB (U) 819*32kB (UE) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 52668kB
Dec 01 15:15:21 vm2 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Dec 01 15:15:21 vm2 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Dec 01 15:15:21 vm2 kernel: 61803 total pagecache pages
Dec 01 15:15:21 vm2 kernel: 0 pages in swap cache
Dec 01 15:15:21 vm2 kernel: Swap cache stats: add 0, delete 0, find 0/0
Dec 01 15:15:21 vm2 kernel: Free swap  = 0kB
Dec 01 15:15:21 vm2 kernel: Total swap = 0kB
Dec 01 15:15:21 vm2 kernel: 4181793 pages RAM
Dec 01 15:15:21 vm2 kernel: 0 pages HighMem/MovableOnly
Dec 01 15:15:21 vm2 kernel: 79901 pages reserved
Dec 01 15:15:21 vm2 kernel: 0 pages cma reserved
Dec 01 15:15:21 vm2 kernel: 0 pages hwpoisoned
Dec 01 15:15:21 vm2 kernel: Tasks state (memory values in pages):
Dec 01 15:15:21 vm2 kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Dec 01 15:15:21 vm2 kernel: [   1022]     0  1022    20150     8356   192512        0             0 systemd-journal
Dec 01 15:15:21 vm2 kernel: [   1033]     0  1033     5714      549    69632        0         -1000 systemd-udevd
Dec 01 15:15:21 vm2 kernel: [   1256]   106  1256     1705      470    49152        0             0 rpcbind
Dec 01 15:15:21 vm2 kernel: [   1257]   100  1257    23270      237    86016        0             0 systemd-timesyn
Dec 01 15:15:21 vm2 kernel: [   1285]     0  1285     3137      786    57344        0             0 smartd
Dec 01 15:15:21 vm2 kernel: [   1291]     0  1291     5376     1393    81920        0             0 ceph-crash
Dec 01 15:15:21 vm2 kernel: [   1296]     0  1296    56455      759    86016        0             0 rsyslogd
Dec 01 15:15:21 vm2 kernel: [   1297]     0  1297     1022      328    49152        0             0 qmeventd
Dec 01 15:15:21 vm2 kernel: [   1301]     0  1301    41547      597    81920        0             0 zed
Dec 01 15:15:21 vm2 kernel: [   1303]     0  1303   102756      363    94208        0             0 pve-lxc-syscall
Dec 01 15:15:21 vm2 kernel: [   1305]   104  1305     2288      542    53248        0          -900 dbus-daemon
Dec 01 15:15:21 vm2 kernel: [   1315]     0  1315      535      117    36864        0         -1000 watchdog-mux
Dec 01 15:15:21 vm2 kernel: [   1323]     0  1323    56184      355    69632        0             0 lxcfs
Dec 01 15:15:21 vm2 kernel: [   1324]     0  1324     4879      674    77824        0             0 systemd-logind
Dec 01 15:15:21 vm2 kernel: [   1332]     0  1332     1681      336    49152        0             0 ksmtuned
Dec 01 15:15:21 vm2 kernel: [   1430]     0  1430     1823      207    53248        0             0 lxc-monitord
Dec 01 15:15:21 vm2 kernel: [   1441]     0  1441      568      141    40960        0             0 none
Dec 01 15:15:21 vm2 kernel: [   1445]     0  1445     1722       62    49152        0             0 iscsid
Dec 01 15:15:21 vm2 kernel: [   1447]     0  1447     1848     1256    53248        0           -17 iscsid
Dec 01 15:15:21 vm2 kernel: [   1451]     0  1451     3962      744    81920        0         -1000 sshd
Dec 01 15:15:21 vm2 kernel: [   1474]     0  1474     1402      400    45056        0             0 agetty
Dec 01 15:15:21 vm2 kernel: [   1563]     0  1563   183191      741   192512        0             0 rrdcached
Dec 01 15:15:21 vm2 kernel: [   1576]     0  1576   173687    16567   458752        0             0 pmxcfs
Dec 01 15:15:21 vm2 kernel: [   1681]     0  1681    10868      661    81920        0             0 master
Dec 01 15:15:21 vm2 kernel: [   1683]   107  1683    10968      644    77824        0             0 qmgr
Dec 01 15:15:21 vm2 kernel: [   1692]     0  1692   143262    44385   425984        0             0 corosync
Dec 01 15:15:21 vm2 kernel: [   1695] 64045  1695   443573   279117  2998272        0             0 ceph-mon
Dec 01 15:15:21 vm2 kernel: [   1703]     0  1703     2125      551    53248        0             0 cron
Dec 01 15:15:21 vm2 kernel: [   1912]     0  1912    69080    21758   294912        0             0 pve-firewall
Dec 01 15:15:21 vm2 kernel: [   1915]     0  1915    71323    24150   327680        0             0 pvestatd
Dec 01 15:15:21 vm2 kernel: [   1938]     0  1938    89247    30224   417792        0             0 pvedaemon
Dec 01 15:15:21 vm2 kernel: [   1947]     0  1947    84903    24464   376832        0             0 pve-ha-crm
Dec 01 15:15:21 vm2 kernel: [   1948]    33  1948    89609    31271   434176        0             0 pveproxy
Dec 01 15:15:21 vm2 kernel: [   1954]    33  1954    17634    12472   192512        0             0 spiceproxy
Dec 01 15:15:21 vm2 kernel: [   1956]     0  1956    84827    24541   380928        0             0 pve-ha-lrm
Dec 01 15:15:21 vm2 kernel: [   5630]     0  5630   573708   157516  2150400        0             0 kvm
Dec 01 15:15:21 vm2 kernel: [  40757] 64045 40757   234740    55112   884736        0             0 ceph-mgr
Dec 01 15:15:21 vm2 kernel: [3778744]     0 3778744  1025310   571161  5799936        0             0 kvm
Dec 01 15:15:21 vm2 kernel: [1596616]     0 1596616    21543      338    65536        0             0 pvefw-logger
Dec 01 15:15:21 vm2 kernel: [1596619]    33 1596619    17692    12311   192512        0             0 spiceproxy work
Dec 01 15:15:21 vm2 kernel: [2583292]     0 2583292    91405    31448   438272        0             0 pvedaemon worke
Dec 01 15:15:21 vm2 kernel: [2632594]    33 2632594    91671    31610   434176        0             0 pveproxy worker
Dec 01 15:15:21 vm2 kernel: [2636246]    33 2636246    91710    31714   434176        0             0 pveproxy worker
Dec 01 15:15:21 vm2 kernel: [2643453]     0 2643453    91344    31240   438272        0             0 pvedaemon worke
Dec 01 15:15:21 vm2 kernel: [2742237] 64045 2742237  1265073  1049326  9203712        0             0 ceph-osd
Dec 01 15:15:21 vm2 kernel: [2753283]    33 2753283    91680    31561   434176        0             0 pveproxy worker
Dec 01 15:15:21 vm2 kernel: [2766250]     0 2766250    91374    31139   438272        0             0 pvedaemon worke
Dec 01 15:15:21 vm2 kernel: [2794836] 64045 2794836   954387   744000  6656000        0             0 ceph-osd
Dec 01 15:15:21 vm2 kernel: [2796908]     0 2796908  3049615   785273  7569408        0             0 kvm
Dec 01 15:15:21 vm2 kernel: [3067994]   107 3067994    10958      712    77824        0             0 pickup
Dec 01 15:15:21 vm2 kernel: [3197833]     0 3197833     4253      945    73728        0             0 sshd
Dec 01 15:15:21 vm2 kernel: [3198087]     0 3198087     4224      897    69632        0             0 sshd
Dec 01 15:15:21 vm2 kernel: [3198097]     0 3198097     5318     1035    81920        0             0 systemd
Dec 01 15:15:21 vm2 kernel: [3198098]     0 3198098    43027      819   106496        0             0 (sd-pam)
Dec 01 15:15:21 vm2 kernel: [3198219]     0 3198219     1730      586    53248        0             0 login
Dec 01 15:15:21 vm2 kernel: [3198224]     0 3198224     1945      728    53248        0             0 bash
Dec 01 15:15:21 vm2 kernel: [3198872]     0 3198872     1945      724    53248        0             0 bash
Dec 01 15:15:21 vm2 kernel: [3199841]     0 3199841    52809     2303   262144        0             0 journalctl
Dec 01 15:15:21 vm2 kernel: [3199842]     0 3199842     1399      161    53248        0             0 pager
Dec 01 15:15:21 vm2 kernel: [3200652]     0 3200652     1314      168    49152        0             0 sleep
Dec 01 15:15:21 vm2 kernel: [3201143]     0 3201143     5714      468    61440        0             0 systemd-udevd
Dec 01 15:15:21 vm2 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/system-ceph\x2dosd.slice/ceph-os
Dec 01 15:15:21 vm2 kernel: Out of memory: Killed process 2742237 (ceph-osd) total-vm:5060292kB, anon-rss:4194712kB, file-rss:2592kB, shmem-rss:0kB, UID:64045 pgtables:8988
Dec 01 15:15:21 vm2 kernel: oom_reaper: reaped process 2742237 (ceph-osd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Dec 01 15:15:21 vm2 systemd[1]: ceph-osd@2.service: Main process exited, code=killed, status=9/KILL
Dec 01 15:15:21 vm2 systemd[1]: ceph-osd@2.service: Failed with result 'signal'.
Dec 01 15:15:22 vm2 systemd[1]: ceph-osd@2.service: Service RestartSec=100ms expired, scheduling restart.
Dec 01 15:15:22 vm2 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 7.
Dec 01 15:15:22 vm2 systemd[1]: Stopped Ceph object storage daemon osd.2.
Dec 01 15:15:22 vm2 systemd[1]: Starting Ceph object storage daemon osd.2...
Dec 01 15:15:22 vm2 systemd[1]: Started Ceph object storage daemon osd.2.

The Proxmox host currently has 16 GB. I suspect that is too little, so I will upgrade it to 64 GB.

I still do not quite understand why this really happens ONLY during verify, but as long as we can fight it with more RAM, I am covered for now.
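
Presumably the verify is what pushes the host over the edge because it reads the whole 2 TB image back through the RBD-backed datastore disk, i.e. through the local OSDs. Besides adding RAM, the per-OSD memory target can also be lowered - a minimal sketch, assuming a Ceph release where osd_memory_target is available (the 3 GiB value is just an example; the default is roughly 4 GiB per OSD):

Code:
free -h                                            # current memory situation on the PVE host
ceph config set osd osd_memory_target 3221225472   # lower the per-OSD memory target to ~3 GiB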
 
Part 2 - because of the 15,000 character limit:


Code:
Dec 01 15:15:32 vm2 ceph-osd[3201360]: 2021-12-01 15:15:32.135 7ff2af8e1c80 -1 osd.2 5402 log_to_monitors {default=true}
Dec 01 15:15:32 vm2 ceph-osd[3201360]: 2021-12-01 15:15:32.327 7ff2a8beb700 -1 osd.2 5402 set_numa_affinity unable to identify public interface '' numa node: (2) No such fi
Dec 01 15:16:00 vm2 systemd[1]: Starting Proxmox VE replication runner...
Dec 01 15:16:01 vm2 systemd[1]: pvesr.service: Succeeded.
Dec 01 15:16:01 vm2 systemd[1]: Started Proxmox VE replication runner.
Dec 01 15:16:36 vm2 kernel: crash invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Dec 01 15:16:36 vm2 kernel: CPU: 8 PID: 3148936 Comm: crash Tainted: P           O      5.4.140-1-pve #1
Dec 01 15:16:36 vm2 kernel: Hardware name: Micro-Star International Co., Ltd MS-7C02/B450 TOMAHAWK MAX (MS-7C02), BIOS 3.60 04/22/2020
Dec 01 15:16:36 vm2 kernel: Call Trace:
Dec 01 15:16:36 vm2 kernel:  dump_stack+0x6d/0x8b
Dec 01 15:16:36 vm2 kernel:  dump_header+0x4f/0x1e1
Dec 01 15:16:36 vm2 kernel:  oom_kill_process.cold.33+0xb/0x10
Dec 01 15:16:36 vm2 kernel:  out_of_memory+0x1ad/0x490
Dec 01 15:16:36 vm2 kernel:  __alloc_pages_slowpath+0xd40/0xe30
Dec 01 15:16:36 vm2 kernel:  ? __switch_to_asm+0x34/0x70
Dec 01 15:16:36 vm2 kernel:  __alloc_pages_nodemask+0x2df/0x330
Dec 01 15:16:36 vm2 kernel:  alloc_pages_current+0x81/0xe0
Dec 01 15:16:36 vm2 kernel:  __page_cache_alloc+0x6a/0xa0
Dec 01 15:16:36 vm2 kernel:  pagecache_get_page+0xbe/0x2e0
Dec 01 15:16:36 vm2 kernel:  filemap_fault+0x887/0xa70
Dec 01 15:16:36 vm2 kernel:  ? xas_load+0xc/0x80
Dec 01 15:16:36 vm2 kernel:  ? xas_find+0x17e/0x1b0
Dec 01 15:16:36 vm2 kernel:  ? filemap_map_pages+0x28d/0x3b0
Dec 01 15:16:36 vm2 kernel:  __do_fault+0x3c/0x130
Dec 01 15:16:36 vm2 kernel:  __handle_mm_fault+0xe75/0x12a0
Dec 01 15:16:36 vm2 kernel:  handle_mm_fault+0xc9/0x1f0
Dec 01 15:16:36 vm2 kernel:  __do_page_fault+0x233/0x4c0
Dec 01 15:16:36 vm2 kernel:  do_page_fault+0x2c/0xe0
Dec 01 15:16:36 vm2 kernel:  page_fault+0x34/0x40
Dec 01 15:16:36 vm2 kernel: RIP: 0033:0x7f20a1380b80
Dec 01 15:16:36 vm2 kernel: Code: Bad RIP value.
Dec 01 15:16:36 vm2 kernel: RSP: 002b:00007f208d9638b8 EFLAGS: 00010246
Dec 01 15:16:36 vm2 kernel: RAX: 0000000000000000 RBX: 000055fdd1ea5ce0 RCX: 00007f20a0235037
Dec 01 15:16:36 vm2 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055fdd1ea5ce0
Dec 01 15:16:36 vm2 kernel: RBP: 00007f2094cc9ebb R08: 00007f208d9638e0 R09: 0000000000000000
Dec 01 15:16:36 vm2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00007f209100a890
Dec 01 15:16:36 vm2 kernel: R13: 00007f2095c59060 R14: 000055fdd1ea5ce0 R15: 00007f2095bb0ea8
Dec 01 15:16:36 vm2 kernel: Mem-Info:
Dec 01 15:16:36 vm2 kernel: active_anon:3630774 inactive_anon:41328 isolated_anon:0
                             active_file:0 inactive_file:235 isolated_file:0
                             unevictable:40309 dirty:3 writeback:10 unstable:0
                             slab_reclaimable:26472 slab_unreclaimable:116320
                             mapped:25227 shmem:58460 pagetables:10325 bounce:0
                             free:35033 free_pcp:602 free_cma:0
Dec 01 15:16:36 vm2 kernel: Node 0 active_anon:14523356kB inactive_anon:165312kB active_file:0kB inactive_file:236kB unevictable:161236kB isolated(anon):0kB isolated(file):
Dec 01 15:16:36 vm2 kernel: Node 0 DMA free:15884kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepen
Dec 01 15:16:36 vm2 kernel: lowmem_reserve[]: 0 3421 15888 15888 15888
Dec 01 15:16:36 vm2 kernel: Node 0 DMA32 free:67864kB min:14536kB low:18168kB high:21800kB active_anon:3342084kB inactive_anon:23804kB active_file:0kB inactive_file:0kB une
Dec 01 15:16:36 vm2 kernel: lowmem_reserve[]: 0 0 12467 12467 12467
Dec 01 15:16:36 vm2 kernel: Node 0 Normal free:52968kB min:52980kB low:66224kB high:79468kB active_anon:11181516kB inactive_anon:141508kB active_file:0kB inactive_file:0kB
Dec 01 15:16:36 vm2 kernel: lowmem_reserve[]: 0 0 0 0 0
 
