Hey!
I've recently had an issue with one of my PBS instances. In short: I'm running a PBS at Hetzner that has a Storage Box mounted via CIFS under its own user:
PBS <-> CIFS <-> Storage Box
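For context, the share is mounted with something along these lines (host, mount point, credentials file and uid/gid mapping are placeholders here, not my exact values):
Bash:
# CIFS mount of the Storage Box, mapped to the PBS 'backup' user (placeholder values)
mount -t cifs //uXXXXXX.your-storagebox.de/backup /mnt/datastore/storagebox \
    -o credentials=/etc/storagebox.cred,uid=backup,gid=backup,vers=3.1.1,iocharset=utf8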
Then, yesterday, in the root directory of one of my datastores, I found a weird file with Chinese characters, 0 bytes in length. Usually, the only files in there are .gc-status (for the last garbage-collection status) and the .lock file for process locking. The name of the file is
㔂랒ꊔⶵ᧨塚욿ፘ삠析䢹촦㏐⃔㖴킪兌⋀䒃Ꙫ詄矲ࠝꏯ姄醹纅㔨ݓ≉䛚疷࿓橖썂ꓺ䴍⋫枽꽇쥚ጰɢ鲰鬂ч⳩̾첶ↄ쎝枭蜨䇋뱱ﱼ㦋㙔㓝메곻ﻑ浩ⱒ䨱牣瞄誓ꛏԁ鹗户
which translates to gibberish with a few recognizable words in it, like ANALYSIS, target, verschwört, GG.

I've checked all my systems for any signs of being compromised, but found nothing. All systems are hardened, ports are non-standard, no brute-force attacks were logged via fail2ban, and logins are key-only (no passwords). So I'm fairly sure my security is as good as it can be at this point. Hetzner told me that this file has been accessed a couple of times by the user of this Storage Box share, nothing unusual.
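If anyone wants to poke at it the same way, the raw bytes of that name and the entry's metadata can be dumped with something like this (the datastore mount point is a placeholder):
Bash:
cd /mnt/datastore/storagebox
# -b escapes non-printable bytes, so the real name of the odd entry becomes visible
ls -lab
# dump every entry name as hex; the odd one should stand out as non-text byte noise
for f in *; do printf '%s' "$f" | xxd; echo; done
# size, owner and timestamps of the visible entries, the odd file included
stat -- *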
At the time that file was created, I also had a kernel panic and a loss of storage connectivity on this PBS host, while active backups from my Proxmox servers were running into this specific share. That's why I think this occurrence might be related to why the file is there in the first place. Maybe some process got nuked and wrote gibberish onto the storage share. But why into the root folder and not into the chunk folders?
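For reference, the relevant journal window can be pulled with something like this (the date is a placeholder, adjust it to the actual day of the crash):
Bash:
# everything logged in the minutes around the hang, with precise timestamps
journalctl --since "2024-08-27 16:55" --until "2024-08-27 17:15" -o short-precise
# only the kernel side, filtered for CIFS messages
journalctl -k --since "2024-08-27 16:55" --until "2024-08-27 17:15" | grep -i cifs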
Kernel panic at the creation time of the 'Chinese file' in the root of my backup folder directory:
Bash:
Aug 27 17:04:14 pbs1 proxmox-backup-proxy[605]: upload_chunk done: 1713701 bytes, b435ec32e71fa6d93238524a2a870eafde790a3e4aac>
Aug 27 17:04:14 pbs1 proxmox-backup-proxy[605]: upload_chunk done: 1562266 bytes, 49949c88fbde6bd8ec366897832b795dbe80d8073a8e>
Aug 27 17:04:28 pbs1 kernel: CIFS: VFS: No writable handle to retry writepages rc=-22
Aug 27 17:04:30 pbs1 proxmox-backup-proxy[605]: POST /dynamic_chunk
Aug 27 17:04:38 pbs1 proxmox-backup-proxy[605]: POST /dynamic_chunk
Aug 27 17:04:43 pbs1 proxmox-backup-proxy[605]: POST /dynamic_chunk
Aug 27 17:04:47 pbs1 proxmox-backup-proxy[605]: POST /dynamic_chunk
Aug 27 17:04:47 pbs1 proxmox-backup-proxy[605]: POST /dynamic_chunk
Aug 27 17:08:18 pbs1 kernel: INFO: task tokio-runtime-w:617 blocked for more than 122 seconds.
Aug 27 17:08:18 pbs1 kernel: Tainted: P O 6.8.8-3-pve #1
Aug 27 17:08:18 pbs1 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 27 17:08:18 pbs1 kernel: task:tokio-runtime-w state:D stack:0 pid:617 tgid:605 ppid:1 flags:0x00000002
Aug 27 17:08:18 pbs1 kernel: Call Trace:
Aug 27 17:08:18 pbs1 kernel: <TASK>
Aug 27 17:08:18 pbs1 kernel: __schedule+0x401/0x15e0
Aug 27 17:08:18 pbs1 kernel: ? __mod_timer+0x27a/0x390
Aug 27 17:08:18 pbs1 kernel: schedule+0x33/0x110
Aug 27 17:08:18 pbs1 kernel: io_schedule+0x46/0x80
Aug 27 17:08:18 pbs1 kernel: folio_wait_bit_common+0x136/0x330
Aug 27 17:08:18 pbs1 kernel: ? __pfx_wake_page_function+0x10/0x10
Aug 27 17:08:18 pbs1 kernel: folio_wait_bit+0x18/0x30
Aug 27 17:08:18 pbs1 kernel: folio_wait_writeback+0x2b/0xa0
Aug 27 17:08:18 pbs1 kernel: __filemap_fdatawait_range+0x90/0x100
Aug 27 17:08:18 pbs1 kernel: filemap_write_and_wait_range+0x94/0xc0
Aug 27 17:08:18 pbs1 kernel: cifs_flush+0x9a/0x140 [cifs]
Aug 27 17:08:18 pbs1 kernel: filp_flush+0x35/0x90
Aug 27 17:08:18 pbs1 kernel: __x64_sys_close+0x34/0x90
Aug 27 17:08:18 pbs1 kernel: x64_sys_call+0x1a20/0x24b0
Aug 27 17:08:18 pbs1 kernel: do_syscall_64+0x81/0x170
Aug 27 17:08:18 pbs1 kernel: ? __f_unlock_pos+0x12/0x20
Aug 27 17:08:18 pbs1 kernel: ? ksys_write+0xe6/0x100
Aug 27 17:08:18 pbs1 kernel: ? syscall_exit_to_user_mode+0x89/0x260
Aug 27 17:08:18 pbs1 kernel: ? do_syscall_64+0x8d/0x170
Aug 27 17:08:18 pbs1 kernel: ? cifs_setattr+0x675/0xfd0 [cifs]
Aug 27 17:08:18 pbs1 kernel: ? evm_inode_setattr+0x69/0x170
Aug 27 17:08:18 pbs1 kernel: ? notify_change+0x45b/0x500
Aug 27 17:08:18 pbs1 kernel: ? chmod_common+0xd0/0x1a0
Aug 27 17:08:18 pbs1 kernel: ? chmod_common+0x140/0x1a0
Aug 27 17:08:18 pbs1 kernel: ? syscall_exit_to_user_mode+0x89/0x260
Aug 27 17:08:18 pbs1 kernel: ? do_syscall_64+0x8d/0x170
Aug 27 17:08:18 pbs1 kernel: ? do_user_addr_fault+0x343/0x6b0
Aug 27 17:08:18 pbs1 kernel: ? irqentry_exit_to_user_mode+0x7e/0x260
Aug 27 17:08:18 pbs1 kernel: ? irqentry_exit+0x43/0x50
Aug 27 17:08:18 pbs1 kernel: ? clear_bhb_loop+0x15/0x70
Aug 27 17:08:18 pbs1 kernel: ? clear_bhb_loop+0x15/0x70
Aug 27 17:08:18 pbs1 kernel: ? clear_bhb_loop+0x15/0x70
Aug 27 17:08:18 pbs1 kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Aug 27 17:08:18 pbs1 kernel: RIP: 0033:0x7e7f6ae5b90a
Aug 27 17:08:18 pbs1 kernel: RSP: 002b:00007e7f6a1fdc80 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
Aug 27 17:08:18 pbs1 kernel: RAX: ffffffffffffffda RBX: 00007e7f58001220 RCX: 00007e7f6ae5b90a
Aug 27 17:08:18 pbs1 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000019
Aug 27 17:08:18 pbs1 kernel: RBP: 0000000000000084 R08: 8080808080808080 R09: 0003030700030001
Aug 27 17:08:18 pbs1 kernel: R10: 2c5a96a1d39df8e1 R11: 0000000000000293 R12: 8000000000000000
Aug 27 17:08:18 pbs1 kernel: R13: 0000000000000085 R14: 0000000000000019 R15: 0000000000000019
Aug 27 17:08:18 pbs1 kernel: </TASK>
Aug 27 17:10:21 pbs1 kernel: INFO: task tokio-runtime-w:617 blocked for more than 245 seconds.
Aug 27 17:10:21 pbs1 kernel: Tainted: P O 6.8.8-3-pve #1
Aug 27 17:10:21 pbs1 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 27 17:10:21 pbs1 kernel: task:tokio-runtime-w state:D stack:0 pid:617 tgid:605 ppid:1 flags:0x00000002
Aug 27 17:10:21 pbs1 kernel: Call Trace:
Aug 27 17:10:21 pbs1 kernel: <TASK>
...
Here are my questions:
- What is the possibility that a crashed PBS writes a file into the root directory of the backup folder, other than .gc-status and .lock? Is there any possibility it could be a malformed/misnamed .lock file?
- At 17:04:14 the filesystem went to Narnia. Could data that had already been transferred but not yet fully written to the storage share end up in the root folder instead of the namespace/chunk folders? (See the check sketched after this list.)
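Independent of the answers, a check like this should show whether anything else ended up where it doesn't belong, followed by a full verify of the datastore (datastore name and mount point are placeholders, and the verify subcommand is written from memory, so correct me if the syntax is off):
Bash:
# any regular file in the datastore root that is not one of the expected housekeeping files
find /mnt/datastore/storagebox -maxdepth 1 -type f ! -name '.gc-status' ! -name '.lock'
# re-verify all snapshots and chunks in the datastore
proxmox-backup-manager verify storagebox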