Backups causing server to go offline

kaipee

I have a strange issue whereby scheduled backups (that have been configured and running for over a year) are now causing my physical server to go offline.

It is a 2-node cluster (I know, accepted): pve and pve2.
All VMs run on pve; only Proxmox Backup Server runs on pve2 (set up like this for testing the failures).
Proxmox Backup Server mounts an NFS share from TrueNAS.

Backups are scheduled to run daily at 02:30, pointing to PBS.
[screenshot: scheduled backup job configuration]
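For reference, the NFS-backed datastore is set up roughly like this (the TrueNAS hostname and export path below are placeholders, not my real values):
Code:
# /etc/fstab on the PBS host (example server/export)
truenas.local:/mnt/tank/pbs  /mnt/truenas  nfs  defaults,_netdev  0  0

# register the mount point as a PBS datastore
proxmox-backup-manager datastore create truenas-nfs /mnt/truenas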

A number of backup jobs complete, until at some point the server shuts off.
Code:
...
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: starting new backup on datastore 'truenas-nfs': "vm/127/2024-01-23T08:06:55Z"
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: download 'index.json.blob' from previous backup.
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: register chunks in 'drive-scsi0.img.fidx' from previous backup.
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: download 'drive-scsi0.img.fidx' from previous backup.
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: created new fixed index 1 ("vm/127/2024-01-23T08:06:55Z/drive-scsi0.img.fidx")
Jan 23 03:06:57 pbs proxmox-backup-proxy[656]: add blob "/mnt/truenas/vm/127/2024-01-23T08:06:55Z/qemu-server.conf.blob" (396 bytes, comp: 396)
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Upload statistics for 'drive-scsi0.img.fidx'
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: UUID: 2f837bbe75634a6eb8ac051245ae2a8d
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Checksum: c138ebd31da3b5bd442cfd3fb7ef8ed9549469a3fe5fca5d2a816461c5a604bd
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Size: 25769803776
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Chunk count: 6144
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Upload size: 4194304 (0%)
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Duplicates: 6143+1 (100%)
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: Compression: 0%
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: successfully closed fixed index 1
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: add blob "/mnt/truenas/vm/127/2024-01-23T08:06:55Z/index.json.blob" (438 bytes, comp: 438)
Jan 23 03:07:22 pbs proxmox-backup-[656]: pbs proxmox-backup-proxy[656]: syncing filesystem
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: successfully finished backup
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: backup finished successfully
Jan 23 03:07:22 pbs proxmox-backup-proxy[656]: TASK OK
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: Upload backup log to datastore 'truenas-nfs', root namespace vm/127/2024-01-23T08:06:55Z/client.log.blob
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: starting new backup on datastore 'truenas-nfs': "vm/128/2024-01-23T08:07:24Z"
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: download 'index.json.blob' from previous backup.
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: register chunks in 'drive-scsi0.img.fidx' from previous backup.
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: download 'drive-scsi0.img.fidx' from previous backup.
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: created new fixed index 1 ("vm/128/2024-01-23T08:07:24Z/drive-scsi0.img.fidx")
Jan 23 03:07:24 pbs proxmox-backup-proxy[656]: add blob "/mnt/truenas/vm/128/2024-01-23T08:07:24Z/qemu-server.conf.blob" (405 bytes, comp: 405)
Jan 23 03:10:01 pbs CRON[19197]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 23 03:10:01 pbs CRON[19198]: (root) CMD (test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r)
Jan 23 03:10:01 pbs CRON[19197]: pam_unix(cron:session): session closed for user root
Jan 23 03:17:01 pbs CRON[19203]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 23 03:17:01 pbs CRON[19204]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 23 03:17:01 pbs CRON[19203]: pam_unix(cron:session): session closed for user root
Jan 23 03:22:29 pbs proxmox-backup-proxy[656]: backup failed: connection error: timed out
Jan 23 03:22:29 pbs proxmox-backup-proxy[656]: removing failed backup
Jan 23 03:22:29 pbs proxmox-backup-[656]: pbs proxmox-backup-proxy[656]: removing backup snapshot "/mnt/truenas/vm/128/2024-01-23T08:07:24Z"
Jan 23 03:22:29 pbs proxmox-backup-proxy[656]: TASK ERROR: removing backup snapshot "/mnt/truenas/vm/128/2024-01-23T08:07:24Z" failed - Directory not empty (os error 39)
...


All Proxmox systems have plenty of resources.

PVE (frozen at time of crash): [resource usage screenshot]

PVE2: [resource usage screenshot]

PBS: [resource usage screenshot]

Pihole: [screenshot showing network activity dropping off around 03:00, when the server dies during backup]

I've been troubleshooting this for just over a week now. I thought it might be a problematic VM, so I turned everything off and ran each VM for a day, one by one.
For the past week nothing has shut down.
The last thing I did (last night) was to start the PBS VM and re-enable backups. The server shut off again.
I'm pretty confident this is somehow caused by the scheduled backups; I'm just not sure how that could cause the hardware to halt.
 
Update:

I've restarted the physical server for pve and noticed it's not starting the PVE API proxy server (pveproxy).

Code:
Jan 23 01:58:51 pve pveproxy[2452]: starting 1 worker(s)
Jan 23 01:58:51 pve pveproxy[2452]: worker 2453351 started
Jan 23 02:00:49 pve pveproxy[2386841]: worker exit
Jan 23 02:00:49 pve pveproxy[2452]: worker 2386841 finished
Jan 23 02:00:49 pve pveproxy[2452]: starting 1 worker(s)
Jan 23 02:00:49 pve pveproxy[2452]: worker 2454470 started
-- Boot 98f8f54e2f1a4d7394316876de2ce616 --
Jan 23 11:21:00 pve systemd[1]: Starting PVE API Proxy Server...
Jan 23 11:21:01 pve pvecm[2759]: ipcc_send_rec[1] failed: Connection refused
Jan 23 11:21:01 pve pvecm[2759]: ipcc_send_rec[2] failed: Connection refused
Jan 23 11:21:01 pve pvecm[2759]: ipcc_send_rec[3] failed: Connection refused
Jan 23 11:21:01 pve pvecm[2759]: Unable to load access control list: Connection refused
Jan 23 11:21:02 pve pveproxy[2792]: starting server
Jan 23 11:21:02 pve pveproxy[2792]: starting 3 worker(s)
Jan 23 11:21:02 pve pveproxy[2792]: worker 2793 started
Jan 23 11:21:02 pve pveproxy[2792]: worker 2794 started
Jan 23 11:21:02 pve pveproxy[2792]: worker 2795 started
Jan 23 11:21:02 pve systemd[1]: Started PVE API Proxy Server.
Jan 23 11:21:02 pve pveproxy[2794]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1998.
Jan 23 11:21:02 pve pveproxy[2793]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1998.
Jan 23 11:21:02 pve pveproxy[2795]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1998.
Jan 23 11:21:07 pve pveproxy[2794]: worker exit
Jan 23 11:21:07 pve pveproxy[2793]: worker exit
Jan 23 11:21:07 pve pveproxy[2795]: worker exit
Jan 23 11:21:07 pve pveproxy[2792]: worker 2795 finished

The /etc/pve/ directory is empty. Any suggestions?

Code:
root@pve:~# ls -al /etc/pve/
total 16
drwxr-xr-x   2 root root  4096 Sep 28  2018 .
drwxr-xr-x 105 root root 12288 Jan 10 22:12 ..
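
Since /etc/pve is a FUSE mount provided by pmxcfs (the pve-cluster service), an empty directory suggests the mount simply isn't there rather than the files being gone. Quick check (rough sketch):
Code:
systemctl status pve-cluster
mount | grep /etc/pve    # a running pmxcfs appears here as a fuse mount on /etc/pve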



Edit:

Looks like the cluster configuration database for this node is now corrupt.

Code:
root@pve:~# journalctl -b -u pve-cluster

-- Journal begins at Sun 2022-04-24 13:42:19 EDT, ends at Tue 2024-01-23 12:14:18 EST. --
Jan 23 11:56:17 pve systemd[1]: Starting The Proxmox VE cluster filesystem...
Jan 23 11:56:17 pve pmxcfs[1758]: [database] crit: select returned error: database disk image is malformed
Jan 23 11:56:17 pve pmxcfs[1758]: [database] crit: DB load failed
Jan 23 11:56:17 pve pmxcfs[1758]: [database] crit: select returned error: database disk image is malformed
Jan 23 11:56:17 pve pmxcfs[1758]: [database] crit: DB load failed
Jan 23 11:56:17 pve pmxcfs[1758]: [main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
Jan 23 11:56:17 pve pmxcfs[1758]: [main] notice: exit proxmox configuration filesystem (-1)
Jan 23 11:56:17 pve pmxcfs[1758]: [main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
Jan 23 11:56:17 pve pmxcfs[1758]: [main] notice: exit proxmox configuration filesystem (-1)
Jan 23 11:56:17 pve systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Jan 23 11:56:17 pve systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Jan 23 11:56:17 pve systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Jan 23 11:56:17 pve systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 1.
Jan 23 11:56:17 pve systemd[1]: Stopped The Proxmox VE cluster filesystem.
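
To confirm the SQLite file itself was damaged (quick check; assumes the sqlite3 CLI is installed on the node):
Code:
sqlite3 /var/lib/pve-cluster/config.db "PRAGMA integrity_check;"
# a healthy database prints just "ok"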

Edit 2:

I've (luckily) recovered the broken node.

Backup the database
Code:
cd /var/lib/pve-cluster
cp config.db config.db.bak

Dump all current data
Code:
sqlite3 config.db

sqlite> .mode insert
sqlite> .output dump_all.sql
sqlite> .dump
sqlite> .exit
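
(The same dump can also be produced non-interactively; .dump already emits INSERT statements, so the .mode line is mostly optional.)
Code:
sqlite3 config.db .dump > dump_all.sql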

Trim any invalid transactions
Code:
cat dump_all.sql | grep -v TRANSACTION | grep -v ROLLBACK | grep -v COMMIT >dump_all_notrans.sql
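
Equivalent single grep, if preferred:
Code:
grep -vE 'TRANSACTION|ROLLBACK|COMMIT' dump_all.sql > dump_all_notrans.sql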

Remove the corrupted database
Code:
rm config.db

Build a new database from the sanitized transactions
Code:
sqlite3 config.db ".read dump_all_notrans.sql"

Test the new database (it might just work now)
Code:
systemctl start pve-cluster.service; journalctl -fu pve-cluster

If you see errors about duplicate entries (e.g. "found entry with duplicate name"), delete the oldest version of the duplicate.
Example error
Code:
Jan 25 21:59:08 pve pmxcfs[155186]: [database] crit: found entry with duplicate name '128.conf' - A:(inode = 0x000000000609D98D, parent = 0x000000000000000D, v./mtime = 0x609D98F/0x1705685251) vs. B:(inode = 0x0000000006103272, parent = 0x000000000000000D, v./mtime = 0x6103274/0x1705997244)

Find the duplicates
Code:
sqlite> SELECT mtime, name FROM tree WHERE name = "128.conf" ORDER BY mtime;
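
To list every duplicated name in one pass instead of chasing them error by error (based on the tree columns visible in the error message):
Code:
sqlite> SELECT parent, name, COUNT(*) FROM tree GROUP BY parent, name HAVING COUNT(*) > 1;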

Delete the oldest entry (likely the one listed first)
Code:
sqlite> DELETE FROM tree WHERE mtime = "1705685251";
sqlite> .exit
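
Deleting by mtime works as long as that timestamp is unique; a more targeted alternative is to delete by the inode shown in the error message (SQLite accepts hex literals):
Code:
sqlite> DELETE FROM tree WHERE inode = 0x000000000609D98D;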

Start the service and check for any other errors. Hopefully everything starts fine.
Code:
systemctl start pve-cluster.service; journalctl -fu pve-cluster

If everything is good, reboot the system.
 