TASK ERROR: connection error: not connected

devilkin

I recently switched a CT from the default change detection mode to the new metadata one (how I enabled it is shown after the logs below). It works a ton faster, but since then I see the following in the task log on PBS:

Code:
2024-11-29T13:13:08+01:00: starting new backup reader datastore 'PBS-PVE': "/mnt/pbs"
2024-11-29T13:13:08+01:00: protocol upgrade done
2024-11-29T13:13:08+01:00: GET /download
2024-11-29T13:13:08+01:00: download "/mnt/pbs/ct/121/2024-11-28T12:49:29Z/root.mpxar.didx"
2024-11-29T13:13:08+01:00: register chunks in 'root.mpxar.didx' as downloadable.
2024-11-29T13:13:08+01:00: GET /chunk
2024-11-29T13:13:08+01:00: download chunk "/mnt/pbs/.chunks/d600/d600e37b5af402309a7f1561ee93f341df1b563533fa1c1767e46a2010b4b280"
2024-11-29T13:13:08+01:00: GET /chunk
2024-11-29T13:13:08+01:00: download chunk "/mnt/pbs/.chunks/2b17/2b1708baa5f26c9954c33dfad30d5cfbeb86d81556b13c42a549a69dccb1c613"
2024-11-29T13:13:09+01:00: GET /chunk
2024-11-29T13:13:09+01:00: download chunk "/mnt/pbs/.chunks/c415/c4152af9a283d32cf430d01801d7fbcb8db1be1b7463e7f6432235a23bf15578"
2024-11-29T13:13:09+01:00: GET /chunk
2024-11-29T13:13:09+01:00: download chunk "/mnt/pbs/.chunks/0d6d/0d6d87309f58d95fe3313ab75db40896f92c4f86549e796cb15def719d261898"
2024-11-29T13:13:09+01:00: GET /chunk
2024-11-29T13:13:09+01:00: download chunk "/mnt/pbs/.chunks/6c2a/6c2af50ea5c810e770c3d891d4180c2514e8c882c5eca459b1de568febeb433f"
2024-11-29T13:13:14+01:00: GET /chunk
2024-11-29T13:13:14+01:00: download chunk "/mnt/pbs/.chunks/079b/079b96eb65fd66cefaf782bd1e1b90bbceee308988387ad38cedb9e56c39aaea"
2024-11-29T13:13:25+01:00: GET /chunk
2024-11-29T13:13:25+01:00: download chunk "/mnt/pbs/.chunks/fcf5/fcf5ea3cf3f23083364a35893d5c74f7ca2223bba088e580be71e5ff18e550c6"
2024-11-29T13:13:30+01:00: GET /chunk
2024-11-29T13:13:30+01:00: download chunk "/mnt/pbs/.chunks/4dbf/4dbf21e9b5d06e49218f4cd2fa28dbc57f2272f9a756d3d50f4e6ecdb7354e1d"
2024-11-29T13:13:33+01:00: GET /chunk
2024-11-29T13:13:33+01:00: download chunk "/mnt/pbs/.chunks/6faf/6fafc0bc9c7e4d2e80d76f18b4e1ebbc610d605c0a45e98b3c223ef626c590d4"
2024-11-29T13:13:34+01:00: GET /chunk
2024-11-29T13:13:34+01:00: download chunk "/mnt/pbs/.chunks/8dbd/8dbdc54a0e8e62606d4fa87843d71c3f6ec1a5b6d7a574602b5837a25746cd32"
2024-11-29T13:13:40+01:00: GET /chunk
2024-11-29T13:13:40+01:00: download chunk "/mnt/pbs/.chunks/c125/c1250d277a76c79d52aa2f68007e62cf12c278bf086ad8375eaf0de9d0c6c215"
2024-11-29T13:13:40+01:00: GET /chunk
2024-11-29T13:13:40+01:00: download chunk "/mnt/pbs/.chunks/d1b7/d1b77ce3afc3dc90bad8b812b7f03c24fc9437520f3eb715f76a226ff630cb31"
2024-11-29T13:13:41+01:00: GET /chunk
2024-11-29T13:13:41+01:00: download chunk "/mnt/pbs/.chunks/cbbf/cbbf473c6639b50bacaa158b75e51f8b9bdd92fc7ae44220fac273bab01c3ca2"
2024-11-29T13:13:55+01:00: GET /chunk
2024-11-29T13:13:55+01:00: download chunk "/mnt/pbs/.chunks/2ff1/2ff11f1fd1432cb949b3886647cf798c46d8e0f0bdafdb41b341e8ef6c05c474"
2024-11-29T13:13:56+01:00: GET /chunk
2024-11-29T13:13:56+01:00: download chunk "/mnt/pbs/.chunks/876c/876c34c4fa13265b0997f6492d6c4af6237074b618a14c7a021e165c64d9bb8e"
2024-11-29T13:13:57+01:00: GET /chunk
2024-11-29T13:13:57+01:00: download chunk "/mnt/pbs/.chunks/65cf/65cfbf03825c990cec3f3dbf10aab6f180c7e945b32c77166f6f7d2a722158a4"
2024-11-29T13:13:59+01:00: GET /chunk
2024-11-29T13:13:59+01:00: download chunk "/mnt/pbs/.chunks/1f53/1f533567794997eb8cc9dad24bc6465dd0e5beef68ae5b8961c1696ae1d9477d"
2024-11-29T13:14:21+01:00: GET /chunk
2024-11-29T13:14:21+01:00: download chunk "/mnt/pbs/.chunks/13d3/13d39a28309cdc80dff28da09cb855fefe3ccc0ff45a7e69f00d72855d6cc90d"
2024-11-29T13:14:40+01:00: TASK ERROR: connection error: not connected

The task log in PVE shows that it went OK:
Code:
INFO: Starting Backup of VM 121 (lxc)
INFO: Backup started at 2024-11-29 13:13:06
INFO: status = running
INFO: CT Name: nextcloud.home.lan
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/srv/nextcloud') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
INFO: resume vm
INFO: guest is online again after 1 seconds
INFO: creating Proxmox Backup Server archive 'ct/121/2024-11-29T12:13:06Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp892706_121/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./srv/nextcloud --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 121 --backup-time 1732882386 --change-detection-mode metadata --entries-max 1048576 --repository root@pam@pbs.home.lan:PBS-PVE
INFO: Starting backup: ct/121/2024-11-29T12:13:06Z   
INFO: Client name: cube   
INFO: Starting backup protocol: Fri Nov 29 13:13:07 2024   
INFO: Downloading previous manifest (Thu Nov 28 13:49:29 2024)   
INFO: Upload config file '/var/tmp/vzdumptmp892706_121/etc/vzdump/pct.conf' to 'root@pam@pbs.home.lan:8007:PBS-PVE' as pct.conf.blob   
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@pbs.home.lan:8007:PBS-PVE' as root.mpxar.didx   
INFO: Using previous index as metadata reference for 'root.mpxar.didx'   
INFO: processed 294.957 GiB in 1m, uploaded 21.963 MiB   
INFO: Change detection summary:   
INFO:  - 250625 total files (8 hardlinks)   
INFO:  - 241201 unchanged, reusable files with 296.562 GiB data   
INFO:  - 9416 changed or non-reusable files with 642.224 MiB data   
INFO:  - 57.098 MiB padding in 62 partially reused chunks   
INFO: root.ppxar: reused 296.617 GiB from previous snapshot for unchanged files (91997 chunks)   
INFO: root.ppxar: had to backup 359.398 MiB of 297.244 GiB (compressed 100.559 MiB) in 91.69 s (average 3.92 MiB/s)   
INFO: root.ppxar: backup was done incrementally, reused 296.894 GiB (99.9%)   
INFO: root.mpxar: had to backup 68.079 MiB of 68.079 MiB (compressed 10.163 MiB) in 91.79 s (average 759.447 KiB/s)   
INFO: Duration: 305.85s   
INFO: End Time: Fri Nov 29 13:18:13 2024   
INFO: cleanup temporary 'vzdump' snapshot
INFO: Finished Backup of VM 121 (00:05:09)
INFO: Backup finished at 2024-11-29 13:18:15
INFO: Backup job finished successfully
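
For reference, I switched the mode roughly like this (a sketch, assuming the option is called pbs-change-detection-mode as in recent PVE releases; check man vzdump for your version):

Code:
# node-wide default in /etc/vzdump.conf (option name is an assumption, see man vzdump)
pbs-change-detection-mode: metadata

# or for a one-off run of this CT (storage name is a placeholder)
vzdump 121 --storage <pbs-storage> --pbs-change-detection-mode metadata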

Is this something I need to worry about?
 
Hi,
nothing to worry about!

When using the change detection mode metadata, you will have two running tasks on the PBS side: one reading the previous snapshot for metadata comparison, and one for the actual backup writer instance. What you show above is the task log for the reader of the previous snapshot's metadata archive, not the backup writer task. Check the backup writer task for errors (that should be fine, however, as the backup job went through successfully).
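
You can list both tasks and inspect their logs on the PBS host, for example (a sketch using proxmox-backup-manager; see its man page for the exact subcommands on your version):

Code:
# list recent tasks, the reader and the writer show up as separate entries
proxmox-backup-manager task list
# show the full log of one task via its UPID ('UPID:...' is a placeholder)
proxmox-backup-manager task log 'UPID:...'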

I think this might be because the reader instance used to fetch the previous snapshot's metadata archive was disconnected after the backup run finished, without closing the connection gracefully.

Will look into this: the connection for the reader must be closed gracefully before the task finishes, to avoid this error, which might cause confusion.

Thanks a lot for the report!
 
Thank you! Is there anywhere I can keep track of the fixes, to see when they're available in a package? I don't feel like manually patching my system ;)
 
You can keep track of the development on the mailing list, or alternatively open an issue at bugzilla.proxmox.com, referencing this thread so you are subscribed to that issue and get status notifications.
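
Once the fix has been packaged, it should also show up in the package changelog, which you can check after an update, e.g. (standard apt; the package name is an assumption, the fix may land in proxmox-backup-client or the server-side package):

Code:
# show the changelog of the installed/candidate package (package name is an assumption)
apt changelog proxmox-backup-client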
 
