backups - exit code 11

moofone

Hi,

I've been trying to get backups working for the last two days without luck. I had it working in the past, but I haven't been able to come up with a repeatable process, especially on the latest beta (0.9.0).

I've installed the backup server on top of Debian 10.6 running in a FreeNAS VM (to avoid running it as a VM inside Proxmox, as recommended).

I have two Proxmox VE servers (both 6.2-12) that I'm trying to connect to the Proxmox Backup service, but when I run a backup of a VM, I get this every time (exit code 11). It's the same whether the backup server is installed in FreeNAS as a VM, or as a Proxmox VM with an entirely different datastore.

Please tell me what additional debugging info I should share.

To create the datastore, I ran:
Code:
proxmox-backup-manager disk initialize vdb
proxmox-backup-manager disk fs create ice-store --disk vdb --filesystem ext4 --add-datastore true
proxmox-backup-manager disk smart-attributes vdb
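(Side note: a quick way to sanity-check that the datastore ended up registered and writable; the /mnt/datastore/ice-store path is the default created by fs create and it matches the datastore show output further down:)
Code:
proxmox-backup-manager datastore list
findmnt /mnt/datastore/ice-store
touch /mnt/datastore/ice-store/.write-test && rm /mnt/datastore/ice-store/.write-test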

If I try a backup of the same VM to a local store (not using the Proxmox Backup Server) it works okay. The failing job looks like this:

INFO: starting new backup job: vzdump 201 --mode snapshot --storage backup-server --node watlab --remove 0
INFO: Starting Backup of VM 201 (lxc)
INFO: Backup started at 2020-10-12 17:34:00
INFO: status = running
INFO: CT Name: ice-dev
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/mnt/icedev') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: ice-dev
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/mnt/icedev') in backup
INFO: starting first sync /proc/37336/root/ to /var/tmp/vzdumptmp36060
ERROR: Backup of VM 201 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/37336/root//./ /proc/37336/root//./mnt/icedev /var/tmp/vzdumptmp36060' failed: exit code 11
INFO: Failed at 2020-10-12 17:34:37
INFO: Backup job finished with errors
TASK ERROR: job errors
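For context, rsync's exit code 11 means "error in file I/O", so the failure happens while rsync copies the container into the temporary staging directory (/var/tmp/vzdumptmp36060 above), not while talking to the backup server. Some generic checks on the PVE node that could narrow this down (standard Linux tools, nothing Proxmox-specific):
Code:
df -h /var/tmp        # enough free space there for the whole container?
df -i /var/tmp        # enough free inodes?
dmesg | tail -n 50    # any underlying disk/filesystem errors from the kernel?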

root@pbs:~# proxmox-backup-manager disk list
┌──────┬─────────┬─────┬───────────┬────────────────┬───────┬─────────┬─────────┐
│ name │ used    │ gpt │ disk-type │ size           │ model │ wearout │ status  │
╞══════╪═════════╪═════╪═══════════╪════════════════╪═══════╪═════════╪═════════╡
│ vda  │ mounted │   1 │ hdd       │    17179869184 │       │       - │ unknown │
├──────┼─────────┼─────┼───────────┼────────────────┼───────┼─────────┼─────────┤
│ vdb  │ mounted │   1 │ hdd       │ 10995116310528 │       │       - │ unknown │
└──────┴─────────┴─────┴───────────┴────────────────┴───────┴─────────┴─────────┘

root@pbs:~# proxmox-backup-manager datastore show ice-store
┌──────┬──────────────────────────┐
│ Name │ Value                    │
╞══════╪══════════════════════════╡
│ name │ ice-store                │
├──────┼──────────────────────────┤
│ path │ /mnt/datastore/ice-store │
└──────┴──────────────────────────┘


Edit: I was able to create a simple LXC with only one local root disk and back it up, so I'm not sure why the "icedev" mount point isn't working. Maybe something related to snapshots not being supported and having to switch to suspend mode?
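(To see which storage backs each volume and whether it supports snapshots, something along these lines should do it; the output will of course depend on the node's storage setup:)
Code:
pct config 201              # shows rootfs and mp0 with the storage each lives on
cat /etc/pve/storage.cfg    # 'lvm:' storages cannot snapshot container volumes, 'lvmthin:' can
pvesm status                # overview of all storages and their types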
 
Hi,
does using stop mode for the backup work instead? What kind of storage does the mount point live on? Could you share the container and relevant storage config?
 
Thank you for the reply.

Stop mode does work:
INFO: starting new backup job: vzdump 201 --remove 0 --mode stop --storage prox-backup-server --node watlab
INFO: Starting Backup of VM 201 (lxc)
INFO: Backup started at 2020-10-13 08:55:57
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: ice-dev
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/mnt/icedev') in backup
INFO: stopping vm
INFO: creating Proxmox Backup Server archive 'ct/201/2020-10-13T11:55:57Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=encrypt --keyfd=14 pct.conf:/var/tmp/vzdumptmp40620/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./mnt/icedev --skip-lost-and-found --backup-type ct --backup-id 201 --backup-time 1602590157 --repository root@pam@192.168.2.206:ice-store
INFO: Starting backup: ct/201/2020-10-13T11:55:57Z
INFO: Client name: watlab
INFO: Starting backup protocol: Tue Oct 13 08:56:01 2020
INFO: Upload config file '/var/tmp/vzdumptmp40620/etc/vzdump/pct.conf' to 'root@pam@192.168.2.206:8007:ice-store' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.2.206:8007:ice-store' as root.pxar.didx
INFO: root.pxar: had to upload 8.55 GiB of 9.11 GiB in 71.49s, average speed 122.44 MiB/s).
INFO: root.pxar: backup was done incrementally, reused 579.88 MiB (6.2%)
INFO: Uploaded backup catalog (4.72 MiB)
INFO: Duration: 71.53s
INFO: End Time: Tue Oct 13 08:57:12 2020
INFO: restarting vm
INFO: guest is online again after 79 seconds
INFO: Finished Backup of VM 201 (00:01:19)
INFO: Backup finished at 2020-10-13 08:57:16
INFO: Backup job finished successfully
TASK OK


I tried snapshot mode again (which falls back to suspend because it's not a thin volume); it fails as expected.

I am able to back up if I target its own storage (icedev). Please let me know if I can provide more info.
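Since suspend mode first rsyncs the whole container into a temporary directory on the PVE host (/var/tmp by default) and only then uploads it, that directory needs enough room for the full container. If the staging area turned out to be the culprit, one option (untested here; the path below is just a placeholder) would be pointing vzdump at a bigger tmpdir:
Code:
# in /etc/vzdump.conf (node-wide default)
tmpdir: /srv/vzdump-tmp

# or per job
vzdump 201 --mode suspend --storage prox-backup-server --tmpdir /srv/vzdump-tmp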

Not sure if this is what you are asking for:
root@watlab:/var/lib/lxc/201# cat config
lxc.cgroup.relative = 0
lxc.cgroup.dir.monitor = lxc.monitor/201
lxc.cgroup.dir.container = lxc/201
lxc.cgroup.dir.container.inner = ns
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.seccomp.profile = /usr/share/lxc/config/pve-userns.seccomp
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.mount.auto = sys:mixed
lxc.monitor.unshare = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = ice-dev
lxc.cgroup.memory.limit_in_bytes = 68719476736
lxc.cgroup.memory.memsw.limit_in_bytes = 69256347648
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/201/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth201i0
lxc.net.0.hwaddr = 92:90:50:86:C7:EC
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/lxcnetaddbr
lxc.cgroup.cpuset.cpus = 0-31
 
I re-imaged the server without plain LVM (using only LVM-thin) and the problem has gone away.
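(For anyone finding this later: re-imaging shouldn't be strictly necessary. If I'm not mistaken, pct move_volume can move the affected volumes onto an LVM-thin storage so snapshot mode becomes available; the storage name below is just an example, and the container may need to be stopped first:)
Code:
pct move_volume 201 mp0 local-lvm
pct move_volume 201 rootfs local-lvm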
 
Glad that you were able to find a workaround. Could you please mark the thread as [SOLVED] so others know what to expect?

That said, I'd still like to investigate a bit. Could you post the output of:
Code:
pct config 201
Were both the rootfs and the mount point on the non-thin LVM storage previously? What kind of system/service was running in the container?
 
