CT throwing error

300cpilot

I am getting the following error on a single CT. I just switched to Backup Server; everything else seems OK so far.


INFO: Starting Backup of VM 706 (lxc)
INFO: Backup started at 2023-05-29 00:16:11
INFO: status = running
INFO: CT Name: node006
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
Creating snap: 0% complete...failed.
snapshot create failed: starting cleanup
no lock found trying to remove 'backup' lock
ERROR: Backup of VM 706 failed - rbd snapshot 'vm-706-disk-0' error: Creating snap: 0% complete...failed.
INFO: Failed at 2023-05-29 00:16:12

pct list shows no locks.
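
A stale lock would also show up in the container config itself. Assuming CT 706 from the log above, it can be checked and, if present, cleared like this:

pct config 706 | grep -i lock    # look for a leftover 'lock: backup' entry
pct unlock 706                   # clear a stale lock if one is shown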

Thanks,
 
The snapshot creation fails. What kind of storage is the container on? What does the config look like?
 
The storage is Ceph across 18 drives on 3 nodes, and it reports as healthy.

arch: amd64
cores: 4
features: nesting=1
hostname: node006
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.90.1.1,gw6=2001:470:b:6d1::1,hwaddr=62:D1:71:57:E1:A3,ip=10.90.1.16/24,ip6=2001:::/64,tag=50,type=veth
ostype: ubuntu
parent: vzdump
rootfs: Main:vm-706-disk-0,size=100G
swap: 4096
unprivileged: 1
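
The parent: vzdump line above means the container config still references a vzdump snapshot, so a leftover snapshot on the RBD image is one thing worth checking. The pool name Main below is only a guess taken from the rootfs line; the Proxmox storage ID and the Ceph pool name do not have to match, so verify it against /etc/pve/storage.cfg first:

rbd snap ls Main/vm-706-disk-0                    # list existing snapshots on the image
rbd status Main/vm-706-disk-0                     # show watchers holding the image open
rbd snap create Main/vm-706-disk-0@manual-test    # try creating a snapshot outside of vzdump
rbd snap rm Main/vm-706-disk-0@manual-test        # remove the test snapshot again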
 
The backup does seem to drag this thing down to its knees now.
Also, manual creation of a snapshot does work.
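
For reference, a manual snapshot test through the Proxmox tooling (rather than rbd directly) would look roughly like this, with testsnap as an arbitrary name:

pct snapshot 706 testsnap       # create a snapshot of CT 706
pct listsnapshot 706            # confirm it shows up
pct delsnapshot 706 testsnap    # remove the test snapshot again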
 
I'd check the Ceph logs, maybe they can tell why the snapshot creation fails at that point.
 
These logs are pretty big. In ceph.log there are thousands of these active+clean lines, but I do not see anything labeled as an error. I started a backup, had it fail, and then went through the log. The log itself is concerning in that it just cycles over and over. What am I actually looking for?

2023-06-01T05:42:28.794011-0600 mgr.Node-A (mgr.27406561) 14330 : cluster [DBG] pgmap v14354: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 2.1 MiB/s rd, 203 KiB/s wr, 33 op/s
2023-06-01T05:42:30.795222-0600 mgr.Node-A (mgr.27406561) 14331 : cluster [DBG] pgmap v14355: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 2.1 MiB/s rd, 551 KiB/s wr, 42 op/s
2023-06-01T05:42:32.795687-0600 mgr.Node-A (mgr.27406561) 14332 : cluster [DBG] pgmap v14356: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 2.1 MiB/s rd, 534 KiB/s wr, 38 op/s
2023-06-01T05:42:34.796492-0600 mgr.Node-A (mgr.27406561) 14333 : cluster [DBG] pgmap v14357: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 2.1 MiB/s rd, 555 KiB/s wr, 41 op/s
2023-06-01T05:42:36.797655-0600 mgr.Node-A (mgr.27406561) 14334 : cluster [DBG] pgmap v14358: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 1.4 MiB/s rd, 575 KiB/s wr, 42 op/s
2023-06-01T05:42:38.798371-0600 mgr.Node-A (mgr.27406561) 14335 : cluster [DBG] pgmap v14359: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 490 KiB/s wr, 28 op/s
2023-06-01T05:42:40.799731-0600 mgr.Node-A (mgr.27406561) 14336 : cluster [DBG] pgmap v14360: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 477 KiB/s wr, 32 op/s
2023-06-01T05:42:42.800447-0600 mgr.Node-A (mgr.27406561) 14337 : cluster [DBG] pgmap v14361: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 124 KiB/s wr, 22 op/s
2023-06-01T05:42:44.801287-0600 mgr.Node-A (mgr.27406561) 14338 : cluster [DBG] pgmap v14362: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 341 KiB/s rd, 125 KiB/s wr, 26 op/s
2023-06-01T05:42:46.802543-0600 mgr.Node-A (mgr.27406561) 14339 : cluster [DBG] pgmap v14363: 513 pgs: 1 active+clean+scrubbing+deep, 512 active+clean; 3.9 TiB data, 7.8 TiB used, 5.8 TiB / 14 TiB avail; 2.1 MiB/s rd, 112 KiB/s wr, 39 op/s
2023-06-01T05:42:47.358765-0600 osd.19 (osd.19) 67 : cluster [DBG] 2.d1 deep-scrub ok
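
The periodic pgmap lines from the mgr are just status output; the warnings and errors are what matter. Something along these lines narrows it down, assuming the default log locations and using OSD 19 only as an example:

ceph health detail                                  # current warnings/errors in the cluster
grep -E '\[WRN\]|\[ERR\]' /var/log/ceph/ceph.log    # filter the cluster log for problems
journalctl -u ceph-osd@19 --since "1 hour ago"      # per-OSD daemon log around the failed backup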
 
I was finally able to clone the CT, and the clone can be backed up. I then deleted the problem CT.
I guess I'm good until the next time. Thanks for your help.
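
For anyone hitting the same issue, the clone-and-replace workaround looks roughly like this; the new CT ID 716 and the backup storage name are placeholders:

pct clone 706 716 --full                          # full clone of the problem CT to a new ID
vzdump 716 --mode snapshot --storage backup-pbs   # verify the clone backs up cleanly
pct destroy 706                                   # remove the problem CT once satisfied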
 
