Restore stuck at 100% progress: what's wrong with PVE 7.3?

ledufakademy

progress 100% ... then it just keeps spinning and spinning in the web console (only restoring a small 64 GB LXC for testing)

Edit: After rebooting the PVE host it was OK ... very strange
 
Hi,
please share the output of pveversion -v and pct config <ID>, replacing <ID> with the ID of your container, and the task log for the restore operation.
 
Hi,
I have the same problem. I tried to restore a VM from PBS. It got stuck at 100% and the VM was locked. I tried to restore another VM and it got stuck at 0% and the system partly crashed. Now all the VMs and datastores just show ? and the web UI doesn't show any data that requires loading.

The latter problem was solved by:
service pve-cluster restart
service corosync restart
service pvestatd restart
service pveproxy restart
service pvedaemon restart
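For what it's worth, the same restart sequence can be wrapped in a small loop so a failure of any one service is reported immediately (restart_pve_stack is my own helper name, not a Proxmox command):

```shell
#!/bin/sh
# Restart the PVE service stack in the order listed above,
# aborting on the first failure.
restart_pve_stack() {
    for svc in pve-cluster corosync pvestatd pveproxy pvedaemon; do
        service "$svc" restart || { echo "failed to restart $svc" >&2; return 1; }
    done
}
```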

Everything is up to date:
pveversion -v:
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)

With qm unlock <ID> I'm able to remove the VMs. But still, every attempt to restore a VM now gets stuck at:
Logical volume "vm-111-disk-1" created. Nothing happens after that. Pressing stop works, though.

What logs to paste and with what command?
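For reference, the unlock step mentioned above can be given a small guard so a mistyped VMID never reaches qm (unlock_vm is a hypothetical wrapper name; qm is the standard PVE CLI):

```shell
#!/bin/sh
# Unlock a VM left locked by a stuck restore task.
# unlock_vm is a hypothetical wrapper; qm is the standard PVE CLI.
unlock_vm() {
    vmid="$1"
    # accept only a purely numeric VMID
    case "$vmid" in
        ''|*[!0-9]*) echo "usage: unlock_vm <numeric VMID>" >&2; return 1 ;;
    esac
    qm unlock "$vmid"
}
```

After unlocking, `qm config <ID>` should no longer show a `lock:` line, so removing the VM works again.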
 
Hi,
please share the configuration file included in the backup, the full restore task log and the full output of pveversion -v. How long did you wait for the restore operation to finish? What does pvesm status show?

Please also share the task log from the PBS side and the output of proxmox-backup-manager versions --verbose.
 
Now, after multiple attempts and rebooting the computer multiple times, but doing nothing differently, it worked. Weird, but OK. I'll get back if the problems return.
 
I have two fairly similar VMs to restore, one backed up on 7.2 and the other on 7.3. The machine to restore to is on 7.3. The first VM was restored in 10 minutes, the other in 1.5 hours. Both went to 100% within a few minutes; the rest was waiting for the task to actually finish.
 
How big are the disks of the VMs? Did you use the same target storage? Please provide the VM's configuration files and output of pveversion -v.
 
VM disk sizes are about 100 GB with 75% empty space. I tried different target storages, no difference. Last night I started 3 restores at the same time. The web UI went completely unresponsive, but during the night all three VMs were successfully restored.

pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-2
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.7-pve1
 
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/ubuntu-20.04.4-live-server-amd64.iso,media=cdrom,size=1270M
memory: 8000
meta: creation-qemu=6.2.0,ctime=1660769649
name: xxx8
net0: virtio=36:1E:FD:CC:7C:AF,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-107-disk-0,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=46ca16d2-3b01-4791-a689-cb33fef4c607
sockets: 1
vmgenid: 890a0fca-ebb2-4679-854c-f9b86d268874
#qmdump#map:scsi0:drive-scsi0:local-zfs::
 
proxmox-backup-manager versions --verbose
proxmox-backup 2.3-1 running kernel: 5.15.64-1-pve
proxmox-backup-server 2.3.2-1 running version: 2.2.7
pve-kernel-5.15 7.3-1
pve-kernel-helper 7.3-1
pve-kernel-5.15.64-1-pve 5.15.64-1
pve-kernel-5.15.35-1-pve 5.15.35-3
ifupdown2 3.1.0-1+pmx3
libjs-extjs 7.0.0-1
proxmox-backup-docs 2.3.2-1
proxmox-backup-client 2.3.2-1
proxmox-mini-journalreader 1.2-1
proxmox-offline-mirror-helper 0.5.0-1
proxmox-widget-toolkit 3.5.3
pve-xtermjs 4.16.0-1
smartmontools 7.2-pve3
zfsutils-linux 2.1.7-pve2
 
So in conclusion: it just takes hours after reaching 100% for the task to actually finish, and during that time the host is quite heavily loaded. I don't know whether it is a bug or not, but I can live with this. I can also provide more info if needed.

The host is an HPE ProLiant with 400 GB of RAM, all SSD, and 64 x Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz (4 sockets).
 
EDIT: Just noticed that a move disk operation also renders the host unusable. I guess I need to start reading up on mount options. My other Proxmox hosts with HDD RAIDs have no issues, but this one with SSDs can't seem to handle these operations.
 
What is the exact model number of the SSD, or of each of those SSDs?

Edit: And which ZFS RAID mode are you using?
 
I have had the same problem since 7.x, on 3 different HPE ProLiants with SSDs and lvm-thin. There is no hang when using an SSD-backed directory storage directly, without lvm-thin. I have noticed this many times.
 
I have noticed this problem with HP SSDs as well as other server SSDs, and the same problem when used with normal (thick) LVM. Support says it is a problem with the backup battery, but it wasn't: I have battery backup online on all HP servers. As I said, after formatting the disk e.g. with XFS and mounting it as a directory, this problem does not occur. There is also no problem with moving disks then.
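To see which storages on a host are lvm-thin versus plain directories, the storage definitions in /etc/pve/storage.cfg can be listed. A sketch that parses a sample config (the sample entries below are made up; on a real host point it at /etc/pve/storage.cfg):

```shell
#!/bin/sh
# List "type name" for each storage in a PVE storage.cfg-style file.
parse_storages() {
    # each storage section starts at column 0 with "<type>: <name>"
    awk -F': ' '/^[a-z]+: /{print $1, $2}' "$1"
}

# made-up sample config for demonstration
cat > /tmp/storage.cfg.sample <<'EOF'
dir: local
	path /var/lib/vz
	content iso,backup

lvmthin: local-lvm
	thinpool data
	vgname pve
	content images,rootdir
EOF

parse_storages /tmp/storage.cfg.sample
```

On the hosts discussed above, the storages that hang would show up with type lvmthin, while the working directory storages show up as dir.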
 
So much for the SSD theory. I did a backup task on another computer with HDDs. It took 3 days and 16 hours for 1000 GB, most of it empty. Some backups randomly take 10 minutes.


Task viewer: Datastore tallipbs15 Backup vm/154

2023-01-15T13:43:32+02:00: starting new backup on datastore 'tallipbs15': "vm/154/2023-01-15T11:43:28Z"
2023-01-15T13:43:32+02:00: GET /previous: 400 Bad Request: no valid previous backup
2023-01-15T13:43:32+02:00: created new fixed index 1 ("vm/154/2023-01-15T11:43:28Z/drive-scsi0.img.fidx")
2023-01-15T13:43:32+02:00: add blob "/mnt/datastore/tallipbs15/vm/154/2023-01-15T11:43:28Z/qemu-server.conf.blob" (389 bytes, comp: 389)
2023-01-19T05:56:51+02:00: Upload statistics for 'drive-scsi0.img.fidx'
2023-01-19T05:56:51+02:00: UUID: 70ee4242687348b2b8b476c4deb3b9c1
2023-01-19T05:56:51+02:00: Checksum: d79969b71aea093389ce93e0cc4e09b4844a0c94767a14307891dfe4cb5e4e02
2023-01-19T05:56:51+02:00: Size: 1073741824000
2023-01-19T05:56:51+02:00: Chunk count: 256000
2023-01-19T05:56:51+02:00: Upload size: 880925474816 (82%)
2023-01-19T05:56:51+02:00: Duplicates: 45971+236 (18%)
2023-01-19T05:56:51+02:00: Compression: 63%
2023-01-19T05:56:51+02:00: successfully closed fixed index 1
2023-01-19T05:56:51+02:00: add blob "/mnt/datastore/tallipbs15/vm/154/2023-01-15T11:43:28Z/index.json.blob" (328 bytes, comp: 328)
2023-01-19T05:56:54+02:00: successfully finished backup
2023-01-19T05:56:54+02:00: backup finished successfully
2023-01-19T05:56:54+02:00: TASK OK
 
