The restore shows progress 100% ... then just keeps spinning in the web console (only restoring for testing, a small 64 GB LXC).
Edit: a reboot of the PVE host fixed it ... very strange things.
please share the configuration file included in the backup, the full restore task log and the full output of pveversion -v

Hi,
I have the same problem. I tried to restore a VM from PBS: it got stuck at 100% and the VM was left locked. I tried to restore another VM and it got stuck at 0%, and the system partly crashed. After that, all the VMs and datastores just showed a question mark, and the web UI stopped showing any data that requires loading.
The latter problem was solved by restarting the services:
service pve-cluster restart
service corosync restart
service pvestatd restart
service pveproxy restart
service pvedaemon restart
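The same recovery can be sketched as a small script, assuming a standard PVE 7.x host with systemd: restart each of the services above only if it reports as inactive (the guard makes it a harmless no-op on a non-systemd machine):

```shell
# Restart the PVE services listed above, but only those that systemd
# reports as not active. Assumes a standard Proxmox VE 7.x install;
# the 'command -v systemctl' guard skips everything on non-systemd hosts.
checked=0
for svc in pve-cluster corosync pvestatd pveproxy pvedaemon; do
    checked=$(( checked + 1 ))
    if command -v systemctl >/dev/null 2>&1 \
            && ! systemctl is-active --quiet "$svc"; then
        echo "restarting $svc"
        systemctl restart "$svc" || echo "failed to restart $svc"
    fi
done
```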
Everything is up to date:
pveversion -v:
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
With qm unlock <ID> I'm able to remove the VMs. But every new attempt to restore the VM still gets stuck at:
Logical volume "vm-111-disk-1" created.
Nothing happens after that. Pressing stop works, though.
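For reference, the unlock and cleanup looks roughly like this (VMID 111 as in the log line above; the volume name is from my setup, so verify yours with qm config and lvs first). This is a sketch, guarded so it does nothing on a non-PVE machine:

```shell
# Clean up a VM left locked by an aborted restore (hypothetical VMID 111).
# Run only on the PVE host; verify names with 'qm config 111' and 'lvs'.
VMID=111
if command -v qm >/dev/null 2>&1; then
    qm unlock "$VMID"            # clear the lock left by the stuck restore
    qm destroy "$VMID" --purge   # remove the VM and its half-restored disks
    # If the logical volume survives the destroy, remove it by hand
    # (adjust the volume group name to your storage):
    # lvremove /dev/pve/vm-111-disk-1
else
    echo "qm not found: run this on the PVE host"
fi
```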
What logs should I paste, and with what command?

pveversion -v and proxmox-backup-manager versions --verbose. How long did you wait for the restore operation to finish? What does pvesm status show?

pveversion -v:
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-2
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.7-pve1
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/ubuntu-20.04.4-live-server-amd64.iso,media=cdrom,size=1270M
memory: 8000
meta: creation-qemu=6.2.0,ctime=1660769649
name: xxx8
net0: virtio=36:1E:FD:CC:7C:AF,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-107-disk-0,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=46ca16d2-3b01-4791-a689-cb33fef4c607
sockets: 1
vmgenid: 890a0fca-ebb2-4679-854c-f9b86d268874
#qmdump#map:scsi0:drive-scsi0:local-zfs::
proxmox-backup-manager versions --verbose
proxmox-backup 2.3-1 running kernel: 5.15.64-1-pve
proxmox-backup-server 2.3.2-1 running version: 2.2.7
pve-kernel-5.15 7.3-1
pve-kernel-helper 7.3-1
pve-kernel-5.15.64-1-pve 5.15.64-1
pve-kernel-5.15.35-1-pve 5.15.35-3
ifupdown2 3.1.0-1+pmx3
libjs-extjs 7.0.0-1
proxmox-backup-docs 2.3.2-1
proxmox-backup-client 2.3.2-1
proxmox-mini-journalreader 1.2-1
proxmox-offline-mirror-helper 0.5.0-1
proxmox-widget-toolkit 3.5.3
pve-xtermjs 4.16.0-1
smartmontools 7.2-pve3
zfsutils-linux 2.1.7-pve2
EDIT: I just noticed that a move-disk operation also renders the host unusable. I guess I need to start reading up on mount options. My other Proxmox hosts with HDD RAIDs have no issues, but this one with SSDs can't seem to handle these operations.
Task viewer: Datastore tallipbs15 Backup vm/154
2023-01-15T13:43:32+02:00: starting new backup on datastore 'tallipbs15': "vm/154/2023-01-15T11:43:28Z"
2023-01-15T13:43:32+02:00: GET /previous: 400 Bad Request: no valid previous backup
2023-01-15T13:43:32+02:00: created new fixed index 1 ("vm/154/2023-01-15T11:43:28Z/drive-scsi0.img.fidx")
2023-01-15T13:43:32+02:00: add blob "/mnt/datastore/tallipbs15/vm/154/2023-01-15T11:43:28Z/qemu-server.conf.blob" (389 bytes, comp: 389)
2023-01-19T05:56:51+02:00: Upload statistics for 'drive-scsi0.img.fidx'
2023-01-19T05:56:51+02:00: UUID: 70ee4242687348b2b8b476c4deb3b9c1
2023-01-19T05:56:51+02:00: Checksum: d79969b71aea093389ce93e0cc4e09b4844a0c94767a14307891dfe4cb5e4e02
2023-01-19T05:56:51+02:00: Size: 1073741824000
2023-01-19T05:56:51+02:00: Chunk count: 256000
2023-01-19T05:56:51+02:00: Upload size: 880925474816 (82%)
2023-01-19T05:56:51+02:00: Duplicates: 45971+236 (18%)
2023-01-19T05:56:51+02:00: Compression: 63%
2023-01-19T05:56:51+02:00: successfully closed fixed index 1
2023-01-19T05:56:51+02:00: add blob "/mnt/datastore/tallipbs15/vm/154/2023-01-15T11:43:28Z/index.json.blob" (328 bytes, comp: 328)
2023-01-19T05:56:54+02:00: successfully finished backup
2023-01-19T05:56:54+02:00: backup finished successfully
2023-01-19T05:56:54+02:00: TASK OK
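The upload statistics in the log above are internally consistent and can be sanity-checked with a little shell arithmetic: the size divided by the chunk count gives the 4 MiB fixed chunks PBS uses for VM images, and the upload size works out to the reported 82%:

```shell
# Numbers copied from the task log above.
total=1073741824000        # Size
uploaded=880925474816      # Upload size
chunks=256000              # Chunk count

# Upload ratio: 880925474816 / 1073741824000 -> 82%
pct=$(( uploaded * 100 / total ))
echo "upload ratio: ${pct}%"

# Chunk size implied by the log: total / chunks = 4194304 bytes = 4 MiB
chunk_size=$(( total / chunks ))
echo "chunk size: $(( chunk_size / 1024 / 1024 )) MiB"
```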