Proxmox unstable when performing restore

gergogyerek
New Member · Jul 7, 2020
Dear Community,
I tried to restore a backup of a container, but it always fails and the whole Proxmox interface becomes unresponsive.
Could you please suggest which logs I should check?
Thanks,
Gergo
 

Attachments

  • proxmox_unresponsive.PNG (41.1 KB)
Hello,

please post the output of pveversion -v and the config of the container you restored.

The syslog, journalctl and dmesg are always helpful ;)
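
For reference, a quick way to collect those right after a failed restore (the unit names and line counts below are just examples, adjust as needed):

Bash:
journalctl -b -e                      # journal of the current boot, jump to the end
journalctl -u pveproxy -u pvedaemon   # logs of the PVE GUI/API services
dmesg -T | tail -n 50                 # recent kernel messages, human-readable timestamps
tail -n 100 /var/log/syslog           # classic syslog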
 
Hi,
please find the requested info below.


Bash:
root@proxmox:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-1
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-1
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
root@proxmox:~#

And apologies, it is not a container, it is a VM. Sorry.
Here is the config:

Bash:
root@proxmox:/etc/pve/qemu-server# cat 103.conf
agent: 1
bios: ovmf
bootdisk: sata0
efidisk0: local-lvm:vm-103-disk-0,size=4M
name: hassOS-4.11
net0: virtio=DE:51:4B:22:B2:A1,bridge=vmbr0
onboot: 1
ostype: l26
sata0: local-lvm:vm-103-disk-1,size=6G
scsihw: virtio-scsi-pci
smbios1: uuid=11dc55cf-7076-451f-a99b-a9022ea5d800
vmgenid: 266cefd6-7152-46eb-88e0-10248fc72841
root@proxmox:/etc/pve/qemu-server#
 
Bash:
restore vma archive: zstd -q -d -c /mnt/picture/dump/vzdump-qemu-103-2020_09_05-02_33_16.vma.zst | vma extract -v -r /var/tmp/vzdumptmp51579.fifo - /var/tmp/vzdumptmp51579
CFG: size: 418 name: qemu-server.conf
DEV: dev_id=1 size: 131072 devname: drive-efidisk0
DEV: dev_id=2 size: 6442450944 devname: drive-sata0
CTIME: Sat Sep  5 02:33:17 2020
  Logical volume "vm-103-disk-1" successfully removed
  Logical volume "vm-103-disk-0" successfully removed
  Rounding up size to full physical extent 4.00 MiB
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-103-disk-0" created.
  WARNING: Sum of all thin volume sizes (152.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (16.00 GiB).
new volume ID is 'local-lvm:vm-103-disk-0'
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-103-disk-1" created.
  WARNING: Sum of all thin volume sizes (158.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (16.00 GiB).
new volume ID is 'local-lvm:vm-103-disk-1'
map 'drive-efidisk0' to '/dev/pve/vm-103-disk-0' (write zeros = 0)
map 'drive-sata0' to '/dev/pve/vm-103-disk-1' (write zeros = 0)
progress 1% (read 64487424 bytes, duration 1 sec)
progress 2% (read 128909312 bytes, duration 1 sec)
progress 3% (read 193331200 bytes, duration 1 sec)
progress 4% (read 257753088 bytes, duration 1 sec)
progress 5% (read 322174976 bytes, duration 1 sec)
progress 6% (read 386596864 bytes, duration 1 sec)
progress 7% (read 451018752 bytes, duration 2 sec)
progress 8% (read 515440640 bytes, duration 2 sec)
progress 9% (read 579862528 bytes, duration 2 sec)
progress 10% (read 644284416 bytes, duration 2 sec)
progress 11% (read 708706304 bytes, duration 2 sec)
progress 12% (read 773128192 bytes, duration 2 sec)
progress 13% (read 837550080 bytes, duration 2 sec)
progress 14% (read 901971968 bytes, duration 2 sec)
progress 15% (read 966393856 bytes, duration 3 sec)
progress 16% (read 1030815744 bytes, duration 4 sec)
progress 17% (read 1095303168 bytes, duration 4 sec)
progress 18% (read 1159725056 bytes, duration 4 sec)
progress 19% (read 1224146944 bytes, duration 5 sec)
progress 20% (read 1288568832 bytes, duration 5 sec)
progress 21% (read 1352990720 bytes, duration 5 sec)
progress 22% (read 1417412608 bytes, duration 6 sec)
progress 23% (read 1481834496 bytes, duration 6 sec)
progress 24% (read 1546256384 bytes, duration 6 sec)
progress 25% (read 1610678272 bytes, duration 7 sec)
progress 26% (read 1675100160 bytes, duration 7 sec)
progress 27% (read 1739522048 bytes, duration 7 sec)
progress 28% (read 1803943936 bytes, duration 7 sec)
progress 29% (read 1868365824 bytes, duration 8 sec)
progress 30% (read 1932787712 bytes, duration 8 sec)
progress 31% (read 1997209600 bytes, duration 9 sec)
progress 32% (read 2061631488 bytes, duration 9 sec)
progress 33% (read 2126053376 bytes, duration 9 sec)
progress 34% (read 2190540800 bytes, duration 10 sec)
progress 35% (read 2254962688 bytes, duration 10 sec)
progress 36% (read 2319384576 bytes, duration 10 sec)
progress 37% (read 2383806464 bytes, duration 10 sec)
progress 38% (read 2448228352 bytes, duration 11 sec)
progress 39% (read 2512650240 bytes, duration 11 sec)
progress 40% (read 2577072128 bytes, duration 11 sec)
progress 41% (read 2641494016 bytes, duration 11 sec)
progress 42% (read 2705915904 bytes, duration 12 sec)
progress 43% (read 2770337792 bytes, duration 12 sec)
progress 44% (read 2834759680 bytes, duration 12 sec)
progress 45% (read 2899181568 bytes, duration 13 sec)
progress 46% (read 2963603456 bytes, duration 13 sec)
progress 47% (read 3028025344 bytes, duration 13 sec)
progress 48% (read 3092447232 bytes, duration 14 sec)
progress 49% (read 3156869120 bytes, duration 14 sec)
progress 50% (read 3221291008 bytes, duration 15 sec)
progress 51% (read 3285778432 bytes, duration 16 sec)
progress 52% (read 3350200320 bytes, duration 18 sec)
progress 53% (read 3414622208 bytes, duration 20 sec)
progress 54% (read 3479044096 bytes, duration 22 sec)
progress 55% (read 3543465984 bytes, duration 24 sec)
progress 56% (read 3607887872 bytes, duration 27 sec)
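
The log ends at 56% with no further output. The WARNING lines above come from LVM: the pve/data thin pool is overcommitted (158.00 GiB of thin volume sizes against 16.00 GiB of free space in the volume group) and automatic thin-pool extension is disabled. If a thin pool runs full mid-restore, writes can stall or fail, which can render the whole host unresponsive. A minimal sketch of checking and enabling autoextension (the threshold/percent values below are only examples; autoextension also needs free extents left in the VG):

Bash:
# show the current setting (100 means autoextension is disabled)
lvmconfig activation/thin_pool_autoextend_threshold
# show how full the thin pool actually is (Data% / Meta% columns)
lvs pve/data
# to enable autoextension, edit /etc/lvm/lvm.conf, e.g.:
#   thin_pool_autoextend_threshold = 80   # extend once the pool is 80% full
#   thin_pool_autoextend_percent   = 20   # grow it by 20% each time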
 
Hi again,

thank you for the output :)

- Did you upgrade your PVE node without rebooting? (If yes, please reboot your PVE node.)

- Could you please post the output of cat /etc/pve/storage.cfg?

- Do you have enough space on your storage?
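
A rough sketch of how to check (generic names/paths, adjust to your storages):

Bash:
df -h /var/lib/vz   # free space on a directory storage (the default 'local')
vgs pve             # free space left in the LVM volume group
lvs pve             # thin pool and thin volume usage (Data% / Meta% columns)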
 
Honestly, I do not remember if I rebooted... let me do it now.


Bash:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup,rootdir
        maxfiles 2
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: picture
        path /mnt/picture
        content backup,images,vztmpl,rootdir,iso,snippets
        maxfiles 5
        shared 0

root@proxmox:~#
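
Since the backup is read from the 'picture' directory storage, it may also be worth confirming that the mount is healthy and that the archive itself decompresses cleanly before retrying; a small sketch, using the archive path from the restore log above:

Bash:
findmnt /mnt/picture   # confirm the backup directory is actually a mountpoint
df -h /mnt/picture     # free space on the backup storage
zstd -t /mnt/picture/dump/vzdump-qemu-103-2020_09_05-02_33_16.vma.zst   # test archive integrity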
 
