Cannot restore backups in Proxmox 4.2-2

Ernie Dunbar

For some reason, I've lately been unable to restore backups in Proxmox. It doesn't seem to be specific to any particular backup; I've had the same problem restoring all of them. I have plenty of space on the local system for the tmp directory - for example, the most recent backup I've tried is 9 GB, and the Proxmox server I'm restoring to has 88 GB free on the root partition.
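For reference, free space can be double-checked with something like this (the restore scratch files go under /var/tmp, which sits on the root filesystem here):

# df -h / /var/tmp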

The Proxmox console produces this error:

TASK ERROR: command 'lzop -d -c /OSISOs/dump/vzdump-qemu-300-2018_01_21-00_59_50.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp10495.fifo - /var/tmp/vzdumptmp10495' failed: unable to create image: got lock timeout - aborting command

When I try this on the command line, I get the following output:

# lzop -d -c /OSISOs/dump/vzdump-qemu-300-2018_01_21-00_59_50.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp10495.fifo - /var/tmp/vzdumptmp10495
CFG: size: 372 name: qemu-server.conf
DEV: dev_id=1 size: 68719476736 devname: drive-ide0
CTIME: Sun Jan 21 00:59:51 2018

** (process:11575): ERROR **: unable to open fifo /var/tmp/vzdumptmp10495.fifo - No such file or directory
Trace/breakpoint trap

This doesn't make any sense, because the directory /var/tmp/vzdumptmp10495 does indeed exist. If I remove it and try to start again, I get the same error.
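For what it's worth, the error complains about the .fifo path rather than the directory itself, and the two can be checked separately with something like:

# ls -ld /var/tmp/vzdumptmp10495 /var/tmp/vzdumptmp10495.fifo
# test -p /var/tmp/vzdumptmp10495.fifo && echo "fifo present" || echo "fifo missing"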

This isn't a huge emergency just yet, but the next time I have to restore something important, it will be.
 
I think I see the problem here. It looks like an update to `lzop` has deprecated the '-r' switch - it's certainly not in the man page. Removing that switch and its argument from the command line allowed the restore of the file in question to complete.
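In other words, the same pipeline with the -r <fifo> part dropped - roughly this:

# lzop -d -c /OSISOs/dump/vzdump-qemu-300-2018_01_21-00_59_50.vma.lzo | vma extract -v - /var/tmp/vzdumptmp10495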

Which is odd, because this works fine on a completely different Proxmox server (also v4.2-2), and the version of lzop is the same - v1.03-3.
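For anyone comparing hosts, the lzop version can be checked with something like:

# lzop --version
# dpkg -s lzop | grep '^Version'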
 
It turns out that the real error was near the top of the output:


[2018-03-09 21:22:42.773153] I [dht-shared.c:311:dht_init_regex] 0-gv4-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2018-03-09 21:22:42.775224] I [socket.c:3578:socket_init] 0-gv4-client-2: SSL support is NOT enabled
[2018-03-09 21:22:42.775244] I [socket.c:3593:socket_init] 0-gv4-client-2: using system polling thread
[2018-03-09 21:22:42.775544] I [socket.c:3578:socket_init] 0-gv4-client-1: SSL support is NOT enabled
[2018-03-09 21:22:42.775553] I [socket.c:3593:socket_init] 0-gv4-client-1: using system polling thread
[2018-03-09 21:22:42.775845] I [socket.c:3578:socket_init] 0-gv4-client-0: SSL support is NOT enabled
[2018-03-09 21:22:42.775853] I [socket.c:3593:socket_init] 0-gv4-client-0: using system polling thread
[2018-03-09 21:22:42.775875] I [glfs-master.c:93:notify] 0-gfapi: New graph 636c6f75-6432-2d31-3630-37332d323031 (0) coming up
[2018-03-09 21:22:42.775895] I [client.c:2294:notify] 0-gv4-client-0: parent translators are ready, attempting connect on transport
[2018-03-09 21:22:42.776576] I [client.c:2294:notify] 0-gv4-client-1: parent translators are ready, attempting connect on transport
[2018-03-09 21:22:42.776810] I [client.c:2294:notify] 0-gv4-client-2: parent translators are ready, attempting connect on transport
[2018-03-09 21:22:42.777968] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-gv4-client-0: changing port to 49153 (from 0)
[2018-03-09 21:22:42.778017] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-gv4-client-2: changing port to 49153 (from 0)
[2018-03-09 21:22:42.779415] I [client-handshake.c:1677:select_server_supported_programs] 0-gv4-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2018-03-09 21:22:42.779556] I [client-handshake.c:1677:select_server_supported_programs] 0-gv4-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2018-03-09 21:22:42.779896] I [client-handshake.c:1462:client_setvolume_cbk] 0-gv4-client-0: Connected to 206.12.82.111:49153, attached to remote volume '/brick2/gv4'.
[2018-03-09 21:22:42.779910] I [client-handshake.c:1474:client_setvolume_cbk] 0-gv4-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2018-03-09 21:22:42.779968] I [afr-common.c:4131:afr_notify] 0-gv4-replicate-0: Subvolume 'gv4-client-0' came back up; going online.
[2018-03-09 21:22:42.780084] I [client-handshake.c:450:client_set_lk_version_cbk] 0-gv4-client-0: Server lk version = 1
[2018-03-09 21:22:42.798178] I [client-handshake.c:1462:client_setvolume_cbk] 0-gv4-client-2: Connected to 206.12.82.110:49153, attached to remote volume '/brick2/gv4'.
[2018-03-09 21:22:42.798192] I [client-handshake.c:1474:client_setvolume_cbk] 0-gv4-client-2: Server and Client lk-version numbers are not same, reopening the fds

This looks like an issue with the way that Proxmox mounts Gluster shares. In particular:

[2018-03-09 21:22:42.779910] I [client-handshake.c:1474:client_setvolume_cbk] 0-gv4-client-0: Server and Client lk-version numbers are not same, reopening the fds

We have another Gluster share that's mounted from fstab, and it works without a hitch. I'm really not sure why this one times out, especially considering that I can easily read and write files to the not-working mount on the command line.
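For comparison, the working share is just a plain fstab mount, while the one that times out is the glusterfs storage defined in /etc/pve/storage.cfg. Roughly like this (the fstab hostname, volume and mount point, the storage ID, and the content line are placeholders/guesses, not my exact config):

/etc/fstab (the share that works):
gluster-server:/othervol  /mnt/othervol  glusterfs  defaults,_netdev  0  0

/etc/pve/storage.cfg (the share that times out):
glusterfs: gv4-storage
        server 206.12.82.111
        volume gv4
        content images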
 
