Hi,
Don't know why but we can't restore a vzdump of an LXC container into a Proxmox cluster composed of 6 nodes with ZFS.
This is the situation:
We have a Proxmox cluster with 6 nodes, each one running ZFS. Each night, dumps are made of all containers and VMs and sent via FTP and SCP to different locations.
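The nightly job is basically a full vzdump with lzo compression, roughly like this (the dump directory and exact options shown here are just an example of the shape of the job, not necessarily our real settings):
vzdump --all 1 --compress lzo --mode snapshot --dumpdir /var/lib/vz/dump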
We wanted to restore an LXC container dump to the cluster with another ID. The original LXC container (ID 234) is still running in the cluster, and we are restoring the dump from 2 days ago with ID 99X.
Each time we try to restore, we get an error and the restore never completes.
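For reference, this is roughly how we run the restore on the CLI (the storage name here is only a placeholder):
pct restore 999 /var/lib/vz/dump/vzdump-lxc-234-2019_10_16-04_28_05.tar.lzo --storage local-zfs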
This is what we get:
extracting archive '/var/lib/vz/dump/vzdump-lxc-234-2019_10_16-04_28_05.tar.lzo'
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 64203161600 (60GiB, 150MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: unable to restore CT 999 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --lzop --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/999/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
The same happens if we try to restore to another ID, and even with the dump from 1 day ago.
However, when we moved the tar.lzo file to another Proxmox server (no ZFS) outside the cluster and restored it there, everything ran fine: the container was restored with ID 999 and we could recover files.
We checked the vzdump logs and, on both days, vzdump ran fine and the dump was created without errors.
The question is: why can't we restore an old vzdump into the cluster using another ID?
Here are the versions of the pve-* packages:
pve-cluster 5.0-37
pve-container 2.0-39
pve-docs 5.4-2
pve-edk2-firmware 1.20190312-1
pve-firewall 3.0-21
pve-firmware 2.0-6
pve-ha-manager 2.0-9
pve-i18n 1.1-4
pve-kernel-4.15 5.4-2
pve-kernel-4.15.18-14-pve 4.15.18-39
pve-libspice-server1 0.14.1-2
pve-manager 5.4-6
pve-qemu-kvm 3.0.1-2
Also, replication runs every 5 minutes, replicating each node to another node in a circular fashion.
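If it helps, the replication jobs and their state can be listed on each node with the standard pvesr CLI (shown here just for completeness):
pvesr status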
The dump logs are clean and don't show any errors, and furthermore, the dump could be restored on a Proxmox server outside the cluster.
Could someone tell us what we are doing wrong?
Regards,