Hi,
I upgraded from 4.4 to 5.0 with a fresh install, as recommended. I backed up my VMs and LXCs and restored them on the fresh install. All went fine except one LXC, my fileserver. The CT holds about 400 GB of data and has a rootfs size of 550 GB. The restore process hangs after extracting 60 GB (exactly 58.3 GB) of the archive, every time. The process also ran overnight and for more than two days, but nothing happened. I have been stuck on this for over a week.
I have no clue why this happens. I also tried restoring to NFS storage, then from NFS storage to local-zfs, and I copied the vzdump to /var/lib/vz/dump and tried from there; it gets stuck at 60 GB every time.
I also tried it from the web UI and from the console (pct restore 200 -storage local-zfs vzdump-lxc-200-2017_06_24-13_00_22.tar.gz). The archive itself is fine; I extracted it to /var/lib/vz/dump/200 without errors.
Is there a way to restore the CT manually, let's say create a new CT and copy/extract/rsync the tar contents into it? Roughly like the sketch below.
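What I have in mind is something like the following (only a rough sketch, not tested; the new CT ID 210, the template name and the subvol path are placeholders):
Code:
# create a new CT with a big enough rootfs on local-zfs; the template is just an example
pct create 210 local:vztmpl/debian-8.0-standard_8.7-1_amd64.tar.gz -rootfs local-zfs:550

# the new CT is not started yet, so its rootfs can be filled directly:
# drop the template content and pull in the backup, either from the already extracted copy ...
rm -rf /rpool/data/subvol-210-disk-1/*
rsync -aAXH /var/lib/vz/dump/200/ /rpool/data/subvol-210-disk-1/

# ... or straight from the archive
# tar -xzpf /var/lib/vz/dump/vzdump-lxc-200-2017_06_24-13_00_22.tar.gz \
#     -C /rpool/data/subvol-210-disk-1 --numeric-owner

# afterwards adjust /etc/pve/lxc/210.conf (hostname, network, mount points) by hand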
Here are my system configs (PVE 5.0 with ZFS):
Code:
root@heimdall:~# pveversion -v
proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.11-1-pve: 4.10.11-9
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-10
qemu-server: 5.0-12
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-8
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-1
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
root@heimdall:~#
ZFS Config:
Code:
root@heimdall:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h44m with 0 errors on Sun Jul 9 01:08:28 2017
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda2       ONLINE       0     0     0
            sdb2       ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            sdc        ONLINE       0     0     0
            sdd        ONLINE       0     0     0
        logs
          nvme0n1p1    ONLINE       0     0     0
        cache
          nvme0n1p2    ONLINE       0     0     0

errors: No known data errors
root@heimdall:~#
root@heimdall:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                          893G  2.64T    96K  /rpool
rpool/ROOT                     785G  2.64T    96K  /rpool/ROOT
rpool/ROOT/pve-1               785G  2.64T   785G  /
rpool/data                    99.2G  2.64T    96K  /rpool/data
rpool/data/subvol-200-disk-1  58.3G   492G  58.3G  /rpool/data/subvol-200-disk-1
rpool/data/subvol-201-disk-1  1.03G  49.0G  1.03G  /rpool/data/subvol-201-disk-1
rpool/data/subvol-202-disk-1   779M  7.24G   779M  /rpool/data/subvol-202-disk-1
rpool/data/subvol-203-disk-1   685M  49.3G   685M  /rpool/data/subvol-203-disk-1
rpool/data/subvol-204-disk-1  9.83G  40.2G  9.83G  /rpool/data/subvol-204-disk-1
rpool/data/vm-100-disk-1      1.14G  2.64T  1.14G  -
rpool/data/vm-300-disk-1      27.5G  2.64T  27.5G  -
rpool/swap                    8.50G  2.65T   601M  -
root@heimdall:~#
Enough disk space left:
Code:
root@heimdall:~# df -h
Filesystem                                         Size  Used Avail Use% Mounted on
udev                                               6.9G     0  6.9G   0% /dev
tmpfs                                              1.4G  9.0M  1.4G   1% /run
rpool/ROOT/pve-1                                   3.5T  786G  2.7T  23% /
tmpfs                                              6.9G   46M  6.8G   1% /dev/shm
tmpfs                                              5.0M     0  5.0M   0% /run/lock
tmpfs                                              6.9G     0  6.9G   0% /sys/fs/cgroup
rpool                                              2.7T  128K  2.7T   1% /rpool
rpool/ROOT                                         2.7T  128K  2.7T   1% /rpool/ROOT
rpool/data                                         2.7T  128K  2.7T   1% /rpool/data
rpool/data/subvol-201-disk-1                        50G  1.1G   49G   3% /rpool/data/subvol-201-disk-1
rpool/data/subvol-202-disk-1                       8.0G  780M  7.3G  10% /rpool/data/subvol-202-disk-1
rpool/data/subvol-203-disk-1                        50G  685M   50G   2% /rpool/data/subvol-203-disk-1
rpool/data/subvol-204-disk-1                        50G  9.9G   41G  20% /rpool/data/subvol-204-disk-1
/dev/fuse                                           30M   20K   30M   1% /etc/pve
172.16.10.4:/raid0/data/_NAS_NFS_Exports_/proxmox  8.6T  1.3T  7.3T  15% /mnt/pve/thecus-proxmox
tmpfs                                              1.4G     0  1.4G   0% /run/user/0
rpool/data/subvol-200-disk-1                       550G   59G  492G  11% /var/lib/lxc/200/rootfs
root@heimdall:~#
Archive and extracted archive:
Code:
root@heimdall:~# ls -alh /var/lib/vz/dump/
total 388G
drwxr-xr-x  3 root root    4 Jul 9 22:10 .
drwxr-xr-x  7 root root    7 Jun 28 14:51 ..
drwxr-xr-x 21 root root   21 May 30 23:26 200
-rw-r--r--  1 root root 388G Jul 8 07:43 vzdump-lxc-200-2017_06_24-13_00_22.tar.gz
root@heimdall:~#
root@heimdall:~# du -sh /var/lib/vz/dump/200/
397G /var/lib/vz/dump/200/
root@heimdall:~#
Please let me know if you need further information.
BR,
Andreas