Unable to move volume (Root Disk) / Restore backup on main SSD drive

Asiier

New Member
Apr 15, 2020
Hello everybody!
I'm trying to move one of my containers to my main storage drive (an SSD) from my other drive (an HDD using ZFS).
I restored the backup onto the HDD by mistake, and now I'd like to move the container back to the SSD so it performs faster.

The problem is that whenever I try to restore the backup, I get the following errors:

The boot drive is not even half full, but I also tried manually restoring the backup while assigning more space to the root disk. In that case it does create, or rather restore, the container, but I'm unable to start it and no errors are thrown.
Code:
pct restore 115 /path/to/backup --storage local-lvm --rootfs local-lvm:3,size=3G --force
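
When a restore like the one above succeeds but the container then refuses to start without any error, the usual next step is to inspect its configuration and start the underlying LXC instance in the foreground with debug logging. A minimal sketch, assuming container ID 115 as in the command above (the log path is arbitrary):
Code:
# Inspect the restored container's configuration (rootfs storage, size, mount points)
pct config 115

# Try a normal start first
pct start 115

# If it still fails silently, start the LXC container in the foreground
# with debug-level logging to see exactly where the start-up stops
lxc-start -n 115 -F -l DEBUG -o /tmp/lxc-115.log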

If I restore it on the HDD everything works well, but if I try to move the volume to my SSD I get the following error:
Code:
  Logical volume "vm-115-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks:   4096/524288             done                           
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 08fa8716-183a-4efa-b34c-dc15c483ce2f
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

Allocating group tables:  0/16     done                           
Writing inode tables:  0/16     done                           
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:  0/16     done

rsync: write failed on "/var/lib/lxc/115/.copy-volume-1/usr/lib/python2.7/dist-packages/six.py": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]
  Logical volume "vm-115-disk-0" successfully removed
TASK ERROR: command '/usr/bin/rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system '--bwlimit=0' /var/lib/lxc/115/.copy-volume-2/ /var/lib/lxc/115/.copy-volume-1' failed: exit code 11
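
A hedged workaround sketch, assuming the move is done with pct's volume-move command (move_volume on older PVE releases, move-volume on newer ones): grow the rootfs on its current storage first, so the ext4 volume created on the LVM-thin target has headroom for the rsync copy.
Code:
# Grow the container's root filesystem by 2 GiB on its current (ZFS) storage
pct resize 115 rootfs +2G

# Then move the root filesystem to the SSD-backed LVM-thin storage
pct move_volume 115 rootfs local-lvm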

Thank you in advance!
 
Can you post the output of 'df -h' as well as 'zfs list' (if using ZFS) or 'lvs -a' (if using LVM)?
 
Sure!
I use ZFS for my 6 TB HDD, and LVM for my 256 GB SSD and for another 1 TB HDD.

Code:
root@Proxmox:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                     7.8G     0  7.8G   0% /dev
tmpfs                    1.6G  129M  1.5G   9% /run
/dev/mapper/pve-root      55G  4.0G   48G   8% /
tmpfs                    7.8G   22M  7.8G   1% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
WD6TB                    3.4T  128K  3.4T   1% /WD6TB
WD6TB/Backups            3.5T   55G  3.4T   2% /WD6TB/Backups
/dev/fuse                 30M   20K   30M   1% /etc/pve
WD6TB/subvol-115-disk-0  2.0G  1.3G  819M  61% /WD6TB/subvol-115-disk-0
tmpfs                    1.6G     0  1.6G   0% /run/user/0

Code:
root@Proxmox:~# zfs list
NAME                      USED  AVAIL     REFER  MOUNTPOINT
WD6TB                    1.91T  3.37T      104K  /WD6TB
WD6TB/Backups            54.5G  3.37T     54.5G  /WD6TB/Backups
WD6TB/subvol-115-disk-0  1.20G   819M     1.20G  /WD6TB/subvol-115-disk-0
WD6TB/vm-101-disk-0      1.86T  3.37T     1.86T  -

Code:
root@Proxmox:~# lvs -a
  LV              VG      Attr       LSize   Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve     twi-aotz-- 140.45g                36.43  2.77         
  [data_tdata]    pve     Twi-ao---- 140.45g                                   
  [data_tmeta]    pve     ewi-ao----   1.43g                                   
  [lvol0_pmspare] pve     ewi-------   1.43g                                   
  root            pve     -wi-ao----  55.75g                                   
  swap            pve     -wi-ao----   8.00g                                   
  vm-101-disk-0   pve     Vwi-aotz--  30.00g data           99.10               
  vm-102-disk-0   pve     Vwi-aotz--   6.50g data           91.56               
  vm-105-disk-0   pve     Vwi-aotz--  15.00g data           90.61               
  vm-110-disk-0   pve     Vwi-aotz--   2.00g data           94.92               
  [lvol0_pmspare] storage ewi-------  <9.32g                                   
  storage         storage twi-aotz-- 912.76g                52.95  2.62         
  [storage_tdata] storage Twi-ao---- 912.76g                                   
  [storage_tmeta] storage ewi-ao----  <9.32g                                   
  vm-101-disk-0   storage Vwi-a-tz-- 750.00g storage        64.44               
  vm-101-disk-1   storage Vwi-aotz-- 650.00g storage        0.00
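
As a rough cross-check of the figures above, the Data% column already shows how full each thin pool is; a minimal sketch of querying just the pool-level usage (pool names taken from the output above, free-space figures rounded):
Code:
# pve/data:        140.45g total, 36.43 % allocated  ->  ~89 GiB unallocated on the SSD pool
# storage/storage: 912.76g total, 52.95 % allocated  -> ~430 GiB unallocated on the 1 TB pool
lvs --units g -o lv_name,lv_size,data_percent,metadata_percent pve/data storage/storage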
 
How are you copying or restoring the backups? Can you post the exact command or GUI path (screenshots?) you are using? It certainly seems confusing: the error is definitely an "out-of-disk-space" one, but I don't see a disk where that would be the case...

Also, what's in your /etc/pve/storage.cfg?
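
For reference, a purely illustrative sketch of what a storage.cfg combining directory, LVM-thin, and ZFS storages can look like (the entries below reuse names seen in this thread but are assumptions, not this system's actual file):
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: WD6TB
        pool WD6TB
        content rootdir,images
        sparse 1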

The boot drive is not even half full, but I also tried manually restoring the backup while assigning more space to the root disk. In that case it does create, or rather restore, the container, but I'm unable to start it and no errors are thrown.
I'm a bit confused by this part. What exactly did you attempt here?
 
