lxc restore to ceph fails

Discussion in 'Proxmox VE: Installation and configuration' started by RobFantini, Jan 28, 2017.

  1. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    more info to follow.

    on 2 different systems:

    Code:
    /dev/rbd0
    mke2fs 1.42.12 (29-Aug-2014)
    Discarding device blocks: 4096/1048576 done
    Creating filesystem with 1048576 4k blocks and 262144 inodes
    Filesystem UUID: ca4b4f79-960c-4971-ad74-b9db99d2310a
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736
    
    Allocating group tables: 0/32 done
    Writing inode tables: 0/32 done
    Creating journal (32768 blocks): done
    Multiple mount protection is enabled with update interval 5 seconds.
    Writing superblocks and filesystem accounting information: 0/32 done
    
    extracting archive '/mnt/pve/bkup-longterm/dump/vzdump-lxc-12101-2017_01_28-06_07_02.tar.lzo'
    tar: ./var/log/nginx/nodejs.access.log: Cannot write: No space left on device
    tar: Skipping to next header
    tar: ./var/log/nginx/GNUSparseFile.12379: Cannot mkdir: No space left on device
    
    TASK ERROR: command 'tar xpf /mnt/pve/bkup-longterm/dump/vzdump-lxc-12101-2017_01_28-06_07_02.tar.lzo --totals --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-xattr-write' -C /var/lib/lxc/12102/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
    
    will try restoring to a different filesystem next. restore to zfs worked.

    note there is plenty of space on ceph
     
  2. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    version info -
    Code:
    # pveversion -v
    proxmox-ve: 4.4-78 (running kernel: 4.4.35-2-pve)
    pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
    pve-kernel-4.4.35-1-pve: 4.4.35-77
    pve-kernel-4.4.35-2-pve: 4.4.35-78
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.0-1
    libqb0: 1.0-1
    pve-cluster: 4.0-48
    qemu-server: 4.0-102
    pve-firmware: 1.1-10
    libpve-common-perl: 4.0-85
    libpve-access-control: 4.0-19
    libpve-storage-perl: 4.0-71
    pve-libspice-server1: 0.12.8-1
    vncterm: 1.2-1
    pve-docs: 4.4-1
    pve-qemu-kvm: 2.7.1-1
    pve-container: 1.0-90
    pve-firewall: 2.0-33
    pve-ha-manager: 1.0-38
    ksm-control-daemon: 1.2-1
    glusterfs-client: 3.5.2-2+deb8u3
    lxc-pve: 2.0.6-5
    lxcfs: 2.0.5-pve2
    criu: 1.6.0-1
    novnc-pve: 0.5-8
    smartmontools: 6.5+svn4324-1~pve80
    zfsutils: 0.6.5.8-pve13~bpo80
    ceph: 10.2.5-1~bpo80+1
    
    ceph disk usage
    Code:
    # ceph df
    GLOBAL:
      SIZE  AVAIL  RAW USED  %RAW USED
      2651G  2126G  525G  19.81
    POOLS:
      NAME  ID  USED  %USED  MAX AVAIL  OBJECTS
      rbd  0  0  0  664G  0
      ceph-lxc  1  18219M  1.75  996G  4653
      ceph-kvm  2  244G  19.73  996G  62777
    
     
  3. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Seems the disk is too small. Try to create a larger disk when you restore (command line).
     
  4. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Could someone please suggest a way to make the restore target disk larger than the size pct restore chooses?
     
  5. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    I assume the container does not have additional mount points?
    You can simply set the rootfs parameter.

    # pct restore <VMID> <ARCHIVE> --rootfs <STORAGE>:<SIZE_IN_GB>

    So for example, to restore to a 4GB disk on storage 'local' do:

    # pct restore 132 local:backup/vzdump-lxc-108-2015_09_11-11_43_12.tar --rootfs local:4
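
    for the setup in this thread that would be something along the lines of the sketch below (the storage name is the ceph storage used later in this thread, and 8GB is only an example size - it has to be at least as big as the uncompressed data):

    Code:
    # restore CT 12102 from the backup to the 'ceph-lxc3' storage with an 8GB rootfs
    pct restore 12102 /mnt/pve/bkup-longterm/dump/vzdump-lxc-12101-2017_01_28-06_07_02.tar.lzo --rootfs ceph-lxc3:8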
     
  6. fireon

    fireon Well-Known Member
    Proxmox Subscriber

    How can something like this happen?
     
  7. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    the source lxc has a 4G disk,
    only 1.8G in use:
    Code:
    # df
    Filesystem  Type  Size  Used Avail Use% Mounted on
    tank/lxc/subvol-12102-disk-1 zfs  4.0G  1.8G  2.3G  45% /
    
    I tried using a 5GB target for lxc on ceph and that also failed:
    Code:
    pct restore 12104 /mnt/pve/bkup-longterm/dump/vzdump-lxc-12102-2017_02_04-06_07_02.tar.lzo --rootfs ceph-lxc3:5
    ..
    tar: ./root/.npm/etag/1.6.0/package: Cannot mkdir: No such file or directory
    tar: ./root: Cannot mkdir: No space left on device
    tar: ./root/.npm/etag/1.6.0/package/package.json: Cannot open: No such file or directory
    tar: ./root: Cannot mkdir: No space left on device
    tar: ./root/.npm/etag/1.6.0/package.tgz: Cannot open: No such file or directory
    tar: ./root: Cannot mkdir: No space left on device
    ..
    Removing image: 98% complete...
    Removing image: 99% complete...
    Removing image: 100% complete...done.
    command 'tar xpf /mnt/pve/bkup-longterm/dump/vzdump-lxc-12102-2017_02_04-06_07_02.tar.lzo --totals --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-xattr-write' -C /var/lib/lxc/12104/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
    
    so there may be a bug to report?
     
  8. fabian

    fabian Proxmox Staff Member
    Staff Member

    please post the configuration of the container as well.
     
  9. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Code:
    # cat 12102.conf
    arch: amd64
    cpulimit: 2
    cpuunits: 1024
    hostname: pad
    memory: 1024
    net0: name=eth0,bridge=vmbr1,gw=10.1.3.1,hwaddr=32:39:30:66:37:33,ip=10.1.3.20/24,type=veth
    onboot: 1
    ostype: debian
    rootfs: lxc-zfs:subvol-12102-disk-1,size=4G
    swap: 512
    
     
  10. fabian

    fabian Proxmox Staff Member
    Staff Member

    and "zfs get all tank/lxc/subvol-12102-disk-1"?
     
  11. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Code:
    sys5  ~ # zfs get all tank/lxc/subvol-12102-disk-1
    NAME  PROPERTY  VALUE  SOURCE
    tank/lxc/subvol-12102-disk-1  type  filesystem  -
    tank/lxc/subvol-12102-disk-1  creation  Thu Feb  2 12:46 2017  -
    tank/lxc/subvol-12102-disk-1  used  1.71G  -
    tank/lxc/subvol-12102-disk-1  available  2.29G  -
    tank/lxc/subvol-12102-disk-1  referenced  1.71G  -
    tank/lxc/subvol-12102-disk-1  compressratio  2.48x  -
    tank/lxc/subvol-12102-disk-1  mounted  yes  -
    tank/lxc/subvol-12102-disk-1  quota  none  default
    tank/lxc/subvol-12102-disk-1  reservation  none  default
    tank/lxc/subvol-12102-disk-1  recordsize  128K  default
    tank/lxc/subvol-12102-disk-1  mountpoint  /tank/lxc/subvol-12102-disk-1  default
    tank/lxc/subvol-12102-disk-1  sharenfs  off  default
    tank/lxc/subvol-12102-disk-1  checksum  on  default
    tank/lxc/subvol-12102-disk-1  compression  lz4  inherited from tank
    tank/lxc/subvol-12102-disk-1  atime  off  inherited from tank
    tank/lxc/subvol-12102-disk-1  devices  on  default
    tank/lxc/subvol-12102-disk-1  exec  on  default
    tank/lxc/subvol-12102-disk-1  setuid  on  default
    tank/lxc/subvol-12102-disk-1  readonly  off  default
    tank/lxc/subvol-12102-disk-1  zoned  off  default
    tank/lxc/subvol-12102-disk-1  snapdir  hidden  default
    tank/lxc/subvol-12102-disk-1  aclinherit  restricted  default
    tank/lxc/subvol-12102-disk-1  canmount  on  default
    tank/lxc/subvol-12102-disk-1  xattr  sa  received
    tank/lxc/subvol-12102-disk-1  copies  1  default
    tank/lxc/subvol-12102-disk-1  version  5  -
    tank/lxc/subvol-12102-disk-1  utf8only  off  -
    tank/lxc/subvol-12102-disk-1  normalization  none  -
    tank/lxc/subvol-12102-disk-1  casesensitivity  sensitive  -
    tank/lxc/subvol-12102-disk-1  vscan  off  default
    tank/lxc/subvol-12102-disk-1  nbmand  off  default
    tank/lxc/subvol-12102-disk-1  sharesmb  off  default
    tank/lxc/subvol-12102-disk-1  refquota  4G  received
    tank/lxc/subvol-12102-disk-1  refreservation  none  default
    tank/lxc/subvol-12102-disk-1  primarycache  all  default
    tank/lxc/subvol-12102-disk-1  secondarycache  all  default
    tank/lxc/subvol-12102-disk-1  usedbysnapshots  0  -
    tank/lxc/subvol-12102-disk-1  usedbydataset  1.71G  -
    tank/lxc/subvol-12102-disk-1  usedbychildren  0  -
    tank/lxc/subvol-12102-disk-1  usedbyrefreservation  0  -
    tank/lxc/subvol-12102-disk-1  logbias  latency  default
    tank/lxc/subvol-12102-disk-1  dedup  off  default
    tank/lxc/subvol-12102-disk-1  mlslabel  none  default
    tank/lxc/subvol-12102-disk-1  sync  standard  default
    tank/lxc/subvol-12102-disk-1  refcompressratio  2.48x  -
    tank/lxc/subvol-12102-disk-1  written  1.71G  -
    tank/lxc/subvol-12102-disk-1  logicalused  3.47G  -
    tank/lxc/subvol-12102-disk-1  logicalreferenced  3.47G  -
    tank/lxc/subvol-12102-disk-1  filesystem_limit  none  default
    tank/lxc/subvol-12102-disk-1  snapshot_limit  none  default
    tank/lxc/subvol-12102-disk-1  filesystem_count  none  default
    tank/lxc/subvol-12102-disk-1  snapshot_count  none  default
    tank/lxc/subvol-12102-disk-1  snapdev  hidden  default
    tank/lxc/subvol-12102-disk-1  acltype  posixacl  received
    tank/lxc/subvol-12102-disk-1  context  none  default
    tank/lxc/subvol-12102-disk-1  fscontext  none  default
    tank/lxc/subvol-12102-disk-1  defcontext  none  default
    tank/lxc/subvol-12102-disk-1  rootcontext  none  default
    tank/lxc/subvol-12102-disk-1  relatime  off  default
    tank/lxc/subvol-12102-disk-1  redundant_metadata  all  default
    tank/lxc/subvol-12102-disk-1  overlay  off  default
    
    
     
  12. fabian

    fabian Proxmox Staff Member
    Staff Member

    okay, so it's actually 3.4G compressed down to 1.7G, but that should still work. how big are the extracted contents of the backup archive? just extract them somewhere (maybe not ZFS or if you do, on a dataset without compression) and use "du -sh". you can even use the commandline from the log of the failed restore, just change the target folder accordingly.
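
    for example, something like this (the archive path is the one from this thread, and /tmp/restore-size-check is just a placeholder for a directory on a non-compressing filesystem with enough free space):

    Code:
    # extract the backup to a temporary directory (not on a compressed dataset)
    mkdir -p /tmp/restore-size-check
    tar xpf /mnt/pve/bkup-longterm/dump/vzdump-lxc-12102-2017_02_04-06_07_02.tar.lzo \
        --totals --sparse --numeric-owner -C /tmp/restore-size-check
    # check the uncompressed size, then clean up again
    du -sh /tmp/restore-size-check
    rm -rf /tmp/restore-size-check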
     
  13. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    so the issue is with the actual storage size. I restored the backup to an ext4 directory:
    Code:
    du -sh /pve/tmp
    7.7G    /pve/tmp
    
    so for a compressed zfs system to be moved to ceph, I wonder if there is a way to know how much storage is needed?
     
  14. fabian

    fabian Proxmox Staff Member
    Staff Member

    I think in this case the backup somehow contains more data than what was stored on the ZFS dataset - could you compare the contents?
     
  15. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    I checked the data. /var/log/nginx/ had a disk usage of 4G. nodejs.access.log was huge.

    on the running lxc, /var/log/nginx was around 1.5G. something could have trimmed some of the files between the time of the backup and when I checked. After that I set logrotate to run daily instead of weekly and to keep only 7 instead of 50+ log versions.

    maybe zfs compression just works amazingly well on text log files. I do not know.
     
  16. fabian

    fabian Proxmox Staff Member
    Staff Member

    yes, compression for log files is usually quite good.

    the problem is that the size configured in PVE is used as a limit (the refquota) on how much data ZFS physically stores after compression. if you make a backup of such a subvol, it can contain a lot more uncompressed data, and subsequent restores to a non-compressing storage will fail unless you manually pass a bigger volume size.
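
    so before restoring such a container to a non-compressing storage, comparing the physical and logical sizes of the source dataset gives a first hint, for example:

    Code:
    # on the source node: physical usage vs. uncompressed (logical) size vs. configured quota
    zfs get -o property,value used,logicalused,refquota,compressratio tank/lxc/subvol-12102-disk-1

    though as this thread shows, even logicalused (3.47G here) can end up smaller than what the extracted archive needs (7.7G), so treat it as a lower bound and add a generous margin or check the archive itself.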
     
  17. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    and the tricky part is knowing how much storage is needed on the target storage for the restore.

    currently the only way I know of is to extract the tar file to a temp ext4 directory and use du -sh.
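
    maybe summing up the file sizes from a verbose listing of the archive would work too, without a full extraction - untested, something like:

    Code:
    # decompress to stdout and add up the file sizes from the tar listing (3rd column)
    lzop -dc /mnt/pve/bkup-longterm/dump/vzdump-lxc-12102-2017_02_04-06_07_02.tar.lzo \
        | tar -tvf - | awk '{ s += $3 } END { printf "%.1f GiB\n", s / 2^30 }'

    that only counts file contents, so some margin for filesystem overhead would still be needed.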
     
  18. fabian

    fabian Proxmox Staff Member
    Staff Member

    we could maybe include this information in the vzdump log for such volumes? not sure..
     
  19. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    I started to research ceph compression. that could solve this issue if ceph and zfs compression efficiency are comparable.

    does anyone know of this working in a production environment?

    http://docs.ceph.com/docs/master/radosgw/compression/
     