Can't create LXC container on ZFS local

Discussion in 'Proxmox VE: Installation and configuration' started by gkovacs, Oct 12, 2015.

  1. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    499
    Likes Received:
    43
    This is my first round of testing LXC in PVE 4. Had to stop kind of early on...

    - Installed PVE 4 on ZFS RAID10 (4 disks)
    - Updated to latest packages + reboot
    - Downloaded Debian 8 LXC template
    - Trying to create CT on local storage

    Code:
    Formatting '/var/lib/vz/images/100/vm-100-disk-2.raw', fmt=raw size=10737418240
    Discarding device blocks:    4096/2621440               done                            
    Creating filesystem with 2621440 4k blocks and 655360 inodes
    Filesystem UUID: 13b2c657-77b9-43d6-b1a9-6f4ca55f47dd
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables:  0/80     done                            
    Writing inode tables:  0/80     done                            
    mke2fs 1.42.12 (29-Aug-2014)
    Creating journal (32768 blocks): done
    Multiple mount protection is enabled with update interval 5 seconds.
    Writing superblocks and filesystem accounting information:  0/80     
    Warning, had trouble writing out superblocks.
    TASK ERROR: command 'mkfs.ext4 -O mmp /var/lib/vz/images/100/vm-100-disk-2.raw' failed: exit code 144
    The RAW disks get created under /var/lib/vz, but the container creation process stops with Error: unexpected status.

    Apart from solving the above problem, a more graceful way of handling errors during the container creation process would probably benefit users.
     
    #1 gkovacs, Oct 12, 2015
    Last edited: Oct 12, 2015
  2. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    499
    Likes Received:
    43
    Ok, so after some forum reading I found out that I need to create a new storage (type zfs) for containers, with its root set to rpool (otherwise it won't work). This is far from user-friendly, as there are no warnings during installation or container creation, and people expect simple functions (like Create CT) to work out of the box.

    I suggest that the installer automatically create a storage called "containers" if the installation was on ZFS, and that the container creation dialog either select it by default, or at least show a warning or error (red outline) on the container storage field when it is set to local.
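    For anyone hitting this in the meantime, a ZFS-type storage can also be added from the shell with pvesm; a minimal sketch (the storage name "zfs-containers" is just an example, and I'm assuming containers should go under the root pool):
    Code:
    # add a zfspool storage backed by rpool, allowed to hold container root filesystems
    pvesm add zfspool zfs-containers -pool rpool -content rootdir
    After this, "zfs-containers" should appear as a selectable target in the Create CT dialog.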
     
    MikeP likes this.
  3. sigxcpu

    sigxcpu Member

    Joined:
    May 4, 2012
    Messages:
    433
    Likes Received:
    9
    Actually the real bug here is that local on ZFS is tagged as being able to host "Containers", which seems to be wrong.
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,286
    Likes Received:
    369
  5. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    499
    Likes Received:
    43
    Okay, so storing containers on ZFS local storage does not work.
    Hopefully restoring OpenVZ backups to LXC containers stored on newly created ZFS storage will.
     
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,286
    Likes Received:
    369
    Just use the ZFS plugin for LXC containers, it works great.
     
    MikeP likes this.
  7. sigxcpu

    sigxcpu Member

    Joined:
    May 4, 2012
    Messages:
    433
    Likes Received:
    9
    I can attest to that. I always used "dir" on ZFS for containers before 4.0. Now that I have them separate, each on its own dataset, it's beautiful. Actually, that made me migrate back from SmartOS :)
     
    MikeP likes this.
  8. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    499
    Likes Received:
    43
    I guess you mean storage plugin. So if I understand correctly, I have to restore the LXC containers with the "--storage" option to the newly created ZFS type storage if I want them to work.

    So let's say my ZFS type storage is called "zfs-containers", then I would restore my OpenVZ backups to LXC with the following command:
    Code:
    pct restore 100 vzdump-openvz-100-2015_10_12-11_22_33.tar --storage zfs-containers
    You might want to add this to the wiki at https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC
     
    #8 gkovacs, Oct 12, 2015
    Last edited: Oct 12, 2015
  9. MikeP

    MikeP Member

    Joined:
    Feb 14, 2016
    Messages:
    39
    Likes Received:
    3
    And, how do you do that? -- create a new storage, type zfs, with root set to rpool.
    I've got a zfs mirror SSD set for boot, but another ZFS mirror that I want to put basically everything on. I get the error and can't make a container.
    Thanks in advance.
     
  10. MikeP

    MikeP Member

    Joined:
    Feb 14, 2016
    Messages:
    39
    Likes Received:
    3
    What is the ZFS plugin for LXC containers? I did a search and can't find anything useful.
    Thanks in advance.
     
  11. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,167
    Likes Received:
    488
    PVE's ZFS storage plugin - it is what is used when you create a new storage of type ZFS in the web interface (and is subsequently used in pct/qm commands, LXC and VM config files, etc.).
     
  12. MikeP

    MikeP Member

    Joined:
    Feb 14, 2016
    Messages:
    39
    Likes Received:
    3
    Oh, I see. I just considered that part of the system, rather than something I needed to add as a plugin, the way Firefox and Chrome plugins must be downloaded and installed separately.
     
  13. MikeP

    MikeP Member

    Joined:
    Feb 14, 2016
    Messages:
    39
    Likes Received:
    3
    I've got a zfs mirror SSD set for boot, but another ZFS HDD mirror that I want to put basically everything on. I get the error "failed: exit code 144" and can't make a container or a VM disk image, "failed: exit code 1".
    Thanks in advance.

    Code:
    root@pve2:/# zpool list
    NAME     SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    rpool    111G  1.11G   110G         -    0%   0%  1.00x  ONLINE  -
    z3tred  2.72T   190M  2.72T         -    0%   0%  1.00x  ONLINE  -

    root@pve2:/# zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    rpool                      15.9G  91.7G    96K  /rpool
    rpool/ROOT                 1.10G  91.7G    96K  /rpool/ROOT
    rpool/ROOT/pve-1           1.10G  91.7G  1.10G  /
    rpool/swap                 14.7G   106G    64K  -
    z3tred                      190M  2.63T   104K  /z3tred
    z3tred/backups               96K  2.63T    96K  /z3tred/backups
    z3tred/containerstorage      96K  2.63T    96K  /z3tred/containerstorage
    z3tred/containertemplates   189M  2.63T   189M  /z3tred/containertemplates
    z3tred/diskimages            96K  2.63T    96K  /z3tred/diskimages
    z3tred/iso                   96K  2.63T    96K  /z3tred/iso

    root@pve2:/# pvesm status
    local                dir  1    97287552     1157248    96130304  1.69%
    sambadisk1           dir  1  2930081788  2505639992   424441796 86.01%
    zbackups             dir  1  2827814784           0  2827814784  0.50%
    zcontainerstorage    dir  1  2827814784           0  2827814784  0.50%
    zcontainertemplates  dir  1  2828008576      193792  2827814784  0.51%
    zdiskimages          dir  1  2827814784           0  2827814784  0.50%
    ziso                 dir  1  2827814784           0  2827814784  0.50%
     
  14. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,167
    Likes Received:
    488
    You need to add your ZFS pools as "zfspool", not as "dir" storage in PVE. PVE will then automatically create subvolumes for snapshots, container and VM images, etc.
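    The difference is visible in /etc/pve/storage.cfg; a sketch contrasting the two types (the storage names and paths are just examples based on the pools above). A "dir" storage only stores raw files on an already-mounted path, while a "zfspool" storage lets PVE manage a dataset per guest:
    Code:
    dir: zcontainerstorage
            path /z3tred/containerstorage
            content rootdir

    zfspool: z3tred
            pool z3tred
            content images,rootdir
            sparse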
     
  15. MikeP

    MikeP Member

    Joined:
    Feb 14, 2016
    Messages:
    39
    Likes Received:
    3
    Ok, Thanks.
    In DataCenter, Storage, Add, ZFS, the only pool shown is /rpool and it's subs. My other ZFS pool didn't appear.

    I tried this and got an error:
    Code:
    root@pve2:~# pvesm add zfs -type zfspool -pool z3tred
    400 not enough arguments

    root@pve2:~# pvesm add zfs z3tered -type zfspool --pool z3tred
    missing value for required option 'blocksize'
    Finally, I got to this command, which seems to be working:
    Code:
    root@pve2:~# pvesm add zfspool z3tred -pool z3tred -content images,rootdir -sparse
     
    #15 MikeP, Feb 22, 2016
    Last edited: Feb 22, 2016