Change default mountpoint/volume name for LXC container on ZFS storage

Discussion in 'Proxmox VE: Installation and configuration' started by onlime, Aug 25, 2016.

  1. onlime

    onlime Member
    Proxmox Subscriber

    Joined:
    Aug 9, 2013
    Messages:
    44
    Likes Received:
    7
    I have set up a ZFS local storage on the latest Proxmox VE 4.2 with the following options:

    • ID: zfs-containers
    • ZFS Pool: rpool/ROOT
    • Content: Container
    When creating a new LXC container via the Proxmox VE WebUI (Create CT), it gets mounted and named as follows:

    Code:
    $ zfs list
    NAME                           USED  AVAIL  REFER  MOUNTPOINT
    rpool                         13.0G   436G   144K  /rpool
    rpool/ROOT                    5.41G   436G   112K  /rpool/ROOT
    rpool/ROOT/pve-1              2.76G   436G  2.76G  /
    rpool/ROOT/subvol-184-disk-1   482M  19.5G   482M  /rpool/ROOT/subvol-184-disk-1
    rpool/swap                    7.44G   444G   180K  -
    
    I would like to change the default ZFS volume name to pve-CTID instead of subvol-CTID-disk-1, and the mountpoint to /var/lib/vz/private/CTID instead of /rpool/ROOT/subvol-CTID-disk-1, resulting in:

    Code:
    rpool/ROOT/pve-184  482M  19.5G  482M  /var/lib/vz/private/184
    
    Can this be done in some global configuration? Where is the naming scheme defined?
     
  2. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,433
    Likes Received:
    301
    I guess we hardcoded that naming scheme in several places, so it would be hard to change. We also use disk names to encode some information, so we cannot allow arbitrary name changes.
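
    For illustration, a sketch (not the actual plugin code) of what "encode" means here, using the subvol name from the listing above:

    Code:
    # Sketch only: the owning CTID and the disk index are both encoded in
    # the volume name, so e.g. subvol-184-disk-1 is tied to CT 184, disk 1,
    # and the name cannot be changed freely.
    $ echo subvol-184-disk-1 | sed -E 's/^subvol-([0-9]+)-disk-([0-9]+)$/CTID=\1 disk=\2/'
    CTID=184 disk=1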
     
  3. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,645
    Likes Received:
    326
    For the mountpoint problem, you should look at the mountpoint property of the parent dataset. I don't know if it plays nicely with the Proxmox VE logic @dietmar described, but it works in "ordinary" ZFS.
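
    A minimal sketch, using the parent dataset from the listing above (whether the Proxmox VE plugin then still handles the containers correctly is exactly what I don't know):

    Code:
    # Children without an explicit mountpoint inherit the parent's, so the
    # subvol-* datasets would move under the new path (untested with PVE):
    $ zfs set mountpoint=/var/lib/vz/private rpool/ROOT
    $ zfs get -r mountpoint rpool/ROOT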

    Also, don't use ROOT as your mounted ZFS pool; create a new one, e.g.
    Code:
    zfs create rpool/container
    and use that one. ROOT is normally used for different root devices and is kind of a special place.
     
  4. onlime

    onlime Member
    Proxmox Subscriber

    Joined:
    Aug 9, 2013
    Messages:
    44
    Likes Received:
    7
    Thanks @dietmar and @LnxBil for your great support. I will then stick with the hardcoded naming scheme.
    Finally, I also found the right spot in the documentation:
    https://pve.proxmox.com/wiki/Storage:_ZFS#Using_ZFS_Storage_Plugin_.28via_Proxmox_VE_GUI_or_shell.29

    I followed this recommendation (hopefully this is best practice), creating the ZFS dataset and storage:

    Code:
    $ zfs create rpool/zfsdisks
    $ pvesm add zfspool zfsvols -pool rpool/zfsdisks -content images,rootdir -sparse
    
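    For reference, that pvesm command should end up as an entry along these lines in /etc/pve/storage.cfg (layout illustrative, from memory):

    Code:
    zfspool: zfsvols
            pool rpool/zfsdisks
            sparse
            content images,rootdir
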
    Final result:

    Code:
    $ zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    rpool                             13.0G   436G   144K  /rpool
    rpool/ROOT                        4.89G   436G   112K  /rpool/ROOT
    rpool/ROOT/pve-1                  2.76G   436G  2.76G  /
    rpool/swap                        7.44G   444G   180K  -
    rpool/zfsdisks                     560M   436G    96K  /rpool/zfsdisks
    rpool/zfsdisks/subvol-184-disk-1   560M   436G   529M  /rpool/zfsdisks/subvol-184-disk-1
    
    Wouldn't it make sense to mount the CT e.g. at /var/lib/lxc/subvol-184-disk-1 (next to /var/lib/lxc/184/rootfs)? I'm just used to the old OpenVZ way from Proxmox VE 3.4, and I'd like to make sure I follow your recommendations / best practices before migrating all containers to LXC.
     
  5. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,195
    Likes Received:
    494
    "/var/lib/lxc" is where lxc expects its files/paths, so we only put stuff there that is expected (config file + container root mount point)
     