Ubuntu Snaps inside LXC container on Proxmox

UrkoM

Hi,
I am trying to test Snap applications inside an Ubuntu 16.04 LXC container in Proxmox, and I am running into problems.
I found this link:
https://stgraber.org/2017/01/31/ubuntu-core-in-lxd-containers/
And it seems snapd needs "unprivileged FUSE mounts and AppArmor namespacing and stacking".

Am I trying the impossible here? Has anyone looked into this or has any idea if/when this will be possible on Proxmox?

I could run Ubuntu 16.04 as QEMU VM, but I really like the (maybe theoretical in this case?) performance advantage of LXC for this.

Thanks!
 
I'm trying to install on PVE 5.2-1 in an Ubuntu 18.04 LXC container, and after the install I got a message about the kernel needing an AppArmor 2.4 compatibility patch, or something like that.
I was never able to find a solution for snaps in LXC. I'm running VMs for services like Rocket.Chat, Wekan, and so on.
 
I was trying to install Nextcloud as a snap inside an Ubuntu Xenial container when I ran into this problem. My understanding is that the problem is related to missing features in the PVE kernel.

Is this a problem with Proxmox, or with the packaged containers? Is this a bug?
 
Is there something easier?
You may be in luck, this got applied :)
With the pve-container package in version 2.0-28 (or newer) you should be able to set the 'mount' and 'nesting' features and it should work.

This is currently not exposed in the GUI, but you can create a CT as usual there, then open a shell on the PVE host and run something like:
Code:
pct set VMID --features mount=1,nesting=1
Edit: the above did not work for mount, as it needs a list of accepted file systems (the semicolon-separated list has to be quoted so the shell does not split the command), e.g.:
Code:
pct set VMID --features 'mount=fuse;nfs,nesting=1'
on the stopped CT; then, on the next start, it should work.
 
That's helpful. Thank you.
Is pve-container version 2.0-28 in the test repository? Because I am running pve-container 2.0-25 and that seems to be the only version available. I have this in my source.list:
Code:
deb http://enterprise.proxmox.com/debian/pve stretch pve-enterprise
 

It's in the repository, but you have a typo in your sources.list entry (you need https instead of http):

> deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
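For anyone who wants to script the fix, a minimal sketch (the `fix_repo` helper name is mine, and the path assumes the entry lives in /etc/apt/sources.list; adjust if yours is under /etc/apt/sources.list.d/):

```shell
# fix_repo FILE: switch the Proxmox enterprise repo entry from http to https.
# Takes the file path as a parameter so it can be tried on a copy first.
fix_repo() {
    sed -i 's|http://enterprise.proxmox.com|https://enterprise.proxmox.com|' "$1"
}

# On the PVE host you would then run:
#   fix_repo /etc/apt/sources.list
#   apt update
```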
 
I must be doing something incorrectly.
Code:
david@proxmox:~$ sudo apt update
Ign:1 http://ftp.us.debian.org/debian stable InRelease
Hit:2 http://repo.zabbix.com/zabbix/3.4/debian stretch InRelease                     
Hit:3 http://security.debian.org stable/updates InRelease                           
Hit:4 http://ftp.us.debian.org/debian stable Release                   
Get:5 https://enterprise.proxmox.com/debian/pve stretch InRelease [2,081 B]
Hit:5 https://enterprise.proxmox.com/debian/pve stretch InRelease
Reading package lists... Done                         
Building dependency tree     
Reading state information... Done
All packages are up to date.
david@proxmox:~$ apt-cache policy pve-container
pve-container:
  Installed: 2.0-25
  Candidate: 2.0-25
  Version table:
 *** 2.0-25 100
        100 /var/lib/dpkg/status
david@proxmox:~$

Here is the rest of my pveversion -v
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.4.98-4-pve: 4.4.98-104
pve-kernel-4.4.79-1-pve: 4.4.79-95
pve-kernel-4.4.59-1-pve: 4.4.59-87
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.6.2~pre+git20161223-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
pve-zsync: 1.6-16
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
I'm on PVE 5.2-11 with pve-container 2.0-29 and I must be missing something.
I'm on an Ubuntu 18.04 container with the nesting and mounting features enabled.
I installed snapd and bam:

-- Unit snapd.service has finished shutting down.
Nov 27 15:50:31 gsm systemd[1]: snapd.service: Start request repeated too quickly.
Nov 27 15:50:31 gsm systemd[1]: snapd.service: Failed with result 'exit-code'.
Nov 27 15:50:31 gsm systemd[1]: Failed to start Snappy daemon.
-- Subject: Unit snapd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit snapd.service has failed.
--
-- The result is RESULT.
Nov 27 15:50:31 gsm systemd[1]: snapd.socket: Failed with result 'service-start-limit-hit'.
Nov 27 15:50:35 gsm snap[2814]: error: cannot communicate with server: Get http://localhost/v2/snaps/system/conf?keys=seed.loaded: dial unix /run/snapd.socket: connect: co
Nov 27 15:50:35 gsm systemd[1]: snapd.seeded.service: Main process exited, code=exited, status=1/FAILURE
Nov 27 15:50:35 gsm systemd[1]: snapd.seeded.service: Failed with result 'exit-code'.
Nov 27 15:50:35 gsm systemd[1]: Failed to start Wait until snapd is fully seeded.
 
Snap requires a bit more work. There may soon be a 'fuse' flag for the features option, but fuse can be dangerous. For now you have to do this:

- For unprivileged containers:
1) Put this in /etc/pve/lxc/$vmid.conf:
Code:
...
features: mount=fuse,nesting=1
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
2) Inside the container: `apt install squashfuse`

- For privileged containers, also add:
Code:
...
# EDIT:
# We need to allow apparmor administration, by default mac_admin is dropped for privileged containers.
# Note that you do not want this for un-trusted containers...
lxc.cap.drop =
lxc.cap.drop = mac_override sys_time sys_module sys_rawio
As an alternative to squashfuse, privileged containers could use loop devices, but I wouldn't recommend it...


Note that enabling `fuse` in a container does not play well with backups, or anything that causes an `lxc-freeze` command to be executed on the container, as this can cause deadlocks in the kernel...
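The unprivileged-container steps above can also be scripted on the PVE host; a minimal sketch (the `append_fuse` helper name is mine, and the function takes the config path as a parameter; on PVE the real file would be /etc/pve/lxc/$vmid.conf):

```shell
# append_fuse FILE: add the fuse feature and the /dev/fuse bind-mount entry
# to a container config file, as described in the steps above.
append_fuse() {
    cat >> "$1" <<'EOF'
features: mount=fuse,nesting=1
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
EOF
}

# Usage on the host (hypothetical VMID): append_fuse /etc/pve/lxc/101.conf
# Then, inside the container: apt install squashfuse
```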
 
Hi Wolfgang,

Is there any progress on integrating snap into LXC containers? So far, if I'd like to e.g. install wekan, I'll need a VM which is not my favourite way to run Linux software on Proxmox.

Any suggestions?
 
Hey team, just wanted to say I've got Wekan running in an LXC container, so I think this thread has come full circle. I don't even have to hand-modify the conf: just go to the container > Options > Features, tick FUSE and Nesting, and restart the container. `snap install wekan` fails the first time with errors:

Code:
error: cannot perform the following tasks:
- Setup snap "core" (8935) security profiles (cannot setup udev for snap "core": cannot reload udev rules: exit status 2
udev output:
)
- Setup snap "core" (8935) security profiles (cannot reload udev rules: exit status 2
udev output:
)

but then re-running the exact same command completes successfully and works.
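That "fails once, then works" behavior can be wrapped in a tiny retry helper; a sketch (the `retry` name and the delay are mine, and `snap install wekan` is just the example command):

```shell
# retry CMD ARGS...: run a command; if it fails, wait briefly and try once more.
# Handy for snap installs in containers, where the first udev-rules reload
# can fail but a second attempt succeeds.
retry() {
    "$@" && return 0   # first attempt succeeded
    sleep 2            # give snapd/udev a moment to settle
    "$@"               # second attempt; its exit status is returned
}

# Usage: retry snap install wekan
```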
 
Hi,
Here, when I try
Code:
snap install wekan
I always get an error:

Code:
error: system does not fully support snapd: cannot mount squashfs image using "fuse.squashfuse":
       mount: /tmp/sanity-mountpoint-494747820: wrong fs type, bad option, bad superblock on
       /tmp/sanity-squashfs-152452673, missing codepage or helper program, or other error.

I tried both solutions (manually editing the config file, and just ticking the options), but the problem is always the same; it does not work :(
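When the sanity mount fails like that, two prerequisites worth checking inside the container are that /dev/fuse exists and that squashfuse is installed; a small diagnostic sketch (the `check_fuse` name is mine, and the device path is a parameter only so the check is easy to exercise):

```shell
# check_fuse [DEV]: verify the pieces snapd's fuse-based squashfs mount needs.
check_fuse() {
    dev="${1:-/dev/fuse}"
    if [ ! -e "$dev" ]; then
        echo "missing $dev - enable the FUSE feature (or the lxc.mount.entry bind)"
        return 1
    fi
    if ! command -v squashfuse >/dev/null 2>&1; then
        echo "squashfuse not installed - run: apt install squashfuse"
        return 1
    fi
    echo "fuse prerequisites look OK"
}
```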
 
I'm successfully running Wekan via Snap on an Ubuntu 19.04 container (which was now upgraded to 19.10).

These are my LXC settings:

Code:
arch: amd64
cores: 4
features: keyctl=1,nesting=1,fuse=1
hookscript: local:snippets/pve-hook
hostname: projekte
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.100.1,hwaddr=xx,ip=xx/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: rpool:subvol-121-disk-0,acl=1,size=32G
swap: 512
unprivileged: 1