Ubuntu Snaps inside LXC container on Proxmox

UrkoM

New Member
Oct 15, 2014
17
0
1
Hi,
I am trying to test Snap applications inside an Ubuntu 16.04 LXC container in Proxmox, and I am running into problems.
I found this link:
https://stgraber.org/2017/01/31/ubuntu-core-in-lxd-containers/
And it seems snapd needs "unprivileged FUSE mounts and AppArmor namespacing and stacking".

Am I trying the impossible here? Has anyone looked into this or has any idea if/when this will be possible on Proxmox?

I could run Ubuntu 16.04 as QEMU VM, but I really like the (maybe theoretical in this case?) performance advantage of LXC for this.

Thanks!
 

rmundel

Member
May 9, 2015
26
2
23
I'm trying to install on PVE 5.2-1 in an Ubuntu 18.04 LXC container, and after the install I got a message about the kernel needing an AppArmor 2.4 compatibility patch, or something like that.
I was never able to find a solution for running snaps in LXC. I'm running VMs for services like Rocket.Chat, Wekan, and so on.
 

Carlos Estrada

New Member
Feb 18, 2016
3
1
1
34
I was trying to install Nextcloud as a snap inside an Ubuntu Xenial container when I ran into this problem. My understanding is that the problem is related to missing features in the PVE kernel.

Is this a problem with Proxmox, or with the packaged containers? Is this a bug?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
1,695
258
103
South Tyrol/Italy
> Is there something easier?

You may be lucky, this got applied :)
With the pve-container package in version 2.0-28 (or newer) you should be able to set the 'mount' and 'nesting' features and it should work.

This is currently not exposed over the GUI, but you can create a CT as usual there and then open a shell on PVE and do something like:
Code:
pct set VMID --features mount=1,nesting=1
Edit: the above did not work with mount, as it needs a list of accepted file systems, e.g.:
Code:
pct set VMID --features 'mount=fuse;nfs,nesting=1'
Run this on the stopped CT; on the next start it should work.
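For reference, the full host-side workflow might look like the sketch below (CT ID 101 is a placeholder). Note the quoting: an unquoted ';' would be interpreted by the shell as a command separator and cut the command short.

```shell
# On the PVE host, with the container stopped:
pct stop 101
# Quote the value so the shell does not split it at the ';'
pct set 101 --features 'mount=fuse;nfs,nesting=1'
pct start 101
# Confirm the features line landed in the CT config
pct config 101 | grep '^features'
```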
 
Last edited:

davidg1982

New Member
May 26, 2017
12
2
3
37
Chicago, Illinois USA
You may be lucky, this got applied :)
With the pve-container package in version 2.0-28 (or newer) you should be able to set the 'mount' and 'nesting' features and it should work.

This is currently not exposed over the GUI, but you can create a CT as usual there and then open a shell on PVE and do something like:
Code:
pct set VMID --features mount=1,nesting=1
Run this on the stopped CT; on the next start it should work.
That's helpful. Thank you.
Is pve-container version 2.0-28 in the test repository? I am running pve-container 2.0-25, and that seems to be the only version available. I have this in my sources.list:
Code:
deb http://enterprise.proxmox.com/debian/pve stretch pve-enterprise
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
13,780
444
103
That's helpful. Thank you.
Is pve-container version 2.0-28 in the test repository? I am running pve-container 2.0-25, and that seems to be the only version available. I have this in my sources.list:
Code:
deb http://enterprise.proxmox.com/debian/pve stretch pve-enterprise
It's in the repository, but you have a typo in your sources.list entry (you need https instead of http):

> deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
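One way to correct the scheme in place could be the following (the path is an assumption; adjust it if the entry lives in a file under /etc/apt/sources.list.d/ instead):

```shell
# Rewrite the Proxmox enterprise repo entry from http to https
sed -i 's|http://enterprise.proxmox.com|https://enterprise.proxmox.com|' /etc/apt/sources.list
apt update
```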
 

davidg1982

New Member
May 26, 2017
12
2
3
37
Chicago, Illinois USA
I must be doing something incorrectly.
Code:
david@proxmox:~$ sudo apt update
Ign:1 http://ftp.us.debian.org/debian stable InRelease
Hit:2 http://repo.zabbix.com/zabbix/3.4/debian stretch InRelease                     
Hit:3 http://security.debian.org stable/updates InRelease                           
Hit:4 http://ftp.us.debian.org/debian stable Release                   
Get:5 https://enterprise.proxmox.com/debian/pve stretch InRelease [2,081 B]
Hit:5 https://enterprise.proxmox.com/debian/pve stretch InRelease
Reading package lists... Done                         
Building dependency tree     
Reading state information... Done
All packages are up to date.
david@proxmox:~$ apt-cache policy pve-container
pve-container:
  Installed: 2.0-25
  Candidate: 2.0-25
  Version table:
 *** 2.0-25 100
        100 /var/lib/dpkg/status
david@proxmox:~$
Here is the rest of my pveversion -v
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.4.98-4-pve: 4.4.98-104
pve-kernel-4.4.79-1-pve: 4.4.79-95
pve-kernel-4.4.59-1-pve: 4.4.59-87
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.6.2~pre+git20161223-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
pve-zsync: 1.6-16
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 

rmundel

Member
May 9, 2015
26
2
23
I'm on PVE 5.2-11 with pve-container 2.0-29, and I must be missing something.
I have an Ubuntu 18.04 container with the nesting and mount features enabled.
I installed snapd and bam:

-- Unit snapd.service has finished shutting down.
Nov 27 15:50:31 gsm systemd[1]: snapd.service: Start request repeated too quickly.
Nov 27 15:50:31 gsm systemd[1]: snapd.service: Failed with result 'exit-code'.
Nov 27 15:50:31 gsm systemd[1]: Failed to start Snappy daemon.
-- Subject: Unit snapd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit snapd.service has failed.
--
-- The result is RESULT.
Nov 27 15:50:31 gsm systemd[1]: snapd.socket: Failed with result 'service-start-limit-hit'.
Nov 27 15:50:35 gsm snap[2814]: error: cannot communicate with server: Get http://localhost/v2/snaps/system/conf?keys=seed.loaded: dial unix /run/snapd.socket: connect: co
Nov 27 15:50:35 gsm systemd[1]: snapd.seeded.service: Main process exited, code=exited, status=1/FAILURE
Nov 27 15:50:35 gsm systemd[1]: snapd.seeded.service: Failed with result 'exit-code'.
Nov 27 15:50:35 gsm systemd[1]: Failed to start Wait until snapd is fully seeded.
 

wbumiller

Proxmox Staff Member
Staff member
Jun 23, 2015
647
88
48
Snap requires a bit more work. There may soon be a 'fuse' flag for the features option, but fuse can be dangerous. For now you have to do this:

- For unprivileged containers:
1) Put this in /etc/pve/lxc/$vmid.conf:
Code:
...
features: mount=fuse,nesting=1
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
2) Inside the container: `apt install squashfuse`

- For privileged containers, also add:
Code:
...
# EDIT:
# We need to allow apparmor administration, by default mac_admin is dropped for privileged containers.
# Note that you do not want this for un-trusted containers...
lxc.cap.drop =
lxc.cap.drop = mac_override sys_time sys_module sys_rawio
As an alternative to squashfuse, privileged containers could use loop devices, but I wouldn't recommend it...


Note that enabling `fuse` in a container does not play well with backups, or anything that causes an `lxc-freeze` command to be executed on the container, as this can cause deadlocks in the kernel...
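After restarting the container, a quick sanity check from inside it might look like this (a sketch, assuming the bind mount from the config above applied):

```shell
# Inside the container: /dev/fuse should now exist as a character device
if test -c /dev/fuse; then
    echo "fuse device available"
else
    echo "no /dev/fuse -- check the lxc.mount.entry line" >&2
fi
```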
 
Last edited:
Dec 15, 2016
55
3
13
55
Berlin
Hi Wolfgang,

Is there any progress on integrating snap into LXC containers? So far, if I'd like to install e.g. Wekan, I need a VM, which is not my favourite way to run Linux software on Proxmox.

Any suggestions?
 
