PVE from RAMFS & LXC

dale

Renowned Member
Mar 19, 2010
Hello, community.

Because the PVE node is running from ramfs, LXC containers are not starting (VMs are fine).

config:

arch: amd64
cores: 1
hostname: a
memory: 512
net0: name=eth0,bridge=vmbr1,hwaddr=A6:5B:35:4B:2C:85,ip6=auto,tag=6,type=veth
ostype: alpine
rootfs: local:101/vm-101-disk-0.raw,size=512M
swap: 512

Starting with debug:

lxc-start -F -n 101 --logfile 101.log --logpriority=DEBUG

Debug log attached.


Regards, dale.
 

Attachments

  • 101.log (18.1 KB)
To understand the case, more information is necessary. If you still want an analysis, please post a pvereport.
 
pvereport output attached
 

Attachments

  • pvereport.txt (35.8 KB)
IIUC, the CT does not start because of the following (from the debug output submitted earlier):


lxc-start 101 20190830122341.107 ERROR conf - conf.c:lxc_chroot:1389 - Permission denied - Failed to mount "/usr/lib/x86_64-linux-gnu/lxc/rootfs" onto "/" as MS_REC | MS_BIND
lxc-start 101 20190830122341.107 ERROR conf - conf.c:lxc_setup:3697 - Failed to pivot root into rootfs
lxc-start 101 20190830122341.107 ERROR start - start.c:do_start:1279 - Failed to setup container "101"

but I couldn't find a solution. :(
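
A quick way to double-check that the host's / really is the initramfs rootfs/ramfs (a sketch; the exact output will vary):

# print the filesystem type and source backing / -- a RAM-booted node
# typically shows "rootfs" or "ramfs" here instead of a block device
findmnt -n -o FSTYPE,SOURCE /
# the same information straight from the kernel's mount table
awk '$2 == "/"' /proc/mounts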
 
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/alpine.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = a1
lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.cgroup.memory.memsw.limit_in_bytes = 1073741824
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/101/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth101i0
lxc.net.0.hwaddr = A6:5B:35:4B:2C:85
lxc.net.0.name = eth0
lxc.cgroup.cpuset.cpus = 0
 
dir: local
path /var/lib/vz
content rootdir,iso,snippets,images,vztmpl
maxfiles 0

zfspool: z0
pool zp00l
content rootdir,images
sparse 1
 
It would be useful to look at your storage.cfg as well, but it doesn't look like your storage is actually writable.

Really?

root@b02:~# ls -la /var/lib/vz/images/101/
total 9960
drwxr----- 2 root root 4096 Aug 30 12:35 .
drwxr-xr-x 5 root root 4096 Sep 11 15:28 ..
-rw-r----- 1 root root 536870912 Sep 11 15:35 vm-101-disk-0.raw

root@b02:~# mount -o loop,noatime /var/lib/vz/images/101/vm-101-disk-0.raw /mnt/tmp
root@b02:~# touch /mnt/tmp/foo
root@b02:~# ls -la /mnt/tmp/foo
-rw-r--r-- 1 root root 0 Sep 11 15:33 /mnt/tmp/foo

It seems writable to me.
 
I see. That makes more sense then.

The answer lies in the documentation for ramfs (https://github.com/torvalds/linux/blob/v4.17/Documentation/filesystems/ramfs-rootfs-initramfs.txt)

When switching to another root device, initrd would pivot_root and then umount the ramdisk. But initramfs is rootfs: you can neither pivot_root rootfs, nor unmount it. Instead delete everything out of rootfs to free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs with the new root (cd /newmount; mount --move . /; chroot .), attach stdin/stdout/stderr to the new /dev/console, and exec the new init.
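
Purely to illustrate the sequence described there (a sketch; /newroot and the init path are placeholders, not values from this thread), the tail end of a busybox-style initramfs /init looks roughly like:

#!/bin/sh
# sketch only: assumes the real root is already mounted at /newroot.
# rootfs can neither be pivot_root'ed nor unmounted, so switch_root instead
# deletes what is left on rootfs, overmounts it with the new root
# (the "cd /newroot; mount --move . /; chroot ." dance quoted above),
# reopens stdio on the new /dev/console (-c) and execs the new init.
exec switch_root -c /dev/console /newroot /sbin/init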

In short: you should be able to use the ramdisk for virtual machines, but for containers it would require a change in Proxmox. I don't think it's very difficult; you may wish to put in a feature request in Bugzilla.
 
