LXC - Cannot assign a block device to container

mjb2000

I want to pass through my /dev/sdb to an LXC container running OpenMediaVault.

Running
Code:
# lxc-device add -n 102 /dev/sdb
Seems to work perfectly, and /dev/sdb is visible in the container. Unfortunately the change is temporary and doesn't survive a reboot.

The solution seems to be to use lxc.cgroup.devices.allow

I first needed to discover the major and minor IDs for my device:
Code:
# ls -al /dev/sdb
brw-rw---- 1 root disk 8, 16 Aug 14 21:02 /dev/sdb
b = this is a block device
8 = the major ID
16 = the minor ID
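Rather than reading the numbers out of `ls -al` by hand, stat can print them directly. A small sketch - note that stat's %t/%T specifiers emit the major/minor in hex, so they need converting; /dev/null (char 1:3) is used here only because it exists on every system, substitute your block device (e.g. /dev/sdb):

```shell
#!/bin/sh
# Print a device node's major:minor pair without squinting at `ls -al`.
# /dev/null is used so this runs anywhere; swap in /dev/sdb on your host.
dev=/dev/null
major=$(printf '%d' "0x$(stat -c %t "$dev")")
minor=$(printf '%d' "0x$(stat -c %T "$dev")")
echo "$dev -> $major:$minor"
# prints "/dev/null -> 1:3"
```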

Adding this to the container conf:
Code:
# devices - set profile to allow mounting block devices (constrained by default)
lxc.aa_profile = lxc-container-default-with-mounting

# lxc.cgroup.devices.allow = typeofdevice majornumber:minornumber rwm
lxc.cgroup.devices.allow = b 8:16 rwm

After adding these two lines, the Proxmox GUI shows the error:
Invalid Key 'lxc.cgroup.devices.allow' (500)


  • What am I doing wrong?
  • Is this feature not implemented in Proxmox yet?
  • Is there a way to achieve persistent block device passthrough?
 
Thanks Artea - I think that line might also be required. But either way, Proxmox doesn't seem to like the lxc.cgroup.devices.allow line.

Has anyone been able to get this to work?
 
>> Proxmox doesn't seem to like the lxc.cgroup.devices.allow line.

I will upload a new version of our lxc container toolkit next week. It has an improved configuration system, where you can basically add any lxc config line.
 
Ahh perfect. Thanks for letting me know. I will continue to plan for my system to use LXC rather than KVM then :)
 
Hi Spirit and Dietmar

Were you able to make the updates you mentioned? I've not seen any new updates available via apt-get. Should I be getting these updates in a different way?

Thanks, Matt
 
It's done in the git repository (pve-container), but packages are not yet released.

I'll try to compile a package for you today for testing.

Thanks Spirit. Is there a schedule for package builds or are they just produced when there are enough changes in git?

M
 
>>Thanks Spirit. Is there a schedule for package builds or are they just produced when there are enough changes in git?

No special schedule - when there are enough changes, they are built.

There are big changes in the last pve-container package: the config format is now like the qemu format, no longer directly the lxc config.

 
I have updated to beta 2, but I still can't work out how I should be getting this to work. Could somebody please explain where I should be adding custom LXC config statements?

Thanks! :)
 
Oh! - Silly me. Thanks for explaining!

I have added:
Code:
lxc.aa_profile = lxc-container-default-with-mounting
lxc.cgroup.devices.allow = b 8:16 rwm
lxc.cgroup.devices.allow = b 8:17 rwm

But I don't see anything with fdisk -l or ls /dev

Do you know how I should be attaching a physical disk so it can be mounted in OpenMediaVault?

Thanks

Matt
 
Ahhh - Perfect, thanks for the tip!

Using info from this site, I ran:
Code:
mknod -m 666 /dev/sda b 8 16
mknod -m 666 /dev/sda1 b 8 17

Now everything seems to be working

Thanks for all your help :)
 
Just another update for my own reference and in case it helps anyone...

The "mknod" command is not persistent, so it would need to be run each time the container is started.

To make things persistent you can use a script stored on the Proxmox host which is triggered by a setting in the container config file...

Create the following file
/var/lib/lxc/101/mount-hook.sh
Code:
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdb b 8 16
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdb1 b 8 17
Make your script executable:
Code:
chmod 755 /var/lib/lxc/101/mount-hook.sh

Now, edit your container config file and add the following:
/etc/pve/lxc/101.conf
Code:
lxc.aa_profile: lxc-container-default-with-mounting
lxc.cgroup.devices.allow: b 8:16 rwm
lxc.cgroup.devices.allow: b 8:17 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/101/mount-hook.sh
Now the /dev/sdb and /dev/sdb1 nodes should be created each time the container starts.

Obviously you'll want to change a lot of the values I've used in this example, but hopefully this info is helpful.
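If you have several partitions to pass through, the hook above can be generalized to a loop. A hypothetical sketch - it is a dry run that only echoes the mknod commands, and it defaults LXC_ROOTFS_MOUNT (which LXC sets when it invokes the hook) so you can test it outside a container start; drop the `echo` to create the nodes for real:

```shell
#!/bin/sh
# Dry-run sketch of a multi-device autodev hook.
# Each spec is "name type major minor" (values from this thread's example).
: "${LXC_ROOTFS_MOUNT:=/tmp/ctroot}"
for spec in "sdb b 8 16" "sdb1 b 8 17"; do
    set -- $spec    # split the spec into $1..$4 (unquoted on purpose)
    echo mknod -m 660 "${LXC_ROOTFS_MOUNT}/dev/$1" "$2" "$3" "$4"
done
```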
 
@mjb2000: Thanks for this hint. It seems to work as expected. ;)

But one question: if I back up my LXC container, I see the following messages:

Code:
INFO: starting new backup job: vzdump 110 --mode snapshot --compress lzo --node proxmox --storage local --remove 0
INFO: Starting Backup of VM 110 (lxc)
INFO: status = running
INFO: CT Name: vdr
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: vdr
INFO: starting first sync /proc/13077/root// to /var/lib/vz/dump/vzdump-lxc-110-2017_03_03-01_47_47.tmp
INFO: Number of files: 39,439 (reg: 30,859, dir: 3,063, link: 5,485, dev: 2, special: 30)
INFO: Number of created files: 39,438 (reg: 30,859, dir: 3,062, link: 5,485, dev: 2, special: 30)
INFO: Number of deleted files: 0
INFO: Number of regular files transferred: 30,849
INFO: Total file size: 1,078,522,889 bytes
INFO: Total transferred file size: 1,073,541,138 bytes
INFO: Literal data: 1,073,541,138 bytes
INFO: Matched data: 0 bytes
INFO: File list size: 1,114,054
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 1,076,178,221
INFO: Total bytes received: 622,025
INFO: sent 1,076,178,221 bytes received 622,025 bytes 143,573,366.13 bytes/sec
INFO: total size is 1,078,522,889 speedup is 1.00
INFO: first sync finished (7 seconds)
INFO: suspend vm
INFO: starting final sync /proc/13077/root// to /var/lib/vz/dump/vzdump-lxc-110-2017_03_03-01_47_47.tmp
INFO: Number of files: 39,439 (reg: 30,859, dir: 3,063, link: 5,485, dev: 2, special: 30)
INFO: Number of created files: 0
INFO: Number of deleted files: 0
INFO: Number of regular files transferred: 0
INFO: Total file size: 1,078,522,889 bytes
INFO: Total transferred file size: 0 bytes
INFO: Literal data: 0 bytes
INFO: Matched data: 0 bytes
INFO: File list size: 196,573
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 1,058,837
INFO: Total bytes received: 3,314
INFO: sent 1,058,837 bytes received 3,314 bytes 2,124,302.00 bytes/sec
INFO: total size is 1,078,522,889 speedup is 1,015.41
INFO: final sync finished (0 seconds)
INFO: resume vm
INFO: vm is online again after 0 seconds
INFO: creating archive '/var/lib/vz/dump/vzdump-lxc-110-2017_03_03-01_47_47.tar.lzo'
INFO: Total bytes written: 1139056640 (1.1GiB, 278MiB/s)
INFO: archive file size: 644MB
INFO: Finished Backup of VM 110 (00:00:12)
INFO: Backup job finished successfully
TASK OK

I think the following message is related to my passed block device:

Code:
INFO: mode failure - some volumes do not support snapshots

I know it's only informational. But is there a way to exclude that disk from the backup? Is there something similar to the KVM config option (backup=0)?

Here is an example of my kvm "backup=0" configuration:

Code:
virtio2: /dev/disk/by-id/ata-Crucial_CT250MX200SSD1_1537108F211C,backup=0,size=244198584K

Thanks and greetings Hoppel
 
You'd need to post the container configuration in question (and please also include the output of pveversion -v). LXC mountpoints/volumes also have a backup flag (it's also available in the GUI).
 
Hello fabian,

here is my container configuration file:

Code:
arch: amd64
cores: 1
hostname: vdr
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=10.11.11.1,hwaddr=0A:D8:58:1F:AA:A3,ip=10.11.11.12/24,type=veth
ostype: debian
rootfs: local:110/vm-110-disk-1.raw,size=4G
swap: 4096
lxc.aa_profile: lxc-container-default-with-mounting
lxc.cgroup.devices.allow: b 8:32 rwm
lxc.cgroup.devices.allow: b 8:33 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/110/mount-hook.sh
lxc.cgroup.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir

"/var/lib/lxc/110/mount-hook.sh" looks as follows:

Code:
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdc b 8 32
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdc1 b 8 33

Code:
root@proxmox:~# ls -l /var/lib/lxc/110/mount-hook.sh
-rwxr-xr-x 1 root root 107 Mar  3 00:46 /var/lib/lxc/110/mount-hook.sh

It's the latest pveversion:

Code:
root@proxmox:~# pveversion -V
proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)
pve-manager: 4.4-12 (running version: 4.4-12/e71b7a74)
pve-kernel-4.4.40-1-pve: 4.4.40-82
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-92
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-94
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-3
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80

fstab inside the container looks as follows:

Code:
# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/sdc1               /mnt/timeshift/         ext4 defaults 0 2


I did it the way @mjb2000 described.

Greetings Hoppel
 
>> rootfs: local:110/vm-110-disk-1.raw,size=4G

This volume does not support snapshots, so the warning is correct - you cannot use snapshot mode, and vzdump automatically "downgrades" to suspend mode.

I am not sure why you are passing through a block device from the host like this; you can just use

Code:
mp0: /dev/sdc1,mp=/mnt/timeshift,backup=0

instead. Of course, for the DVB device you still need the adapted AA profile and settings.
 
