OpenMediaVault installation in LXC with attached HW block device

pmxforum

Well-Known Member
Jun 2, 2016
Goal.
1. Install the OpenMediaVault NAS into a Debian 8 LXC container on a ProxmoxVE server with a hardware RAID controller card.
2. Provide /dev/sda1 (an LSI MegaRAID RAID1 volume) to OpenMediaVault in the LXC container as storage.
3. Continue to use /dev/sda1 as backup storage for various backup scripts (mysqldump, for example) running on ProxmoxVE itself.

Definitions.

ProxmoxVE — the latest official 4.3 release from http://www.proxmox.com/en/downloads, with testing updates;
OpenMediaVault — the latest 3.x beta (3.0.47) from http://www.openmediavault.org, codename Erasmus;
OpenMediaVault plugins — plugins for Erasmus from http://omv-extras.org/joomla/;
Debian 8 (codename Jessie) LXC template — obtained from the ProxmoxVE built-in LXC templates repository: debian-8.0-standard_8.4-1_amd64.tar.gz
*** Daily LXC template images from https://jenkins.linuxcontainers.org/view/LXC/view/LXC Templates/ for Debian would also be suitable, but as of 2016-10-21 their build status was poor.
LXC container config — the file located in /etc/pve/lxc/XXX.conf, where XXX is the container number in ProxmoxVE.
LXC container hook script — a bash shell script located in /var/lib/lxc/XXX/<script-name>.sh, where XXX is the container number in ProxmoxVE.

Solution.

Step 1 — installation of OpenMediaVault

1. Update the list of available container templates in the ProxmoxVE shell: pveam update
2. Download the latest available Debian 8 template into the container templates storage (via the web GUI or from the shell). I found debian-8.0-standard_8.4-1_amd64.tar.gz.
3. Create a Debian 8 LXC container with at least 1 GB RAM (2 GB is much more comfortable; but if you are planning to use ZFS, you need much more RAM and should consult the ZFS system requirements) and a 2 GB RootFS. At least one network device with an Internet connection is also required.
*** Notes for point 3:
You should add a few lines to the LXC container config before its first start:
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw


In short, these lines are required to start the Debian network scripts (the eth0 network card did not come up until I added the unconfined profile and the proc and sys mounts for the container); the cgroup mount and the unconfined profile are required for OpenMediaVault services such as nfs.

4. Log into this container and update the system:
apt-get update && apt-get dist-upgrade

5. Add the OpenMediaVault repository and start its installation:
echo "deb http://packages.openmediavault.org/public erasmus main" > /etc/apt/sources.list.d/openmediavault.list
apt-get update
apt-get install openmediavault-keyring

* Don't forget to answer "Y", and only then press ENTER.
apt-get update
apt-get install openmediavault


but our installation stopped with an error:
... Socket error - Connection refused
Cannot connect to the monit daemon. Did you start it with http support?
Failed to get D-Bus connection: Unknown error -1
dpkg: error processing package openmediavault (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
openmediavault
E: Sub-process /usr/bin/dpkg returned an error code (1)


5.1 Stop this container.

5.2 We should mount the LXC container's virtual hard drive, for example into the /mnt directory on ProxmoxVE:
mount /dev/mapper/pve-vm--102--disk--1 /mnt/

5.3 We should chroot into the mounted LXC container virtual hard drive:
mount -t proc none /mnt/proc/
mount --rbind /dev/ /mnt/dev/
mount --rbind /sys/ /mnt/sys/
chroot /mnt/ /bin/bash

* This is the "true Gentoo way" ;-)
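The bind-mount-and-chroot steps above can be wrapped in a small helper. This is just a sketch (the `enter_rootfs` name and the DRY_RUN switch are my own additions, not part of the original guide); the dry-run mode lets you sanity-check the paths before touching a real container rootfs:

```shell
#!/bin/sh
# Sketch of steps 5.2/5.3 as one helper. With DRY_RUN=1 it only prints the
# commands instead of executing them, which is handy for verifying paths.
enter_rootfs() {
    root="$1"
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
    run mount -t proc none "$root/proc"
    run mount --rbind /dev "$root/dev"
    run mount --rbind /sys "$root/sys"
    run chroot "$root" /bin/bash
}
```

Usage: `DRY_RUN=1 enter_rootfs /mnt` prints the four commands; without DRY_RUN it performs them (run as root).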

5.4 Continue the installation of OpenMediaVault in the chroot:
apt-get install openmediavault

5.5 start post-installation command
omv-initsystem
but the installation stops at a first error:
run-parts: /usr/share/openmediavault/initsystem/20hostname exited with return code 1
Our container does not have its own hostname, and in the chroot we see the hypervisor's hostname ...

You can simply change the hostname later from the OpenMediaVault web management interface, so just move this file out of the way:
mv /usr/share/openmediavault/initsystem/20hostname /root/
Continue the first-time system initialization:
omv-initsystem
but the installation stops at a second error:
run-parts: /usr/share/openmediavault/initsystem/60rootfs exited with return code 2
An LXC container has no fstab-mounted rootfs. This step is only a rootfs check in OpenMediaVault, so in this case we can simply skip it as well by moving the file:
mv /usr/share/openmediavault/initsystem/60rootfs /root/
Continue the first-time system initialization:
omv-initsystem
After a few Perl warnings about locales, the installation procedure finishes successfully.

5.6 Exit from the chroot with Ctrl+D.

5.7 start this container from ProxmoxVE

5.8 Log into the OpenMediaVault web GUI.
Everything seems to be OK, but there is an error when applying changes: an Avahi daemon error ...
After some googling I found a solution for Avahi at https://loune.net/2011/02/avahi-setrlimit-nproc-and-lxc/.
We should patch the file /usr/share/openmediavault/mkconf/avahi-daemon in the container.
Go back to the ProxmoxVE shell and edit the file in the pre-mounted rootfs:
nano /mnt/usr/share/openmediavault/mkconf/avahi-daemon
At the end of it, replace the last line:
rlimit-nproc=3
with
#rlimit-nproc=3
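If you prefer a non-interactive edit, the same change can be made with sed. A sketch (the `comment_rlimit` helper name is mine, and it assumes the line appears verbatim in the file):

```shell
#!/bin/sh
# Comment out the rlimit-nproc=3 line in the given file, in place.
comment_rlimit() {
    sed -i 's/^rlimit-nproc=3$/#rlimit-nproc=3/' "$1"
}
```

Usage: `comment_rlimit /mnt/usr/share/openmediavault/mkconf/avahi-daemon`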

Our installation of OpenMediaVault is finished successfully.

Step 2 — installation of OpenMediaVault plugins

1. Go back to web-gui of OpenMediaVault

2. Download the plugins package from omv-extras.org via your browser:
http://omv-extras.org/debian/pool/m...org/openmediavault-omvextrasorg_3.3.3_all.deb
and install it via the OpenMediaVault web GUI.

That is all for the installation.

We need to reboot the whole ProxmoxVE host (to unmount /mnt/proc/, /mnt/dev/ and /mnt/sys/).

Step 3 — provide block device (hard disk) from ProxmoxVE to OpenMediaVault in LXC container to operate it as a storage device.

I'm attaching /dev/sda1.
This is my LSI MegaRAID mirrored volume.
I got the recipe from here:
https://forum.proxmox.com/threads/lxc-cannot-assign-a-block-device-to-container.23256/#post-118361

1. In the ProxmoxVE shell type these commands:

1.1 ls -la /dev/sda*
brw-rw---- 1 root disk 8, 0 Oct 20 20:12 /dev/sda
brw-rw---- 1 root disk 8, 1 Oct 20 20:12 /dev/sda1
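Instead of reading the major/minor numbers off the `ls -la` output by eye, they can be printed directly. A sketch (the `devnum` helper is my own addition), assuming GNU coreutils `stat`, whose `%t`/`%T` format fields print the device numbers in hex:

```shell
#!/bin/sh
# Print "major minor" in decimal for a device node; e.g. `devnum /dev/sda1`
# on the system above should print "8 1". %t/%T are hex, so convert.
devnum() {
    set -- $(stat -c '%t %T' "$1")
    printf '%d %d\n' "0x$1" "0x$2"
}
```

These are the numbers that go into the lxc.cgroup.devices.allow lines and the mknod hook below.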


1.2 Edit the LXC container config (with number XXX) to add a few new lines:
nano /etc/pve/lxc/XXX.conf
lxc.cgroup.devices.allow: b 8:0 rwm
lxc.cgroup.devices.allow: b 8:1 rwm
lxc.autodev: 1

***
These lines allow the container itself to use /dev/sda and /dev/sda1 (read, write, mknod).
But our LXC container has no block devices in /dev.
One way is to create the device nodes via mknod, but after a container reboot we would lose them. We should create a hook instead.

1.3 In the ProxmoxVE shell create this file:
nano /var/lib/lxc/XXX/mount-hook.sh
and add the following lines to it:
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda b 8 0
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda1 b 8 1

Don't forget to make this file executable: chmod +x /var/lib/lxc/XXX/mount-hook.sh
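The config line and the hook line for each device must agree on the major:minor pair. As a sketch, a tiny generator (my own `gen_dev` helper, not part of the guide) can print both from one place so they cannot drift apart:

```shell
#!/bin/sh
# Emit the container-config allow line and the matching hook mknod line
# for one block device, given its name and major/minor numbers.
gen_dev() {
    name="$1"; major="$2"; minor="$3"
    echo "lxc.cgroup.devices.allow: b $major:$minor rwm"
    echo "mknod -m 777 \${LXC_ROOTFS_MOUNT}/dev/$name b $major $minor"
}
```

Usage: `gen_dev sda 8 0; gen_dev sda1 8 1` reproduces the lines used in this guide; paste the first line of each pair into the container config and the second into mount-hook.sh.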

1.4 The last thing to do is to add an lxc.mount.entry line to the container config.
For my own purposes, I mounted /dev/sda1 via fstab into the /raid folder on ProxmoxVE, and my backup scripts operate in the /raid/backup folder. OpenMediaVault automatically mounts devices into the /media folder, in a subfolder named after the device UUID.
In ProxmoxVE shell type:
ls -la /dev/disk/by-uuid | grep sda1
lrwxrwxrwx 1 root root 10 Oct 20 20:12 7078dfe1-70c5-46eb-97ec-cca6d2fcff37 -> ../../sda1


As for me, I added this line to my LXC container config:
lxc.mount.entry: /raid media/7078dfe1-70c5-46eb-97ec-cca6d2fcff37 none bind,create=dir,optional 0 0
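For reference, the host-side /raid mount mentioned above can be made persistent with an /etc/fstab entry on the ProxmoxVE host like this (a sketch: the UUID is the one shown above, but the ext4 type is my assumption; substitute your actual filesystem):

```
# /etc/fstab on the ProxmoxVE host (ext4 is assumed)
UUID=7078dfe1-70c5-46eb-97ec-cca6d2fcff37  /raid  ext4  defaults  0  2
```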

1.5 Stop and start the LXC container to apply the changes.
After that you can simply mount /dev/sda1 from the OpenMediaVault web GUI and start normal operation of any OpenMediaVault (codename Erasmus) services in the LXC container as a NAS.

Voilà!

I successfully tested these services of OpenMediaVault in the LXC container:
- nfs
- samba
- ftp
- ssh

Don't forget to set correct user and group permissions on it ;-) In my experience, wrong permissions are the most common source of errors in OpenMediaVault.

My own complete config file (/etc/pve/lxc/100.conf) is:

arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: omv
memory: 2048
nameserver: 192.168.80.1
net0: name=eth0,bridge=vmbr80,hwaddr=82:EA:BA:52:09:CD,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-100-disk-1,size=2G
searchdomain: omv
swap: 1024
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw
lxc.cgroup.devices.allow: b 8:0 rwm
lxc.cgroup.devices.allow: b 8:1 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/100/mount-hook.sh


My own complete hook file (/var/lib/lxc/100/mount-hook.sh) is:
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda b 8 0
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda1 b 8 1


My ProxmoxVE is:
pve-manager/4.3-6/460dfe4c (running kernel: 4.4.21-1-pve)

I hope this short manual will help a few people save costs and build their own SOHO NAS based on truly free software. This manual is not suitable for production and/or commercial use as a NAS solution.

P.S.
Please pardon my English.
 
Excellent, thank you! This was exactly what I was looking for.
The only caveat I encountered was that the 40network file additionally needed to be moved.
 
Thanks for the guide, worked for me.
Small problem here: clicking the 'System Information' tab gives an error


Error #0:
exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; blkid -o full 2>&1' with exit code '2': ' in /usr/share/php/openmediavault/system/process.inc:175
Stack trace:
#0 /usr/share/php/openmediavault/system/filesystem/backend/manager.inc(193): OMV\System\Process->execute(Array)
#1 /usr/share/php/openmediavault/system/filesystem/backend/manager.inc(116): OMV\System\Filesystem\Backend\Manager->enumerate()
#2 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(205): OMV\System\Filesystem\Backend\Manager->getBackendById('/dev/root')
#3 [internal function]: OMVRpcServiceFileSystemMgmt->enumerateMountedFilesystems(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#5 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('enumerateMounte...', Array, Array)
#6 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('FileSystemMgmt', 'enumerateMounte...', Array, Array, 1)
#7 {main}
 
Hi.

I don't have any errors. What version of OMV do you have? I have 3.0.52 with the latest plugins.
Did you completely upgrade Debian in the container?

Try to:
- before starting the container, mount /sys, /proc and /dev into its virtual rootfs from the Proxmox shell;
- start the container;
- update the whole system (not from the web GUI);
- shut down the container;
- umount /sys, /proc and /dev;
- start the container and test it.

This should help to resolve software errors in OMV.
 
Hello, I am running ZFS here and I'm not sure if this is ready for ZFS yet. I noted a couple of changes, along with moving the 40network file out of the way, which I did have to do.

during Step 1

"3. You should add a few lines into LXC container config before it first start:
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw"


I am not sure if all of these are needed; I added them all for now for a better chance of getting things working. I may modify them later and report back here, as I know the profile line is no longer needed to get networking working in my version of Proxmox with the Debian 8 container. I'm not exactly sure about the other lines yet.

"5.2 We should mount the LXC container virtual hard drive for example into /mnt directory in ProxmoxVE
mount /dev/mapper/pve-vm--102--disk--1 /mnt/"

I ignored this, as I think the subvolumes are already mounted.

"5.3 We should chroot into the mounted LXC container virtual hard drive:
mount -t proc none /mnt/proc/
mount --rbind /dev/ /mnt/dev/
mount --rbind /sys/ /mnt/sys/
chroot /mnt/ /bin/bash

* This is a «true Gentoo way» ;-)"

I changed mine to match ZFS setup

mount -t proc none /apool/subvol-150-disk-1/proc/
mount --rbind /dev/ /apool/subvol-150-disk-1/dev/
mount --rbind /sys/ /apool/subvol-150-disk-1/sys/
chroot /apool/subvol-150-disk-1/ /bin/bash

"5.8 login into web-gui of OpenMediaVault
It seems to be all ok. But there is an error with applying changes — Avahi-daemon error …
After a few time googling I have found a solution for Avahi in https://loune.net/2011/02/avahi-setrlimit-nproc-and-lxc/.
We should patch the file - /usr/share/openmediavault/mkconf/avahi-daemon in container.
Go back to ProxmoxVE shell and edit file in pre-mounted rootfs:
nano /mnt/usr/share/openmediavault/mkconf/avahi-daemon
At the end of it we should to remove the last line:
rlimit-nproc=3
by
#rlimit-nproc=3"

This may not be needed anymore, as I did not have to modify it; maybe they finally fixed it upstream in OMV.

During Step 3

I'm not sure of the correct syntax to add the below to the container config, so I just followed the original post, but it might be easier to do something similar to this, and maybe avoid the hook file too:

lxc config device add disk unix-block path=/dev/sda

Finally, I see a block device, and it is one disk of a ZFS mirror pool; I'm not sure if that is safe. Also, I do not seem to be able to see or add any filesystems in OMV, and it does not seem to have ZFS as a filesystem type.
 


Hi.
I have no experience with ZFS.

Yesterday I tested adding a mount point (an LVM thin volume with ext4) to the OMV LXC container. I need to use it as ftpfs.
I mounted it into the container as /dev/vda1.
Everything was OK:
OMV recognized it, and successfully mounted and used it, until an LXC reboot.
OMV uses fstab to mount storage, and I've found no way to process the container's fstab automatically on boot.
So I created a trivial systemd service file with only one command, "mount -a".
After that I successfully rebooted the LXC container with OMV, and the storage mounted automatically.
You can use any virtual or physical block device with OMV as its storage inside LXC.
Here is a simple description of providing an LVM volume as a storage volume to OMV.
In my case:
- I created a 2 GB mount point, mounted at /media/mp0 (any path you wish)
- Proxmox indicates that it is local-lvm:vm-105-disk-2
To find the actual /dev/dm-XX number you need to run: ls -la /dev/pve/vm-105-disk-*
lrwxrwxrwx 1 root root 8 Mar 11 13:28 /dev/pve/vm-105-disk-1 -> ../dm-36
lrwxrwxrwx 1 root root 8 Mar 11 13:28 /dev/pve/vm-105-disk-2 -> ../dm-38
After that I added 2 lines to mount-hook.sh:
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/vda b 251 38
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/vda1 b 251 38
After starting OMV you can see /dev/vda1 with ext4 in the Filesystems tab of the OMV web UI.
You can mount it and start operating.
The trivial service that mounts all devices listed in the OMV fstab file is:
-----------
[Unit]
Description=Stupid Storage Mount for OMV in lxc

[Service]
Type=oneshot
ExecStart=/bin/mount -a

[Install]
WantedBy=multi-user.target
---------------

It doesn't matter whether the storage for OMV is a physical or a virtual drive, as long as you correctly create the block device in the LXC container's /dev directory.
It should have /dev/sdX and /dev/sdX1. As you can see above, it is possible to use the same block device for both nodes.
To use ZFS in OMV you need to provide the correct partition as storage to OMV and install the ZFS plugin from omv-extras.org.
Hope this will be enough.
To correctly install and update plugins, read this: http://forum.openmediavault.org/index.php/Thread/14931-No-key-found-for-using-apt-get-upgrades/
 
Hi.

As for me:
- I created mount point with 2Gb, mounted to /media/mp0 (any path as you wish)
- proxmox indicate that it is local-lvm:vm-105-disk-2

Hello, I am a newbie and I'm following your post to try to make a home NAS,
but I don't know how to do this step.

Can you tell me in more detail?
Thanks.

Sorry for my bad English.
 


In a few days I'll write a simpler guide for this installation and let you know about it.
 
So, in Proxmox LXC containers you can simply grow the main (root) volume of the container, or add another volume (additional virtual storage) as a "mount point". For example, I use a 4 GB mount point at /var/lib/mysql for MySQL databases. But a mount point volume is mounted only with root:root permissions; you should keep this in mind.

In the current release of OMV (including Arrakis), the path for mounting storage devices has changed from /media/mp0 to /srv/dev-xxxx (where xxxx is, for example, sda1).

So if you are planning to use another virtual volume in your container as storage for OMV, you should use this path for mounting.
Example (assume the OMV container has ID 110 in the Proxmox GUI).
1. Create a mount point for container number 110: select "Resources" and push the "Add" button; there will be only one choice, "Mount Point".
2. Select the desired size and target storage, select "Backup" (if you want to include it in backup operations) and choose "/mnt/dev-sda1" as the path.
Don't use /srv/dev-sda1 as the path, because OMV itself uses that path for mounting /dev/sda1.
3. Go to the Proxmox shell and investigate the system IDs of the virtual disks for container number 110:
ls -la /dev/pve/*110*
lrwxrwxrwx 1 root root 8 Sep 3 08:11 /dev/pve/sas-vm--110--disk--1 -> ../dm-83
lrwxrwxrwx 1 root root 8 Sep 3 08:21 /dev/pve/sas-vm--110--disk--2 -> ../dm-84
After that you need to run ls -la /dev/dm-84 to find the block device major number; here this number is 253.
brw-rw---- 1 root disk 253, 84 Sep 3 08:22 /dev/dm-84

This indicates that the first (root) virtual partition is /dev/dm-83 and your second virtual partition (mounted as /mnt/dev-sda1 in your rootfs partition) is /dev/dm-84.
4. There are no block devices (hard drives, as far as you are concerned) in LXC containers. You should create them first. This can be done with a hook.
Your custom hook file for container number 110 should be placed in the /var/lib/lxc/110/ folder.

If you are planning to use the NFS server in OMV, you should also:
- install nfs-kernel-server on Proxmox
- investigate the ID of /dev/fuse:
ls -la /dev/fuse
crw-rw-rw- 1 root root 10, 229 Sep 2 08:04 /dev/fuse
- add an additional mknod line to mount-hook.sh for /dev/fuse

/var/lib/lxc/110/mount-hook.sh said:
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda b 253 84
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda1 b 253 84
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229

These lines create /dev/sda, /dev/sda1 and /dev/fuse inside the /dev directory of the container with ID 110.

Don't forget to set the executable bit of mount-hook.sh with this command: chmod +x /var/lib/lxc/110/mount-hook.sh

You should also add these lines to the container config file in /etc/pve/lxc/110.conf
/etc/pve/lxc/110.conf said:
...
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: b 253:84 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/110/mount-hook.sh

After these manipulations you can see /dev/sda, /dev/sda1 and /dev/fuse inside the container with number 110.

After that you can see /dev/sda1 in the OMV web GUI and you need to mount it. But after a reboot you lose the mounted partition, because LXC doesn't execute /etc/fstab. That is why I created the simple mount service file mentioned above. To be correct, there should also be a section for automatic unmounting, but I'm not familiar enough with systemd to add that option to the service file. Because this section is missing, the container takes about a minute to reboot; as far as I understand, during this time the LXC daemon is waiting for the abnormal termination of the "inside container" mounting procedure.
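The missing unmount section mentioned above could look like this. An untested sketch: with Type=oneshot plus RemainAfterExit=yes, the unit stays "active" after boot, so systemd runs ExecStop at container shutdown (the /dev/vda1 device is the example used earlier in this thread; adjust to your own device):

```ini
[Unit]
Description=Stupid Storage Mount for OMV in lxc

[Service]
Type=oneshot
# keep the unit "active" after ExecStart so ExecStop actually runs at shutdown
RemainAfterExit=yes
ExecStart=/bin/mount -a
# lazy unmount of the data device; adjust to your device or mount point
ExecStop=/bin/umount -l /dev/vda1

[Install]
WantedBy=multi-user.target
```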

Sorry for my bad English as well.
 
I have hit the same issue as in the first post, but before I proceed with making it work, I was wondering whether anyone has considered just running OMV as a virtual machine instead of a container? The purist in me would like to use a container, but the hacks required seem to be a little high-maintenance.

My requirements are simple, as I have an existing lvm/mdadm set: either I pass block devices through for software RAID in OMV, or I create the RAID in Proxmox and bind-mount folders into the OMV container. Whichever is easiest :)

Would love to hear thoughts.
 
I did everything according to your tutorial, except that I used OMV 4.0 in a Debian 9.3 LXC container.
Everything works great, except that I am stuck at the last point: making the hook mount the data drives (sdb2 and sdc2).
I got this:
Code:
The configuration file contains legacy configuration keys.
Please update your configuration file!
and this is my container config file (/etc/pve/lxc/108.conf):
Code:
arch: amd64
cores: 4
hostname: omv
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.0.1,hwaddr=5A:8A:F0:4B:10:6F,ip=192.168.0.7/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-108-disk-1,size=8G
swap: 2048
lxc.aa_profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw
lxc.cgroup.devices.allow: b 8:16 rwm
lxc.cgroup.devices.allow: b 8:17 rwm
lxc.cgroup.devices.allow: b 8:18 rwm
lxc.cgroup.devices.allow: b 8:32 rwm
lxc.cgroup.devices.allow: b 8:33 rwm
lxc.cgroup.devices.allow: b 8:34 rwm
lxc.autodev: 1
lxc.mount.entry: /media/11772968-9c22-4652-8f30-96b0e5a16a94 media/11772968-9c22-4652-8f30-96b0e5a16a94 none bind,create=dir,optional 0 0
lxc.mount.entry: /media/60864452-f794-4ef7-bd58-a2ec42f092e5 media/60864452-f794-4ef7-bd58-a2ec42f092e5 none bind,create=dir,optional 0 0
Are some config variables no longer valid?
Thanks for any help.
 
Hi. You should change the line:
lxc.aa_profile: unconfined
to a new syntax:
lxc.apparmor.profile: unconfined

in container config file.
 
I have made the change and now I don't see any errors, but unfortunately in OMV container I don't see any drives ...
Any suggestions?
 

Hi. Do you see the 'mapped' drives inside the container with "ls -la /dev/sd*"?

If not, tomorrow morning I'll rewrite the howto above to adapt it to Debian Stretch and the latest OMV versions. There are a few differences.
 
nope:
Code:
root@omv:~# ls -la /dev/sd*
ls: cannot access '/dev/sd*': No such file or directory
but they're mounted on the host:
Code:
root@serwer:~# ls -la /dev/sd*
brw-rw---- 1 root disk 8,  0 Jan  5 19:30 /dev/sda
brw-rw---- 1 root disk 8,  1 Jan  5 19:30 /dev/sda1
brw-rw---- 1 root disk 8,  2 Jan  5 19:30 /dev/sda2
brw-rw---- 1 root disk 8,  3 Jan  5 19:30 /dev/sda3
brw-rw---- 1 root disk 8, 16 Jan  5 19:30 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jan  5 19:30 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Jan  5 19:30 /dev/sdb2
brw-rw---- 1 root disk 8, 32 Jan  5 19:30 /dev/sdc
brw-rw---- 1 root disk 8, 33 Jan  5 19:30 /dev/sdc1
brw-rw---- 1 root disk 8, 34 Jan  5 19:30 /dev/sdc2
and
Code:
root@serwer:~# mount | grep /media
/dev/sdb2 on /media/11772968-9c22-4652-8f30-96b0e5a16a94 type ext4 (rw,relatime,data=ordered)
/dev/sdc2 on /media/60864452-f794-4ef7-bd58-a2ec42f092e5 type ext4 (rw,relatime,data=ordered)
 
Hi.
You didn't correctly create /dev/sd* via the hook.

If you need to attach /dev/sdb2 and /dev/sdc2, you should attach:
/dev/sdb
/dev/sdb1
/dev/sdb2
/dev/sdc
/dev/sdc1
/dev/sdc2

<container_id>.conf
.....
lxc.cgroup.devices.allow: b 8:16 rwm
lxc.cgroup.devices.allow: b 8:17 rwm
lxc.cgroup.devices.allow: b 8:18 rwm
lxc.cgroup.devices.allow: b 8:32 rwm
lxc.cgroup.devices.allow: b 8:33 rwm
lxc.cgroup.devices.allow: b 8:34 rwm
.....
<mount-hook.sh>
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdb b 8 16
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdb1 b 8 17
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdb2 b 8 18
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdc b 8 32
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdc1 b 8 33
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sdc2 b 8 34
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229

Don't forget to check in the container config:
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/<your_container_id>/mount-hook.sh

And chmod +x /var/lib/lxc/<your_container_id>/mount-hook.sh

Hope this will solve the "... not found" inside your container.
 
Hi,

I managed to install omv4 successfully according to the steps above.

But I'm unable to get the temperature to show under the web GUI Storage > SMART > Devices display. It shows "n/a" for the hard disk temperature.

It seems /dev/sda has some permission restriction in the container that smartctl needs in order to read the hard disk info:
Read Device Identity failed: Operation not permitted
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options

I'd appreciate it if anyone could advise. Thanks.
 
root@pve1:~# nano /etc/pve/lxc/101.conf
lxc.apparmor.profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw
lxc.cgroup.devices.allow: b 8:0 rwm
lxc.cgroup.devices.allow: b 8:1 rwm
lxc.cgroup.devices.allow: b 8:2 rwm
lxc.cgroup.devices.allow: b 8:3 rwm
lxc.cgroup.devices.allow: b 8:4 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/101/mount-hook.sh

root@pve1:~# nano /var/lib/lxc/101/mount-hook.sh
#!/bin/sh
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda b 8 0
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda1 b 8 1
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda2 b 8 2
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda3 b 8 3
mknod -m 777 ${LXC_ROOTFS_MOUNT}/dev/sda4 b 8 4

root@pve1:~# chmod +x /var/lib/lxc/101/mount-hook.sh

root@pve1:~# ls /dev/sd* -l

brw-rw---- 1 root disk 8, 0 Jan 21 23:30 /dev/sda
brw-rw---- 1 root disk 8, 1 Jan 21 23:30 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jan 21 23:30 /dev/sda2
brw-rw---- 1 root disk 8, 3 Jan 21 23:30 /dev/sda3
brw-rw---- 1 root disk 8, 4 Jan 21 23:30 /dev/sda4

root@pve1:~# smartctl -a /dev/sda
...
190 Airflow_Temperature_Cel 0x0032 069 057 000 Old_age Always 31
...
 
