Hi everyone,
I’m running into an issue with OpenEBS Mayastor Replicated Engine when trying to attach a PVC on a K3s node running inside an LXC container.
The problem may stem from the Replicated Engine’s reliance on NVMe-oF (nvme-tcp and nvme-fabrics kernel modules). Since /dev is isolated in LXC containers, the container cannot access or mount the virtual NVMe drives. There’s also a possibility that something in my container configuration is missing or misconfigured, but I’m not certain.
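For what it's worth, the checks I plan to run on the Proxmox host look roughly like this (the container shares the host kernel, so I assume the NVMe-oF initiator modules have to be loaded on the host rather than inside the CT; this is a sketch, not output from my setup):
Bash:
# On the Proxmox host: confirm the NVMe-oF initiator modules are loaded
lsmod | grep -E 'nvme_tcp|nvme_fabrics'

# Load nvme-tcp if missing (it pulls in nvme-fabrics as a dependency)
modprobe nvme-tcp

# Make the modules load on boot
printf 'nvme-tcp\nnvme-fabrics\n' > /etc/modules-load.d/nvme.conf

# The fabrics control device should then exist on the host
ls -l /dev/nvme-fabrics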
Here is my host and environment setup:
- Proxmox VE version: 9.0.10 (Kernel: 6.8.12-9-pve)
- LXC container: Ubuntu 25.04 template
- K3s version: v1.32.6+k3s1
- Storage engine: OpenEBS Mayastor (openebs-two-replicas)
- Workload: Simple Pod + PVC using the openebs-two-replicas StorageClass
Description of the Problem
When a Pod attempts to mount a PVC provisioned by openebs-two-replicas, the node running inside the LXC container fails to attach the volume. The relevant logs show the following errors:
Bash:
## Pod describe
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedAttachVolume 23s attachdetach-controller Multi-Attach error for volume "pvc-4ff18aa2-94d8-42ec-b59d-63823a76c3ff" Volume is already exclusively attached to one node and can't be attached to another
Normal SuccessfulAttachVolume 20s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-4ff18aa2-94d8-42ec-b59d-63823a76c3ff"
Warning FailedMount 9s (x5 over 18s) kubelet MountVolume.MountDevice failed for volume "pvc-4ff18aa2-94d8-42ec-b59d-63823a76c3ff" : rpc error: code = Internal desc = Failed to stage volume 4ff18aa2-94d8-42ec-b59d-63823a76c3ff: error preparing device /dev/nvme1n1: mkfs.ext4 command failed: mke2fs 1.46.2 (28-Feb-2021)
The file /dev/nvme1n1 does not exist and no size was specified.
## io-engine logs
[2025-10-26T19:43:38.463796854+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:777] Subsystem start in progress... self=NvmfSubsystem { id: 1, subtype: "NVMe", subnqn: "nqn.2019-05.io.openebs:b7266e57-cb11-4072-a594-2209bbb08c6f", sn: "DCS1464FA6E7ED26F057", mn: "Mayastor NVMe controller", allow_any_host: false, ana_reporting: 0, listeners: Some([Transport ID { trtype: 3, trstring: "TCP", traddr: "10.0.10.11", trsvcid: "8420" }]) }
[2025-10-26T19:43:38.463898453+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:826] Subsystem start completed: Ok self=NvmfSubsystem { id: 1, subtype: "NVMe", subnqn: "nqn.2019-05.io.openebs:b7266e57-cb11-4072-a594-2209bbb08c6f", sn: "DCS1464FA6E7ED26F057", mn: "Mayastor NVMe controller", allow_any_host: false, ana_reporting: 0, listeners: Some([Transport ID { trtype: 3, trstring: "TCP", traddr: "10.0.10.11", trsvcid: "8420" }]) }
[2025-10-26T19:43:38.465504146+00:00 INFO io_engine::lvs::lvs_lvol:lvs_lvol.rs:173] Lvol 'filepool-1/4851ea42-11a9-40db-a3a3-6686e374e368/b7266e57-cb11-4072-a594-2209bbb08c6f' [5.00 GiB]: shared as NVMF
[2025-10-26T19:43:38.497404067+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:368] Host 'nqn.2019-05.io.openebs:node-name:master2-k3s' connected to subsystem 'nqn.2019-05.io.openebs:b7266e57-cb11-4072-a594-2209bbb08c6f' on replica 'Lvol 'filepool-1/4851ea42-11a9-40db-a3a3-6686e374e368/b7266e57-cb11-4072-a594-2209bbb08c6f' [5.00 GiB]'
[2025-10-26T20:08:00.767128664+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:296] Host 'nqn.2019-05.io.openebs:node-name:master1-k3s' connected to subsystem 'nqn.2019-05.io.openebs:4ff18aa2-94d8-42ec-b59d-63823a76c3ff' on nexus 'Nexus '4ff18aa2-94d8-42ec-b59d-63823a76c3ff' [open]'
[2025-10-26T20:08:01.140329178+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:316] Host 'nqn.2019-05.io.openebs:node-name:master1-k3s' disconnected from subsystem 'nqn.2019-05.io.openebs:4ff18aa2-94d8-42ec-b59d-63823a76c3ff' on nexus 'Nexus '4ff18aa2-94d8-42ec-b59d-63823a76c3ff' [open]'
[2025-10-26T20:08:01.749064584+00:00 INFO io_engine::subsys::nvmf::subsystem:subsystem.rs:296] Host 'nqn.2019-05.io.openebs:node-name:master1-k3s' connected to subsystem 'nqn.2019-05.io.openebs:4ff18aa2-94d8-42ec-b59d-63823a76c3ff' on nexus 'Nexus '4ff18aa2-94d8-42ec-b59d-63823a76c3ff' [open]'
The logs show that the virtual NVMe device does not exist on the node: the io-engine accepts the connection for the PVC, but it disconnects again after a few seconds and stays in this connect/disconnect loop.
I suspect the root of the problem is that LXC containers cannot request the host to create virtual NVMe devices in /dev due to device isolation.
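To sanity-check that theory, I want to repeat the connect manually from inside the container, using the target address and subsystem NQN that appear in the io-engine logs above (sketch only; it needs the nvme-cli package, and I'm not sure a manual connect is completely harmless while the CSI plugin keeps retrying):
Bash:
# Inside the LXC node: try the same NVMe-oF connect the CSI node plugin performs
apt install -y nvme-cli
nvme connect -t tcp -a 10.0.10.11 -s 8420 \
  -n nqn.2019-05.io.openebs:4ff18aa2-94d8-42ec-b59d-63823a76c3ff

# If the connect succeeds but no /dev/nvmeXnY node shows up in the container,
# that points at the /dev isolation rather than the fabric itself
nvme list
ls -l /dev/nvme*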
For comparison, here is the lsblk output from a bare-metal node where the virtual NVMe drive is successfully created:
Bash:
nvme1n1 259:7 0 5G 0 disk /var/lib/kubelet/pods/31bc84a6-1cfa-4bb6-85c2-6c70299c0b91/volume-subpaths/pvc-e1ad08a2-f2e7-4d4b-94d4-568cf2351250/sharelatex/4
/var/lib/kubelet/pods/31bc84a6-1cfa-4bb6-85c2-6c70299c0b91/volume-subpaths/pvc-e1ad08a2-f2e7-4d4b-94d4-568cf2351250/sharelatex/3
/var/lib/kubelet/pods/31bc84a6-1cfa-4bb6-85c2-6c70299c0b91/volume-subpaths/pvc-e1ad08a2-f2e7-4d4b-94d4-568cf2351250/sharelatex/2
/var/lib/kubelet/pods/31bc84a6-1cfa-4bb6-85c2-6c70299c0b91/volume-subpaths/pvc-e1ad08a2-f2e7-4d4b-94d4-568cf2351250/sharelatex/1
/var/lib/kubelet/pods/31bc84a6-1cfa-4bb6-85c2-6c70299c0b91/volumes/kubernetes.io~csi/pvc-e1ad08a2-f2e7-4d4b-94d4-568cf2351250/mount
/var/lib/kubelet/plugins/kubernetes.io/csi/io.openebs.csi-mayastor/2534e5968ec530ac40fe4dcb1bba9a6c9bdfc33fa0895af9d9fc36f15e925e4d/globalmount
nvme2n1 259:9 0 5G 0 disk /var/lib/kubelet/pods/7762dffc-e3db-4f8a-ae66-27f5ebf610ff/volume-subpaths/pvc-eafb3067-c6bd-4386-afd4-40524b027703/jellyseerr/0
/var/lib/kubelet/pods/7762dffc-e3db-4f8a-ae66-27f5ebf610ff/volume-subpaths/pvc-eafb3067-c6bd-4386-afd4-40524b027703/jellyfin/0
/var/lib/kubelet/pods/7762dffc-e3db-4f8a-ae66-27f5ebf610ff/volumes/kubernetes.io~csi/pvc-eafb3067-c6bd-4386-afd4-40524b027703/mount
/var/lib/kubelet/plugins/kubernetes.io/csi/io.openebs.csi-mayastor/b5278d662d606268c4790fca1647d0364d02a2d25ce1d21bd59ac23c08540b6f/globalmount
I also tried adding lxc.cgroup2.devices.allow: b 259:* to the LXC config to allow access to NVMe block devices, but it didn't solve the problem (see also the manual mknod test I sketch after the configuration below).
Below is the LXC configuration file:
INI:
root@pve-hp:~# cat /etc/pve/lxc/101.conf
arch: amd64
cores: 4
dev0: /dev/sr0
hostname: master1-k3s
memory: 16384
nameserver: 10.0.10.1
net0: name=mgmt,bridge=vmbr0,gw=10.0.10.1,hwaddr=bc:24:11:41:ce:c6,ip=10.0.10.11/24,ip6=dhcp,tag=10,type=veth
net1: name=dmz,bridge=vmbr0,firewall=1,gw=10.0.200.1,hwaddr=bc:24:11:dc:57:40,ip=10.0.200.11/24,ip6=dhcp,tag=200,type=veth
net2: name=iot,bridge=vmbr0,firewall=1,gw=10.0.150.1,hwaddr=BC:24:11:44:76:55,ip=10.0.150.11/24,ip6=auto,tag=150,type=veth
onboot: 0
ostype: ubuntu
rootfs: local-zfs:subvol-101-disk-1,mountoptions=discard;noatime,replicate=0,size=250G
searchdomain: home localdomain
startup: order=3
swap: 0
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup.devices.allow: c 21:* rwm
lxc.cgroup2.devices.allow: c 10:240 rwm
lxc.cgroup2.devices.allow: c 10:196 rwm
lxc.cgroup2.devices.allow: b 259:* rwm
lxc.mount.entry: /dev/sg0 dev/sg0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir 0 0
lxc.mount.entry: /dev/dri/renderD128 dev/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.mount.entry: /dev/nvme-fabrics dev/nvme-fabrics none bind,optional,create=file # passthrough host nvme-fabrics device
lxc.mount.entry: /dev/vfio dev/vfio none bind,optional,create=dir # openebs io-engine needs vfio
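One thing I want to try next, based on the config above: since b 259:* rwm is allowed and lxc.cap.drop is empty, creating the missing node by hand should tell me whether the only missing piece is the /dev entry itself. A rough sketch, assuming the container is privileged and keeps CAP_MKNOD (259:7 is just the example value from the bare-metal lsblk above, the real numbers would come from the host after a successful connect):
Bash:
# On the Proxmox host, after a connect: note the MAJ:MIN of the new namespace
lsblk -o NAME,MAJ:MIN | grep nvme

# Inside the container: recreate the node manually and see if it becomes usable
mknod /dev/nvme1n1 b 259 7   # 259:7 is illustrative, use the real numbers
ls -l /dev/nvme1n1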
Here are the YAML manifests for the StorageClass and DiskPools:
YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-two-replicas
parameters:
  protocol: nvmf
  repl: '2'
provisioner: io.openebs.csi-mayastor
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: openebs.io/v1beta3
kind: DiskPool
metadata:
  name: filepool-1
  namespace: openebs
spec:
  disks:
    - aio:///var/local/openebs/io-engine/disk1.img?blk_size=4096
  node: master1-k3s
---
apiVersion: openebs.io/v1beta3
kind: DiskPool
metadata:
  name: filepool-2
  namespace: openebs
spec:
  disks:
    - aio:///var/local/openebs/io-engine/disk1.img?blk_size=4096
  node: master2-k3s # bare-metal node
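And for completeness, the kind of test workload I use to trigger this (the names are just illustrative; any Pod with a PVC bound to the openebs-two-replicas class behaves the same for me):
Bash:
# Illustrative reproduction: a minimal PVC plus a Pod that mounts it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: openebs-two-replicas
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF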