unable to activate storage - directory is expected to be a mount point but is not mounted

skythanil (New Member) · Feb 21, 2024
Hello!

I am an admitted newbie here, so forgive my general ignorance! I have Proxmox VE running with Docker in an LXC (on reflection I should have just run my Docker containers as LXCs, but we are where we are!), with an external hard drive as a media drive. This has been operating mostly smoothly for the past 6 months or so. I was doing a bit of office reorganisation earlier today, and after shutting down and eventually rebooting the server, I'm now having issues with the mount for the external HDD.

The mount was set up through the Proxmox GUI, and when I boot the server up it all appears normally without any errors. I can see CT volumes and templates and stats about drive usage etc. without issue. However once the Docker LXC starts, the mounted HDD becomes unavailable with the error 'unable to activate storage - directory is expected to be a mount point but is not mounted'.

[Attachment: Capture.PNG]

I can only assume something about the interaction between the Docker LXC and the mounted HDD is causing the issue. Nothing about the configuration has changed since before the reboot, and I'm running up against my lack of know-how in trying to fix it!

This is how the container/mount point is configured for the LXC:

[Attachment: Capture2.PNG]

[Attachment: Capture3.PNG]

I'm mostly concerned with preserving the data on the HDD if possible, as I know it can be easy to lose data if you're not careful with this kind of thing.

Below are some shell outputs from my attempts to diagnose the issue, in case they are of any help.

Thanks in advance!

Code:
root@Home:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 447.1G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0     1G  0 part
└─sda3                         8:3    0 446.1G  0 part
  ├─pve-swap                 253:0    0   7.7G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   3.3G  0 lvm 
  │ └─pve-data-tpool         253:4    0 319.9G  0 lvm 
  │   ├─pve-data             253:5    0 319.9G  1 lvm 
  │   ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm 
  │   └─pve-vm--100--disk--2 253:7    0    32G  0 lvm 
  └─pve-data_tdata           253:3    0 319.9G  0 lvm 
    └─pve-data-tpool         253:4    0 319.9G  0 lvm 
      ├─pve-data             253:5    0 319.9G  1 lvm 
      ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm 
      └─pve-vm--100--disk--2 253:7    0    32G  0 lvm 
sdb                            8:16   0   1.8T  0 disk
└─sdb1                         8:17   0   1.8T  0 part /mnt/pve/Storage

Code:
root@Home:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup


lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images


dir: Storage
        path /mnt/pve/Storage
        content backup,images,snippets,rootdir,vztmpl,iso
        is_mountpoint 1
        nodes Home
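
For reference, the error in the thread title comes directly from the is_mountpoint 1 line above: with that option set, Proxmox activates the directory storage only if the path really is a mount point. You can perform the same check by hand with mountpoint(1) from util-linux. This is a generic sketch, not taken from the thread; /proc is used only because it is mounted on any running Linux system, while on the affected host you would check /mnt/pve/Storage:

```shell
# The same condition Proxmox tests for a dir storage with 'is_mountpoint 1':
# mountpoint(1) exits 0 only when the directory is the root of a mounted
# filesystem. /proc stands in here for /mnt/pve/Storage from the thread.
if mountpoint -q /proc; then
    echo "/proc: mounted"
else
    echo "/proc: not a mount point"
fi
```

On the host from this thread, mountpoint /mnt/pve/Storage (or findmnt /mnt/pve/Storage) would show whether the path is actually backed by /dev/sdb1 at the moment the error appears.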

Code:
root@Home:~# systemctl status mnt-pve-Storage.mount
● mnt-pve-Storage.mount - Mount storage 'Storage' under /mnt/pve
     Loaded: loaded (/etc/systemd/system/mnt-pve-Storage.mount; enabled; preset: enabled)
     Active: active (mounted) since Wed 2024-02-21 03:40:45 GMT; 11min ago
      Where: /mnt/pve/Storage
       What: /dev/sdb1
      Tasks: 0 (limit: 9318)
     Memory: 8.0K
        CPU: 7ms
     CGroup: /system.slice/mnt-pve-Storage.mount


Feb 21 03:40:44 Home systemd[1]: Mounting mnt-pve-Storage.mount - Mount storage 'Storage' under /mnt/pve...
Feb 21 03:40:45 Home systemd[1]: Mounted mnt-pve-Storage.mount - Mount storage 'Storage' under /mnt/pve.

Code:
Disk /dev/sdb: 1.82 TiB, 2000398933504 bytes, 3907029167 sectors
Disk model: Expansion      
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 2B890620-F14F-4821-8A01-20F3E364D2D0


Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3907029133 3907027086  1.8T Linux filesystem

Code:
root@Home:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
#/dev/sdb1
UUID=2198105b-0336-4937-8919-95210637874d /mnt/pve/Storage xfs defaults 0 0
 
However once the Docker LXC starts, the mounted HDD becomes unavailable with the error 'unable to activate storage - directory is expected to be a mount point but is not mounted'.
Docker in LXC is an unsupported configuration, since it is known to cause (subtle) problems - Docker likes to do weird stuff. There are a lot of threads on the forum about that, and it is also mentioned in the admin guide.
So I suggest migrating all your Docker stuff to VMs or - as you suggested yourself - using LXCs directly :)

Anyway, since this only happens once the LXC is started - could you also share the output of pct config 103? (103 being the ID of the misbehaving CT, I assume.)

Also, can you start the container using pct start 103 -debug and share the output? That will provide a lot more information about what is happening. You can also look into the system log afterwards using journalctl -b, in case there is anything related.

I'm mostly concerned with preserving the data on the HDD if possible, as I know it can be easy to lose data if you're not careful with this kind of thing.
As long as you don't create a new storage on it, delete anything, wipe it, etc., it should generally be fine.
 

Thanks Christoph! Assuming this can be resolved, my first move will be to migrate everything over to a VM or LXCs!

Here is the output of pct config 103:

Code:
root@Home:~# pct config 103
arch: amd64
cores: 2
description: # Docker LXC%0A  ### https%3A//tteck.github.io/Proxmox/%0A  <a href='https%3A//ko-fi.com/D1D7EP4GF'><img src='https%3A//img.shields.io/badge/%E2%98%95-Buy me a coffee-red' /></a>%0A
features: keyctl=1,nesting=1
hostname: docker
memory: 2048
mp0: Storage:103/vm-103-disk-1.raw,mp=/mnt/Storage,backup=1,size=1000G
net0: name=eth0,bridge=vmbr0,hwaddr=12:70:52:F9:8E:0B,ip=dhcp,type=veth
onboot: 0
ostype: debian
rootfs: Storage:103/vm-103-disk-0.raw,size=1000G
swap: 512
tags: proxmox-helper-scripts
unprivileged: 1

Running the container in debug mode:

Code:
root@Home:~# pct start 103 -debug
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
DEBUG    seccomp - ../src/lxc/seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "ioctl errno 1 [1,0x9400,SCMP_CMP_MASKED_EQ,0xff00]"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[16:ioctl] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO     start - ../src/lxc/start.c:lxc_init:881 - Container "103" is initialized
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1391 - The monitor process uses "lxc.monitor/103" as cgroup
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1499 - The container process uses "lxc/103/ns" as inner and "lxc/103" as limit cgroup
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUSER
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWNS
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWPID
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUTS
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWIPC
INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWCGROUP
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved user namespace via fd 17 and stashed path as user:/proc/86723/fd/17
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/86723/fd/18
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/86723/fd/19
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/86723/fd/20
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/86723/fd/21
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 22 and stashed path as cgroup:/proc/86723/fd/22
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3549 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3549 - The binary "/usr/bin/newgidmap" does have the setuid bit set
DEBUG    conf - ../src/lxc/conf.c:lxc_map_ids:3634 - Functional newuidmap and newgidmap binary found
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits:3251 - Limits for the unified cgroup hierarchy have been setup
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3549 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3549 - The binary "/usr/bin/newgidmap" does have the setuid bit set
INFO     conf - ../src/lxc/conf.c:lxc_map_ids:3632 - Caller maps host root. Writing mapping directly
NOTICE   utils - ../src/lxc/utils.c:lxc_drop_groups:1367 - Dropped supplimentary groups
INFO     start - ../src/lxc/start.c:do_start:1104 - Unshared CLONE_NEWNET
NOTICE   utils - ../src/lxc/utils.c:lxc_drop_groups:1367 - Dropped supplimentary groups
NOTICE   utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1343 - Switched to gid 0
NOTICE   utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1352 - Switched to uid 0
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved net namespace via fd 5 and stashed path as net:/proc/86723/fd/5
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/lxcnetaddbr" for container "103", config section "net"
DEBUG    network - ../src/lxc/network.c:netdev_configure_server_veth:852 - Instantiated veth tunnel "veth103i0 <--> vethBwf6qv"
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_rootfs:1437 - Mounted rootfs "/var/lib/lxc/103/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs" with options "(null)"
INFO     conf - ../src/lxc/conf.c:setup_utsname:876 - Set hostname to "docker"
DEBUG    network - ../src/lxc/network.c:setup_hw_addr:3821 - Mac address "12:70:52:F9:8E:0B" on "eth0" has been setup
DEBUG    network - ../src/lxc/network.c:lxc_network_setup_in_child_namespaces_common:3962 - Network device "eth0" has been setup
INFO     network - ../src/lxc/network.c:lxc_setup_network_in_child_namespaces:4019 - Finished setting up network devices with caller assigned names
INFO     conf - ../src/lxc/conf.c:mount_autodev:1220 - Preparing "/dev"
INFO     conf - ../src/lxc/conf.c:mount_autodev:1281 - Prepared "/dev"
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:736 - Invalid argument - Tried to ensure procfs is unmounted
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:759 - Invalid argument - Tried to ensure sysfs is unmounted
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2445 - Remounting "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" to respect bind or remount options
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2464 - Flags for "/sys/fs/fuse/connections" were 4110, required extra flags are 14
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" with filesystem type "none"
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/proc" with filesystem type "proc"
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "sys" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/sys" with filesystem type "sysfs"
DEBUG    cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroupfs_mount:1909 - Mounted cgroup filesystem cgroup2 onto 19((null))
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.mount.hook" for container "103", config section "lxc"
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-autodev-hook" for container "103", config section "lxc"
INFO     conf - ../src/lxc/conf.c:lxc_fill_autodev:1318 - Populating "/dev"
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/full) to 18(full)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/null) to 18(null)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/random) to 18(random)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/tty) to 18(tty)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/urandom) to 18(urandom)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1402 - Bind mounted host device 16(dev/zero) to 18(zero)
INFO     conf - ../src/lxc/conf.c:lxc_fill_autodev:1406 - Populated "/dev"
INFO     conf - ../src/lxc/conf.c:lxc_transient_proc:3804 - Caller's PID is 1; /proc/self points to 1
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1780 - Attached detached devpts mount 20 to 18/pts
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1866 - Created "/dev/ptmx" file as bind mount target
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1873 - Bind mounted "/dev/pts/ptmx" to "/dev/ptmx"
DEBUG    conf - ../src/lxc/conf.c:lxc_allocate_ttys:1105 - Created tty with ptx fd 22 and pty fd 23 and index 1
DEBUG    conf - ../src/lxc/conf.c:lxc_allocate_ttys:1105 - Created tty with ptx fd 24 and pty fd 25 and index 2
INFO     conf - ../src/lxc/conf.c:lxc_allocate_ttys:1110 - Finished creating 2 tty devices
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_ttys:1066 - Bind mounted "pts/1" onto "tty1"
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_ttys:1066 - Bind mounted "pts/2" onto "tty2"
INFO     conf - ../src/lxc/conf.c:lxc_setup_ttys:1073 - Finished setting up 2 /dev/tty<N> device(s)
INFO     conf - ../src/lxc/conf.c:setup_personality:1946 - Set personality to "0lx0"
DEBUG    conf - ../src/lxc/conf.c:capabilities_deny:3232 - Capabilities have been setup
NOTICE   conf - ../src/lxc/conf.c:lxc_setup:4511 - The container "103" is set up
INFO     apparmor - ../src/lxc/lsm/apparmor.c:apparmor_process_label_set_at:1189 - Set AppArmor label to "lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:"
INFO     apparmor - ../src/lxc/lsm/apparmor.c:apparmor_process_label_set:1234 - Changed AppArmor profile to lxc-103_</var/lib/lxc>//&:lxc-103_<-var-lib-lxc>:
DEBUG    terminal - ../src/lxc/terminal.c:lxc_terminal_peer_default:696 - No such device - The process does not have a controlling terminal
NOTICE   start - ../src/lxc/start.c:start:2194 - Exec'ing "/sbin/init"
NOTICE   start - ../src/lxc/start.c:post_start:2205 - Started "/sbin/init" with pid "86926"
NOTICE   start - ../src/lxc/start.c:signal_handler:446 - Received 17 from pid 86922 instead of container init 86926
 

And lastly the system log:

Code:
root@Home:~# journalctl -b
Feb 21 04:09:24 Home kernel: Linux version 6.2.16-15-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMP>
Feb 21 04:09:24 Home kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.2.16-15-pve root=/dev/mapper/pve-root ro quiet
Feb 21 04:09:24 Home kernel: KERNEL supported cpus:
Feb 21 04:09:24 Home kernel:   Intel GenuineIntel
Feb 21 04:09:24 Home kernel:   AMD AuthenticAMD
Feb 21 04:09:24 Home kernel:   Hygon HygonGenuine
Feb 21 04:09:24 Home kernel:   Centaur CentaurHauls
Feb 21 04:09:24 Home kernel:   zhaoxin   Shanghai 
Feb 21 04:09:24 Home kernel: BIOS-provided physical RAM map:
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000057fff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x0000000000058000-0x0000000000058fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x0000000000059000-0x000000000009efff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x000000000009f000-0x00000000000fffff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000d618afff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000d618b000-0x00000000d618bfff] ACPI NVS
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000d618c000-0x00000000d61b5fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000d61b6000-0x00000000d620efff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000d620f000-0x00000000d6a0ffff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000d6a10000-0x00000000db4cafff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000db4cb000-0x00000000db6edfff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000db6ee000-0x00000000db726fff] ACPI data
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000db727000-0x00000000dbf01fff] ACPI NVS
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000dbf02000-0x00000000dc413fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000dc414000-0x00000000dc4fefff] type 20
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000dc4ff000-0x00000000dc4fffff] usable
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000dc500000-0x00000000dfffffff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 21 04:09:24 Home kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021effffff] usable
Feb 21 04:09:24 Home kernel: NX (Execute Disable) protection: active
Feb 21 04:09:24 Home kernel: efi: EFI v2.40 by American Megatrends
Feb 21 04:09:24 Home kernel: efi: ACPI=0xdb6f9000 ACPI 2.0=0xdb6f9000 SMBIOS=0xdc2da000 ESRT=0xdc24f018 SMBIOS 3.0=0xdc2d9000 MEMATTR=0xd93de018

The console for container 103 is just blank after it runs, like this:

[Attachment: Capture4.PNG]

Thanks again for your help!
 

Sorry for the bump! Just checking in to see if you or a colleague has had a chance to review the above?

I am wondering if I might be better off just deleting the Docker LXC and re-setting up the individual containers either in a VM or as LXCs. If I were to do that, how would I go about giving those VMs/LXCs access to the data currently on the external HDD, without overwriting or wiping anything?

Thanks again!
 
Hi,

so generally, the configuration and start log of the container look good; at least nothing jumped out at me there.
I see you used some third-party scripts to create this container - they may modify system files without much care. Did you run more of them, maybe one which actively modifies system files?

And lastly the system log:
Unfortunately, that's only a snippet of the beginning - journalctl -b normally opens the log in a so-called pager, i.e. you can scroll through it using your arrow keys. But - and this might be easier - you can dump the whole journal to a file using journalctl -b > journal.log and then upload it here as an attachment.

If I were to do that, how would I go about giving those VMs/LXCs access to the data currently on the external HDD, without overwriting or wiping anything?
First off, I'd suggest you create a full backup using the built-in functionality, so that you can restore the container to its current state at any time. This gives you a bit more room without risking losing any data.

You can mount the rootfs of the container using pct mount 103, which will then print the path to it.
For the mount point, you'll need to do that yourself. Seeing as Storage is a directory-based storage, the raw disk image should be located at /mnt/pve/Storage/images/103/vm-103-disk-0.raw and can be mounted using

Code:
mkdir /mnt/tmp
mount -o ro /mnt/pve/Storage/images/103/vm-103-disk-0.raw /mnt/tmp

All its contents are then simply available under /mnt/tmp/. The -o ro mounts it read-only, to ensure you do not alter anything by mistake.
Afterwards, unmount it using umount /mnt/tmp.
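
A small addition to the advice above: before mounting the raw image, you can check that it still contains an intact filesystem without touching it, using file(1), which reads the superblock. The sketch below builds a throwaway stand-in image at /tmp/vm-disk-demo.raw (a made-up path, so the commands run anywhere); on the real host you would simply point file at /mnt/pve/Storage/images/103/vm-103-disk-0.raw.

```shell
# Build a small stand-in raw image to demonstrate (the real image in this
# thread is /mnt/pve/Storage/images/103/vm-103-disk-0.raw).
truncate -s 16M /tmp/vm-disk-demo.raw
# -F is needed because the target is a regular file, not a block device
mkfs.ext4 -F -q /tmp/vm-disk-demo.raw
# file(1) identifies the filesystem inside the image without mounting it
file /tmp/vm-disk-demo.raw
```

If file reports only generic "data" instead of a filesystem, the image may be damaged, and you would want to work on a copy rather than mount the original.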
 
Thanks Christoph! I was able to access the files by mounting the rootfs, and then used SMB to migrate everything over to a dedicated LXC for each container. My main concern was that the HDD had been corrupted somehow, but it looks like we're all good with no data loss! It's going to take a little while to get everything set up again, but hopefully this will be a more stable long-term solution. Thanks again for all your assistance and patience!
 
