Can't ssh into privileged container

NinjaSeven

New Member
Jul 23, 2024
Hello Everyone,

I'm encountering an issue with SSH on one of my privileged containers.

Setup Details:

  • Proxmox VE Version: 8.2.4 (updated today to rule out already-fixed bugs)
  • Container Template: ubuntu-23.04-standard_23.04-1_amd64.tar.zst / also tried 24.04-2
  • Container Type: Privileged and Unprivileged

Issue Description:

I have multiple containers set up using the same template. SSH works flawlessly on the unprivileged containers, but I encounter issues with privileged containers.

When I try to connect, I get this error:
ssh: connect to host 10.1.8.4 port 22: Connection refused

When I try to start SSH on a privileged container, I get the following error after running /usr/sbin/sshd -T:

Missing privilege separation directory: /var/run/sshd

Manually creating the directory and setting the appropriate permissions fixes the problem temporarily:

mkdir -p /var/run/sshd
chmod 0755 /var/run/sshd
systemctl restart ssh

However, this fix only lasts until the next reboot. The directory /var/run/sshd is missing again after a reboot, causing SSH to fail to start.
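
For what it's worth, one workaround that should survive reboots (just a sketch, assuming the container runs systemd like the Ubuntu templates do, and noting that /var/run is normally a symlink to /run) is to declare /run/sshd in tmpfiles.d so it gets recreated on every boot:

Code:
# declare the directory so systemd recreates it at every boot
echo 'd /run/sshd 0755 root root -' > /etc/tmpfiles.d/sshd.conf
# apply it immediately without rebooting, then restart the ssh service
systemd-tmpfiles --create /etc/tmpfiles.d/sshd.conf
systemctl restart ssh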

I use Terraform to deploy the container:

Terraform:
resource "proxmox_lxc" "jellyfin" {
  target_node     = "pphost04"
  hostname        = "cJellyfin"
  ostemplate      = "templates:vztmpl/ubuntu-23.04-standard_23.04-1_amd64.tar.zst"
  unprivileged    = false # the setting that triggers the problem; with "true" everything works
  ostype          = "ubuntu"
  ssh_public_keys = file(var.pub_ssh_key)
  start           = true
  onboot          = true
  memory          = 2048
  password        = random_password.jellyfin_password.result
  swap            = 1024
  cores           = 2
  vmid            = 0
  rootfs {
    storage = "local-lvm"
    size    = "10G"
  }

  network {
    name   = "eth0"
    bridge = "vmbr0"
    gw     = var.gateway_ip
    ip     = var.jellyfin_ip
  }

  lifecycle {
    ignore_changes = [
      target_node,
    ]
  }
}

When I use the same code but set unprivileged to 'true', everything works. I'm not that familiar with LXC, so maybe there is some limitation in place that I don't understand. Any ideas why this is happening and how I can fix it?
 
I know it's an old post, but I have found the problem.
The container uses another UID for "root" (e.g. 100000), so all configuration files under /etc are owned by user 100000.

After switching to privileged, root's UID is 0 again, but the files keep their old ownership. A plethora of problems can arise from this (for example, creating a user and then trying to change its password results in an error).
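
A quick way to see this inside the container (just an illustrative check) is to look at the numeric owners of the config files; a leftover mapping shows up as 100000 instead of 0:

Code:
# print numeric owner:group for a few files under /etc
stat -c '%u:%g %n' /etc/passwd /etc/shadow /etc/ssh/sshd_config
# or list the whole directory with numeric IDs
ls -ln /etc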

After some research, it looks like the only clean way is to create a backup and restore it as privileged (rough commands below).
Since I had just installed it, I simply recreated the container.
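
For anyone who can't just throw the container away, the backup-and-restore route would look roughly like this on the Proxmox host (a sketch with made-up VMIDs, storage names and archive path; check the pct restore options on your version):

Code:
# back up the (stopped) container, then restore the archive as a new, privileged container
vzdump 104 --storage local --compress zstd --mode stop
pct restore 105 /var/lib/vz/dump/vzdump-lxc-104-<timestamp>.tar.zst --storage local-lvm --unprivileged 0
pct start 105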

Hope this helps others.