[SOLVED] SSH doesn't work as expected in LXC

simonliii · New Member · May 27, 2019
So I'm having a very frustrating experience with LXC containers at the moment. SSH doesn't start automatically on boot, and if I try to start it manually with "systemctl start/restart ssh", the terminal does nothing and I have to cancel with Ctrl+C.

I can, however, restart it with "service ssh restart", but it still acts strangely. It doesn't let users connect and gives the error "Access denied for 'user' by PAM account configuration", yet if I leave the container running for another 10 minutes it suddenly works?!

Oh, and I've reproduced this behaviour in both Debian 9 and CentOS 7 containers. Everything works as it should in my VMs.

I would be so grateful if someone could help me figure this out without having to wipe and reinstall Proxmox.
 
Try:

Code:
apt update && apt dist-upgrade -y
apt install openssh-client openssh-server

Then check "Port" and "PermitRootLogin yes" in /etc/ssh/sshd_config (e.g. nano /etc/ssh/sshd_config).
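A quick way to confirm which of those directives are actually in effect (assuming the stock Debian config path) is a sketch like:

```shell
# print the uncommented Port / PermitRootLogin / AllowUsers directives
grep -E '^(Port|PermitRootLogin|AllowUsers)' /etc/ssh/sshd_config
```

Commented-out lines (starting with #) are ignored by the anchored pattern, so only active settings are shown.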
 

Okay, so the first two commands did nothing ("0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded."), I'm using the correct port, PermitRootLogin is set to yes, and I have AllowUsers myuser at the bottom of the file.

It seems as if it's ignoring sshd_config at first or something, because it won't let me log in as either root or myuser, but if I let the machine sit idle for a couple of minutes it somehow starts working...
 
hi.

what do you see in /var/log/syslog and auth.log inside the CT?

can you post a container config? (`pct config CTID`)
 

Container Config:
Code:
arch: amd64
cmode: shell
cores: 3
hostname: Minecraft-LXC
memory: 2000
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=C2:60:8E:E3:F6:67,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: VMs_Containers:103/vm-103-disk-0.raw,size=8G
swap: 512
unprivileged: 1

A bit of an update as well: systemctl restart ssh now works, but it takes around 5 minutes to complete. systemctl start ssh still just hangs.

This is ssh status on boot:
[screenshot: systemctl status ssh output]
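When a systemctl start just hangs like that, a generic systemd diagnostic (not specific to this thread) is to look at the queued jobs to see what the start job is waiting on:

```shell
# show queued systemd jobs; "waiting" entries name the units blocking progress
systemctl list-jobs
# follow the service's journal while the start job is still pending
journalctl -u ssh -f
```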
 

Attachments

  • auth.log (62.6 KB)
  • syslog.txt (37.8 KB)
hi.

EDIT:
from syslog:
Code:
May 28 14:14:29 Minecraft-LXC systemd[1]: networking.service: Start operation timed out. Terminating.

which could be a reason. or maybe there's something wrong with your systemd version (some containers have trouble playing with it). how long have you had this container? maybe it was a privileged one before?


btw in auth.log i see a lot of failed attempts on passwords. are you sure you're not getting attacked?
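To confirm that networking.service really is the unit dragging boot out, a sketch of the usual systemd checks inside the container would be (unit name taken from the syslog line above):

```shell
# rank units by startup time; a slow networking.service should top the list
systemd-analyze blame | head -n 5
# inspect the unit's state and its recent log lines
systemctl status networking.service
journalctl -u networking.service --no-pager | tail -n 20
```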
 
Container was set up 3 days ago. The issue is present in all new containers I create, but not in one I've had for 2 weeks. The failed attempts are 100% from me.

Thanks, will investigate the networking.service issue.
 
Container was set up 3 days ago. Issue is present in all new containers I create
we changed the gui default for creating containers. it created privileged containers before, now you need to uncheck the unprivileged box if you want a privileged container.

maybe that's your problem?

you can try to convert your unprivileged container to privileged by:
container -> backup -> backup now

and then
choose your backup in the list -> restore -> uncheck unprivileged

EDIT:

also check out this thread[0]

[0]: https://forum.proxmox.com/threads/auto-start-sshd.38181/#post-227279
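The GUI steps above correspond roughly to this CLI sketch; the CTID (103) and storage name are taken from this thread, the archive path is a guess, and the exact `pct restore` options may vary by PVE version:

```shell
# back up the container (stop mode gives a consistent archive)
vzdump 103 --mode stop --storage local
# restore over the same CTID as a privileged container (unprivileged=0)
pct restore 103 /var/lib/vz/dump/vzdump-lxc-103-*.tar.* --unprivileged 0 --force
```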
 

That fixed it!

Copy/pasting fix here for anyone else that runs into this.

Code:
sudo nano /etc/systemd/system/network-online.target.wants/networking.service

Change
TimeoutStartSec=5min
to
TimeoutStartSec=1sec

That said, the setting in my older container is still at 5min and it works as it should. That one is also unprivileged. But I guess that doesn't matter as long as I don't find other things that don't work.
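Editing the unit file under /etc/systemd/system directly works, but a drop-in override is a common alternative that survives package upgrades; a sketch of the same change (using the 1sec value from the fix above, which assumes you don't rely on networking.service finishing interface configuration):

```shell
# create a drop-in that overrides only the start timeout
mkdir -p /etc/systemd/system/networking.service.d
cat > /etc/systemd/system/networking.service.d/override.conf <<'EOF'
[Service]
TimeoutStartSec=1sec
EOF
# make systemd pick up the drop-in
systemctl daemon-reload
```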
 
i'm glad it's fixed. you can mark the thread [SOLVED] by editing your first post, so that others know what to expect.
 
This issue happens on PVE 7.1-8 with the Ubuntu 20.04 LXC template.
There's no such file to edit as in the last comment :(
/etc/systemd/system/network-online.target.wants/networking.service
 
I also have the same problem. Tested on Debian 11 and Ubuntu 20.04 LXC containers.
I'm running PVE 7.1-8.
please describe... and post relevant configuration files (like the container config)
 
sshd does not start automatically; you have to start it manually.
works fine here with our debian 11 and ubuntu 20.04 templates on PVE 7.1-8... which template are you using?

can you post the container configuration? (pct config CTID)
 
Code:
systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enable>
     Active: inactive (dead)
       Docs: man:sshd(8)
             man:sshd_config(5)

template is ubuntu-20.04-standard 20.04-1_amd64.tar.gz

Code:
root@xxxxx:~# pct config 100
arch: amd64
cores: 1
features: nesting=1
hostname: test-vm
memory: 1024
net0: name=eth0,bridge=vmbr0,firewall=1,gw=xxxx,hwaddr=02:00:00:xx:xx:6b,ip=51.xx.xx.xx/32,type=veth
ostype: ubuntu
rootfs: local:100/vm-100-disk-0.raw,size=8G
swap: 1024
unprivileged: 1
 
can you check ss -antlp inside the container and look for port 22?

works fine here without a manual restart of the ssh service. i also see the "inactive (dead)" but the socket is actually running:
Code:
$ systemctl status ssh       
* ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:sshd(8)
             man:sshd_config(5)

$ systemctl status ssh.socket
* ssh.socket - OpenBSD Secure Shell server socket
     Loaded: loaded (/lib/systemd/system/ssh.socket; enabled; vendor preset: enabled)
     Active: active (listening) since Mon 2022-01-03 13:05:17 UTC; 2min 15s ago
     Listen: [::]:22 (Stream)

can also access it over the network normally...
 
Same here.
The problem for me is that all changes in the container's /etc/ssh/sshd_config are completely ignored unless I restart the SSH server by hand.

Code:
~#  pct config 108
arch: amd64
cores: 4
features: nesting=1
hostname: foundry
memory: 2048
nameserver: 2620:fe::fe
net0: name=eth1,bridge=vmbr1,firewall=1,gw=10.0.0.1,hwaddr=A6:aa:aa:aa:aa:aa,ip=10.0.0.108/8,type=veth
net1: name=eth0,bridge=vmbr0,firewall=1,gw6=2a01:aaa:aaa:aaa::2,hwaddr=22:aa:aa:aa:aa:65,ip6=2a01:aaa:aaa:aaa::108/64,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-108-disk-0,size=10G
swap: 0
unprivileged: 1
 
that all changes in the container's /etc/ssh/sshd_config are completely ignored unless I restart the SSH server by hand.
that's normal though? when you change the config file you'd need to restart/reload the service.
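As a concrete example of that reload step (validating first is a common precaution; the service may be named sshd instead of ssh on some distributions):

```shell
# check the edited config for syntax errors before applying it
sshd -t
# reload the running daemon so existing SSH sessions stay up
systemctl reload ssh
```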
 
