auto start sshd

Having the same issue here. SSH won't start on a brand-new Ubuntu 20.04 LXC container. I also tried an Ubuntu 18.04 container and a privileged 20.04 container; all have the same issue.

Host info:
Code:
pve-manager/7.0-10/d2f465d3 (running kernel: 5.11.22-3-pve)

I can attach to the LXC container from the host and start SSH manually. If I enable the service (systemctl enable ssh), it appears to work, but after a reboot of the LXC, SSH is dead.
Code:
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:sshd(8)
             man:sshd_config(5)
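
For reference, this is how I poke at it from the host (103 is just my CTID here, adjust to yours):

Code:
pct enter 103            # attach to the container from the PVE host
systemctl status ssh     # -> inactive (dead) after the reboot
ss -tlnp | grep ':22'    # nothing is listening on port 22
systemctl start ssh      # a manual start works fine until the next reboot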

Exactly, mate ;)
But if you install an LXC image from this link, for example, it will work. Test it, just so I can have your opinion:
https://uk.images.linuxcontainers.org/images/

(At least I tested the latest Debian 10 build from that link and it worked with no problems!)
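
For example, to try one of those images (a sketch; <build-date> stands for whatever build is current in the listing, and CTID 104 plus the options are just examples):

Code:
# download a Debian 10 (buster) rootfs build into PVE's template cache
wget -P /var/lib/vz/template/cache/ \
    https://uk.images.linuxcontainers.org/images/debian/buster/amd64/default/<build-date>/rootfs.tar.xz
# create a test container from the downloaded tarball
pct create 104 local:vztmpl/rootfs.tar.xz --hostname debtest \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 104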
 
Same problem here. I tried with:
  • ubuntu-21.04-standard_21.04-1_amd64.tar.gz
  • ubuntu-20.04-standard_20.04-1_amd64.tar.gz
  • ubuntu-20.10-standard_21.10-1_amd64.tar.gz
All with the same problem. After the reboot, the ssh service is dead:

Code:
root:~# systemctl status ssh
* ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:sshd(8)
             man:sshd_config(5)

The interesting thing is that containers created under Proxmox 6 keep running fine after the upgrade to Proxmox 7; only newly created containers have this problem.

Here is the code to create the container:

Bash:
pct create 103 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
    --unprivileged 1  \
    --features keyctl=1,nesting=1 \
    --hostname test1 \
    --cores 2 \
    --memory 8192 \
    --swap 8192 \
    --storage local-lvm \
    --rootfs 50  \
    --password \
    --net0 name=eth0,bridge=vmbr0,tag=9,ip=dhcp,type=veth \
    --onboot true \
    --description "ubuntu 20.04 to host docker servers"

pct start 103

Even removing
Code:
    --unprivileged 1  \
    --features keyctl=1,nesting=1 \

did not help.
 
So it seems that instead of having the sshd service running all the time, you can use a systemd socket that waits for connections and spawns an sshd session on demand. This saves resources. More reading about this: http://0pointer.de/blog/projects/socket-activated-containers.html
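
For context, the classic socket-activated setup from that blog post looks roughly like this (a sketch, not necessarily the exact unit your distro ships):

Code:
# ssh.socket (sketch): systemd owns port 22 and spawns sshd on demand
[Socket]
ListenStream=22
Accept=yes             # one ssh@.service instance (sshd -i) per incoming connection

[Install]
WantedBy=sockets.target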

The mask command creates a symlink /etc/systemd/system/ssh.socket pointing to /dev/null, overriding the socket unit so it can never be started. Another way, which seems a bit cleaner to me, is to disable the socket and enable the service:

Code:
systemctl disable ssh.socket
systemctl enable sshd.service
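
Either way, you can check from the host that the change survives a restart (103 is an example CTID):

Code:
pct reboot 103
pct exec 103 -- systemctl is-active ssh.service   # should print "active"
pct exec 103 -- systemctl is-enabled ssh.socket   # should print "disabled"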
 
Ahhhh, this makes so much sense now. Heh. I am a dumbass. I switched my SSH to a different port and that's when my problems began. Of course, now I know why.

Thanks for that! I shall investigate how to switch the systemd socket to a different port instead, if that's possible.
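
If it helps: a drop-in override on the socket should do it (a sketch; 2222 is just an example port):

Code:
# systemctl edit ssh.socket
# -> creates /etc/systemd/system/ssh.socket.d/override.conf with:
[Socket]
ListenStream=        # an empty assignment clears the default port 22
ListenStream=2222    # listen on the new port instead

# then: systemctl daemon-reload && systemctl restart ssh.socket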
 
This is not working for me.
When bumping such old threads, it would be good to add at least some info: your PVE version, which distro and version run in the container, whether openssh is even installed...
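
For example, the output of these (run on the host; <ctid> is a placeholder for your container ID):

Code:
pveversion                                   # PVE version on the host
pct exec <ctid> -- cat /etc/os-release       # distro and version inside the container
pct exec <ctid> -- dpkg -l openssh-server    # is openssh-server even installed?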
 
And this applies to ALL my containers after rebooting my PVE node!

I simply can't access any of my containers remotely ...

Code:
channel 0: open failed: connect failed: Connection refused
stdio forwarding failed
kex_exchange_identification: Connection closed by remote host

... because the sshd service is broken in all my containers!
:mad:
 
Magically:

Code:
systemctl disable ssh.socket
systemctl enable ssh.service
reboot

does the job. This is not professional at all ...
 
PVE 7.2-3
LXC Ubuntu Server 20.04.4 LTS

Same problem: the SSH service does not start after a reboot/relaunch.

This solution works for me:

Code:
systemctl disable ssh.socket
systemctl enable ssh.service
 
