[SOLVED] ssh service not started in container upon reboot...bullseye template

tauceti

Hi guys,

Today I installed the latest Proxmox 7.0 image and then created a container from the Debian 11 bullseye template. When the container starts, the ssh server is enabled but not started. I can start the service manually, but after a reboot it is not started again. I don't get it! The container is nested, the firewall is off, and the network is configured either with a fixed IP or via DHCP; neither made a difference.
I never had this issue with the buster template.

Thanks and best regards
tauceti
 
Yes, that is what I did. It doesn't start after a reboot. I have changed the port from 22 to another one, but it only works when I start ssh manually via systemctl after the container reboots.
 
root@temp:~# systemctl status sshd
* ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:sshd(8)
man:sshd_config(5)
root@temp:~# systemctl start ssh
root@temp:~# systemctl status sshd
* ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-09-13 18:48:33 UTC; 1s ago
Docs: man:sshd(8)
man:sshd_config(5)
Process: 314 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 315 (sshd)
Tasks: 1 (limit: 18433)
Memory: 2.2M
CPU: 12ms
CGroup: /system.slice/ssh.service
`-315 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

Sep 13 18:48:33 temp systemd[1]: Starting OpenBSD Secure Shell server...
Sep 13 18:48:33 temp sshd[315]: Server listening on 0.0.0.0 port 2222.
Sep 13 18:48:33 temp systemd[1]: Started OpenBSD Secure Shell server.
Sep 13 18:48:33 temp sshd[315]: Server listening on :: port 2222.
 
Nope, even with port 22 the service is not running...
What I get in journalctl after restarting the container is strange though:

...
Sep 13 18:48:12 temp systemd[1]: Starting System Logging Service...
Sep 13 18:48:12 temp systemd[1]: systemd-logind.service: Attaching egress BPF program to cgroup /sys/fs/cgroup/system.slice/systemd-logind.service >
Sep 13 18:48:12 temp systemd[1]: Starting User Login Management...
Sep 13 18:48:12 temp systemd-networkd[71]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
Sep 13 18:48:12 temp cron[88]: (CRON) INFO (pidfile fd = 3)
Sep 13 18:48:12 temp cron[88]: (CRON) INFO (Running @reboot jobs)
Sep 13 18:48:12 temp systemd-networkd[71]: Enumeration completed
Sep 13 18:48:12 temp systemd[1]: Started Network Service.
Sep 13 18:48:12 temp systemd[1]: Starting Wait for Network to be Configured...
Sep 13 18:48:12 temp systemd[1]: Starting Network Name Resolution...
Sep 13 18:48:12 temp systemd[1]: Started System Logging Service.
Sep 13 18:48:12 temp rsyslogd[91]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.2102.0]
Sep 13 18:48:12 temp rsyslogd[91]: imklog: cannot open kernel log (/proc/kmsg): Permission denied.
Sep 13 18:48:12 temp rsyslogd[91]: activation of module imklog failed [v8.2102.0 try https://www.rsyslog.com/e/2145 ]

...
Sep 13 18:48:20 temp login[146]: pam_unix(login:session): session opened for user root(uid=0) by LOGIN(uid=0)
Sep 13 18:48:20 temp login[146]: pam_systemd(login:session): Failed to create session: Seat has no VTs but VT number not 0
...
Are these errors normal?
 
It is strange though:
the host node on IP1 has ssh running on port 22 --> works.
If I reboot the container, the ssh service is inactive inside the container, but I can still connect on port 22 using the container's IP2...
Is this because of nesting?
I want each container to have its own ssh port...


If I start the ssh service manually in the container, I can only connect via the configured port 2222, as it should be! But as said above, the ssh service is not started after a reboot of the container.
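For anyone diagnosing the same symptom: one way to check whether a systemd socket unit, rather than sshd itself, is holding port 22 open is to inspect the socket unit directly (a sketch; the `ssh.socket` unit name is what the Debian bullseye openssh-server package ships):

```shell
# Show whether the ssh socket unit is active and enabled
systemctl status ssh.socket

# List active socket units together with the services they would activate;
# if ssh.socket shows up bound to port 22, systemd (not sshd) owns that port
systemctl list-sockets --all

# Show the address/port the socket unit is configured to listen on
systemctl show ssh.socket -p Listen
```

If the socket unit is listening on 22 while sshd_config says 2222, that exactly matches the behavior above: port 22 answers after a reboot, 2222 only after starting the service by hand.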

Seems to be the same issue as here:
https://forum.proxmox.com/threads/auto-start-sshd.38181/#post-408670
but it was never resolved there.

OMG, I got it:
https://forum.proxmox.com/threads/proxmox-7-lxc-ssh-root-login-not-working.93752/page-2#post-415118
Thanks guys... well, that was really hard... it was because of the ssh socket in the new LXC template, where I forgot to configure the port as well:
/etc/systemd/system/sockets.target.wants/ssh.socket

Damn, how should you know that :(
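In case it helps others: with socket activation, the socket unit (not the Port line in /etc/ssh/sshd_config) decides which port is actually opened at boot. Rather than editing the shipped unit file, a drop-in override keeps both in sync (a sketch; 2222 is the example port from this thread, and the drop-in filename is my own choice):

```shell
# Create a drop-in override directory for the socket unit
mkdir -p /etc/systemd/system/ssh.socket.d

# Clear the default ListenStream first (empty assignment resets the list),
# then set the custom port
cat > /etc/systemd/system/ssh.socket.d/port.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=2222
EOF

# Reload unit files and restart the socket so the new port takes effect
systemctl daemon-reload
systemctl restart ssh.socket
```

The empty `ListenStream=` line matters: without it, systemd would listen on both 22 and 2222.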
 
I didn't know that either. Been using Debian for 15 years. I'm sure this is better for some use-case but it breaks old configurations, which is one thing that annoys me about the systemd people. They have some good ideas but they also don't seem to care that much about keeping things working through upgrades.

I ran into a similar problem with sssd. It kept throwing errors because of the same socket activation stuff. The errors were a little more explicit though.
 
I also had this behavior.
The problem in my case was that ssh.socket got enabled somehow.
When disabling ssh.socket, ssh.service does start normally on boot.
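If you don't need socket activation at all, that fix can be applied like this (a sketch of the approach described above): disable the socket unit and let the classic service listen on whatever sshd_config says.

```shell
# Stop and disable socket activation for ssh
systemctl disable --now ssh.socket

# Enable and start the classic service; it reads the Port setting
# from /etc/ssh/sshd_config itself at startup
systemctl enable --now ssh.service
```

After the next container reboot, sshd should come up on its own and on the configured port.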
 