Proxmox 7 - LXC SSH Root login not working

lps90

Member
May 21, 2020
I really don't know.
I can't figure out why I always need to run "service ssh restart" every time I restart my LXC container
before I can log in as root :(
 

datdenkikniet

New Member
Mar 28, 2020
I think it would help greatly if you could post the configuration of your container. Perhaps you're using some extra settings/configurations that are (indirectly?) causing this problem. It seems as if systemd is pulling some shenanigans that are fixed by restarting the ssh service.
 
Last edited:

lps90

Member
May 21, 2020
I'm using the default configuration, nothing more.
As I explained, I have Proxmox 6.4 and Proxmox 7 on two OVH machines with the same configuration.
I followed exactly the same steps when installing the LXC containers, so the problem is definitely
related to Proxmox 7, because on Proxmox 6.4 everything works without problems.

Anyway, can you tell me where I can find the LXC configs so I can post them here?
 

datdenkikniet

New Member
Mar 28, 2020
You can find the configuration in /etc/pve/lxc/<container number>.conf. Perhaps the name, a checksum, and/or a link to the container template that you are using would help too, since the one available from Proxmox worked for me and for Fabian, which could also indicate that you're using a different version.
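For what it's worth, that file is plain `key: value` text, so it can be inspected with standard tools. A small illustrative sketch (the sample content below is made up, not taken from a real container):

```shell
# illustrative stand-in for /etc/pve/lxc/<vmid>.conf (content made up)
conf=$(mktemp)
printf 'arch: amd64\ncores: 1\nhostname: test\nunprivileged: 1\n' > "$conf"

# pull a single field out of the key: value format, e.g. the hostname
awk -F': ' '$1=="hostname"{print $2}' "$conf"   # prints: test
rm -f "$conf"
```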
 
Last edited:

lps90

Member
May 21, 2020
Just a question: did you use an OVH machine to test it?
Did you test with Proxmox 7 (upgraded from Proxmox 6.4-13)?

I'm using Debian 10.7 from the Proxmox repository.
Link: http://download.proxmox.com/images/system/debian-10-standard_10.7-1_amd64.tar.gz

LXC config
Code:
arch: amd64
cores: 1
hostname: test
memory: 512
net0: name=enp1s0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=A2:B0:F1:xx:xx:xx,ip=192.168.1.92/24,type=veth
ostype: debian
rootfs: local:500/vm-500-disk-0.raw,size=8G
swap: 512
unprivileged: 1
 

datdenkikniet

New Member
Mar 28, 2020
OVH: no, I tested on two dedicated computers (but I am doubtful that this is what's causing the problem)
Proxmox 7 upgraded from 6.4.13: yes

I'm unsure what the issue could be, but it seems like it's related to your specific setup.

I noticed this line in the first startup log you posted (which does not appear to be from the default template AFAICT):
Code:
Aug 02 14:44:55 test CRON[100]: (root) CMD (./ssh.sh)
Can you tell us what this file does? Given that SSH is acting up, it may be relevant, even though it didn't show up in the second log you posted.
 
Last edited:

lps90

Member
May 21, 2020
That's just a file I created to automatically restart SSH every time the container starts (an attempt to work around the problem).
You can ignore that file; it's not being executed. It was only a test, and I've since removed it from cron.
 

gouthamravee

Member
May 16, 2019
Hey, this might be a long shot, but have you checked whether the LXC might be using the same IP as something else on your network? I was having weird SSH issues too, and it turned out to be because I had used the same IP on two different systems.
 

lps90

Member
May 21, 2020
No, that's impossible.
As I already mentioned, this problem only occurs when using the dedicated server's default IP.
With failover IPs it works like a charm.

Anyway, it works with another Debian template I tested.
Seems to be one more bug that shipped with Proxmox 7 ;)

I'm seriously considering going back to Proxmox 6.4-13: it is really stable, while Proxmox 7 has been
buggy and problematic by comparison.
 
Last edited:

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
maybe OVH does something weird that is affected by

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Network

although I am not sure how restarting sshd would help in that case. it is something specific to your setup or OVH though, since nobody else seems to have that problem. I also have a VPS on OVH that is upgraded from 6 to 7 and is not affected by any such issue (but I don't use failover IPs there, and all access from the outside world is NATted).
 

lps90

Member
May 21, 2020
maybe OVH does something weird that is affected by

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Network

although I am not sure how restarting sshd would help in that case. it is something specific to your setup or OVH though, since nobody else seems to have that problem. I also have a VPS on OVH that is upgraded from 6 to 7 and is not affected by any such issue (but I don't use failover IPs there, and all access from the outside world is NATted).

1. I do not have an OVH VPS, I have an OVH dedicated server.
2. I do not have problems with failover IPs, only with the default server IP.
3. My network is NATted too.

This problem is "solved" if I use a recent Debian container version (10.10).
Link: https://uk.images.linuxcontainers.o...er/amd64/default/20210803_05:24/rootfs.tar.xz
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
6,199
1,056
164

Death_Bandit

New Member
May 29, 2020
Try this:

Bash:
systemctl mask ssh.socket
systemctl mask sshd.socket

systemctl disable sshd
systemctl enable ssh

reboot
Thank you very much, that worked for me. I needed to change the sshd port, but after every reboot of the container the port switched back to 22. A nice side benefit: the SSH fingerprint doesn't change anymore.
 

norderstedt

Active Member
Nov 28, 2016
I have to resurrect this thread because I encountered the same problem today after I set up a brand-new Proxmox 7 hypervisor with a Debian 11 LXC container (official image) in it. I always change the sshd port, which is why I hit the same problem (I think the OP did the same, otherwise this error would not have come to light):

The problem is described correctly in previous posts in this thread. auth.log and journalctl -b show that the system is unable to get a seat for the session the user is requesting.

The reason and solution are the following:

1. The Debian 11 LXC template now ships sshd as a socket-activated service. This means that systemd only starts the SSH daemon when a user opens a connection to the SSH port and tries to log in. When nobody is connected, sshd is not running.

2. If you want to change the sshd port, changing it in sshd_config alone is not enough. You have to change the port in the systemd socket configuration file as well, otherwise the seat error will occur.

Code:
sed -i "s/#Port 22/Port 12345/" /etc/ssh/sshd_config
sed -i "s/ListenStream=22/ListenStream=12345/" /etc/systemd/system/sockets.target.wants/ssh.socket

systemd now listens for incoming SSH connections on the same port as configured in /etc/ssh/sshd_config. You can now happily restart your LXC without having to restart sshd in it every time :) Took me a while to figure this out; hope I could save someone else's time by leaving this here.
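As a sanity check, the two substitutions can be tried on throwaway copies before touching the real files. A minimal sketch (the file contents and port 12345 are illustrative):

```shell
# try the substitutions on throwaway copies first
dir=$(mktemp -d)
printf '#Port 22\n' > "$dir/sshd_config"
printf '[Socket]\nListenStream=22\n' > "$dir/ssh.socket"

# same substitutions as above, pointed at the copies
sed -i 's/#Port 22/Port 12345/' "$dir/sshd_config"
sed -i 's/ListenStream=22/ListenStream=12345/' "$dir/ssh.socket"

# both files should now reference the new port
grep -h 12345 "$dir"/*
rm -rf "$dir"
```

Once both lines show the new port, apply the same two sed commands to the real files, then reload systemd and restart ssh.socket.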
 
Last edited:

tauceti

New Member
May 11, 2021
I have to resurrect this thread because I encountered the same problem today after I set up a brand-new Proxmox 7 hypervisor with a Debian 11 LXC container (official image) in it. I always change the sshd port, which is why I hit the same problem (I think the OP did the same, otherwise this error would not have come to light):

The problem is described correctly in previous posts in this thread. auth.log and journalctl -b show that the system is unable to get a seat for the session the user is requesting.

The reason and solution are the following:

1. The Debian 11 LXC template now ships sshd as a socket-activated service. This means that systemd only starts the SSH daemon when a user opens a connection to the SSH port and tries to log in. When nobody is connected, sshd is not running.

2. If you want to change the sshd port, changing it in sshd_config alone is not enough. You have to change the port in the systemd socket configuration file as well, otherwise the seat error will occur.

Code:
sed -i "s/#Port 22/Port 12345/" /etc/ssh/sshd_config
sed -i "s/ListenStream=22/ListenStream=12345/" /etc/systemd/system/sockets.target.wants/ssh.socket

systemd now listens for incoming SSH connections on the same port as configured in /etc/ssh/sshd_config. You can now happily restart your LXC without having to restart sshd in it every time :) Took me a while to figure this out; hope I could save someone else's time by leaving this here.
Thank you very much!!!
 

jfp

New Member
Apr 28, 2021
Just adding to the thread, as this fixed my issue.
However, this is the wrong way to reconfigure systemd services: those changes will be overwritten by an apt upgrade.

You should use a systemd override file instead:

Code:
systemctl edit ssh.socket
or, equivalently, create the override by hand:
Code:
mkdir -p /etc/systemd/system/ssh.socket.d/
cat > /etc/systemd/system/ssh.socket.d/override.conf << EOF
[Socket]
ListenStream=12345
EOF
systemctl daemon-reload
systemctl restart ssh.socket

and the same for the sshd server config:
Code:
mkdir -p /etc/ssh/sshd_config.d/
cat > /etc/ssh/sshd_config.d/sshd-override.conf << EOF
Port 12345
EOF
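If in doubt, the drop-in layout can be rehearsed under a scratch root before touching /etc. A minimal sketch (the $root prefix and the port are illustrative):

```shell
# rehearse the drop-in layout under a scratch root before touching /etc
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system/ssh.socket.d" "$root/etc/ssh/sshd_config.d"
printf '[Socket]\nListenStream=12345\n' > "$root/etc/systemd/system/ssh.socket.d/override.conf"
printf 'Port 12345\n' > "$root/etc/ssh/sshd_config.d/sshd-override.conf"

# one match per override file is expected
grep -r 12345 "$root/etc"
rm -rf "$root"
```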
 
Last edited:
