/etc/pve/lxc/<container number>.conf. Perhaps the name, a checksum, and/or a link to the container template that you are using would help too, since the one available from Proxmox worked for me, and for Fabian, which could also indicate that you're using a different version.

arch: amd64
cores: 1
hostname: test
memory: 512
net0: name=enp1s0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=A2:B0:F1:xx:xx:xx,ip=192.168.1.92/24,type=veth
ostype: debian
rootfs: local:500/vm-500-disk-0.raw,size=8G
swap: 512
unprivileged: 1
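(Not from the original post: the same configuration can also be printed on the Proxmox host with pct; the container ID 500 is only inferred from the rootfs line above.)

Bash:
# On the Proxmox host, dump the container configuration
pct config 500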
Aug 02 14:44:55 test CRON[100]: (root) CMD (./ssh.sh)
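(For context, and purely an assumption on my part: the cron entry above presumably runs a small workaround script that just restarts sshd so that new logins work again. A minimal sketch of such an ssh.sh could be:)

Bash:
#!/bin/sh
# Hypothetical workaround run from cron: restart sshd inside the
# container so that new SSH logins get a session/seat again.
systemctl restart ssh.service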
maybe OVH does something weird that is affected by
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Network
although I am not sure how restarting sshd would help in that case. It does seem to be something specific to your setup or to OVH, though, since nobody else seems to have that problem. I also have a VPS at OVH that was upgraded from 6 to 7 and is not affected by any such issue (but I don't use failover IPs there, and all access from the outside world is NATted).
does it also go away if you upgrade the container to >= 10.10 ?

1. I do not have an OVH VPS, I have an OVH dedicated server.
2. I do not have problems with failover IPs, only with the default server IP.
3. My network is NATted too.
This problem is "solved" if I use a recent Debian container version (10.10).
Link: https://uk.images.linuxcontainers.o...er/amd64/default/20210803_05:24/rootfs.tar.xz
does it also go away if you upgrade the container to >= 10.10 ?

@fabian I think not, but I will test it and let you know.
systemctl mask ssh.socket
systemctl mask sshd.socket
systemctl disable sshd
systemctl enable ssh
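(A side note that is not in the original reply: before rebooting you can check that the units ended up in the intended state, for example:)

Bash:
# ssh.socket should now report "masked" and ssh.service "enabled"
systemctl is-enabled ssh.socket
systemctl is-enabled ssh.service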
Thank you very much, that worked for me. I needed to change the sshd port, but after every reboot of the container the port switched back to 22. Also a nice benefit: the ssh fingerprint doesn't change anymore.

Try this:
Bash:
systemctl mask ssh.socket
systemctl mask sshd.socket
systemctl disable sshd
systemctl enable ssh
reboot
sed -i "s/#Port 22/Port 12345/" /etc/ssh/sshd_config
sed -i "s/ListenStream=22/ListenStream=12345/" /etc/systemd/system/sockets.target.wants/ssh.socket
thank you very much!!!

I have to resurrect this thread because I encountered the same problem today after I set up a brand new Proxmox 7 HV and a Debian 11 LXC container (official image) in it. I always change SSHD ports, which is why I had the same problem (I think the OP did the same, otherwise this error would not have come to light):
The problem is described correctly in previous posts in this thread. Auth.log and journalctl -b show that the system is unable to get a seat for the session the user is requesting.
The reason and the solution are the following:
1. The Debian 11 LXC template now ships sshd as a socket-activated service. This means that systemd only starts the ssh daemon when a user opens a connection to the ssh port and tries to log in. When no one is connected, sshd is not running.
2. If you want to change the SSHD port, simply changing it in sshd_config is not enough. You have to change the port in the systemd socket configuration file as well, otherwise the seat error will occur.
Code:
sed -i "s/#Port 22/Port 12345/" /etc/ssh/sshd_config
sed -i "s/ListenStream=22/ListenStream=12345/" /etc/systemd/system/sockets.target.wants/ssh.socket
Systemd now listens for incoming SSH connections on the same port as configured in /etc/ssh/sshd_config. You can now happily restart your LXC without having to restart sshd in it every time. Took me a while to figure this out. Hope I could save someone else's time by leaving this here.
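(Again not part of the original post, just a suggestion: if you change the port without rebooting, systemd has to be told about the edited socket unit, and you can then verify that the socket listens on the new port while sshd itself stays stopped until someone connects. Port 12345 is only the placeholder used above.)

Bash:
# Pick up the edited unit file and restart the listener
systemctl daemon-reload
systemctl restart ssh.socket

# The socket status should show the new port, e.g. "Listen: [::]:12345 (Stream)"
systemctl status ssh.socket

# sshd itself is typically "inactive" until the first connection arrives
systemctl is-active ssh.service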