Hi all,
I have been going at this for days, and I have used Google AI and ChatGPT across multiple conversations trying to figure this issue out.
Problem:
I created a VM template so that I can easily create new VMs with a cloud-init config. As a last-ditch effort I followed this video exactly (https://www.youtube.com/watch?v=7VlNl6Z-qao). The VMs create absolutely fine, and in the Proxmox UI console I can get to the newly created VM and log in with my credentials, happy days. However, I cannot get SSH with keys to work at all. This is quite important, as I want to automate things and I need passwordless SSH with keys to work.
Some details about my setup:
Proxmox VE 9.0.11 - I have not really changed anything since setup. I did the upgrade from v8 using the upgrade guide, which went smoothly too, and I have had no issues with the host at all.
Here is my template file as it is right now:
Bash:
# Template Config
root@proxmox:~# qm config 9000
agent: 1
bios: ovmf
boot: order=scsi0
cipassword: **********
ciuser: ubuntu
cores: 2
cpu: x86-64-v2-AES
efidisk0: local-lvm:base-9000-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ipconfig0: ip=192.168.200.5/24,gw=192.168.200.1
machine: q35
memory: 2048
meta: creation-qemu=10.0.2,ctime=1761126002
name: ubuntu-cloudinit
net0: virtio=BC:24:11:C8:9F:77,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:base-9000-disk-1,discard=on,iothread=1,size=32G,ssd=1
scsi1: local-lvm:vm-9000-cloudinit,media=cdrom # I had this on ide1 and changed it after some research (made no difference, though)
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=0c9705c7-9ed5-4b10-a3ae-55a1877bbfea
sockets: 1
sshkeys: ssh-ed25519%20AAAAC3NzaC1lZDI1NTE5AAAAIPsYbZdYSn%2BWG8cte%2BLiyNq1Juje%2FFJthuutnx4kMELq%20ubuntu
template: 1
vga: serial0
vmgenid: ba9073ef-ca5b-4c50-9199-49bb3b494381
# Cloud Init User Config
root@proxmox:~# qm cloudinit dump 9000 user
#cloud-config
hostname: ubuntu-cloudinit
manage_etc_hosts: true
fqdn: ubuntu-cloudinit
user: ubuntu
password: <my very strong password>
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNLt12WAA3BoIHydbD672tHVElj+3GgJ1wZuQzEH/BY ubuntu
chpasswd:
expire: False
users:
- default
package_upgrade: true
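One sanity check worth doing: the `sshkeys:` line in the `qm config` output is URL-encoded, so decoding it confirms exactly which key Proxmox hands to cloud-init. A minimal sketch, assuming `python3` is available on the node (replace 9000 with your VMID):

```shell
# Decode the URL-encoded sshkeys value from the template config and
# compare it with the public key file you uploaded.
qm config 9000 | sed -n 's/^sshkeys: //p' \
  | python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.stdin.read()))'
```

The decoded line should match your `id_ed25519.pub` byte for byte (type, base64 blob, comment).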
I generated that SSH key on Windows using:

Bash:
ssh-keygen -t ed25519 -C "ubuntu"

I then filled in the cloud-init section on the template, uploading the public key file to the SSH public keys setting, and finally hit Clone to create the VM. The template above is based on the current Ubuntu 24.04 cloud image (.img), renamed to qcow2 format and resized as in the video. Previously I used the .img directly without renaming and resizing it.

By default, when the VM is created it has ssh.socket listening and ssh.service disabled; my understanding is that when the socket detects a connection, it starts ssh.service. However, when I try to connect this does not happen. It's as though the VM is not even receiving traffic.
Bash:
ubuntu@docker-test:~$ systemctl status ssh
○ ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/usr/lib/systemd/system/ssh.service; disabled; preset: ena>
Active: inactive (dead)
TriggeredBy: ● ssh.socket
Docs: man:sshd(8)
man:sshd_config(5)
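For reference, socket activation can be checked directly in the guest via the Proxmox console. A diagnostic sketch, assuming the stock Ubuntu 24.04 systemd units:

```shell
# Run inside the guest via the Proxmox console:
systemctl status ssh.socket                        # should be active (listening)
sudo ss -tlnp | grep ':22'                         # is anything bound to port 22?
journalctl -u ssh.socket -u ssh -n 50 --no-pager   # recent activation attempts / errors
```

If `ss` shows nothing listening on 22, incoming connections would be refused before keys even matter.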
Here is the VM config that was created
Bash:
root@proxmox:~# qm config 202
agent: 1
bios: ovmf
boot: order=scsi0
cipassword: **********
ciuser: ubuntu
cores: 2
cpu: x86-64-v2-AES
efidisk0: local-lvm:vm-202-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ipconfig0: ip=192.168.200.5/24,gw=192.168.200.1
machine: q35
memory: 2048
meta: creation-qemu=10.0.2,ctime=1761126002
name: docker-test
net0: virtio=BC:24:11:EA:01:C6,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-202-disk-1,discard=on,iothread=1,size=32G,ssd=1
scsi1: local-lvm:vm-202-cloudinit,media=cdrom,size=4M
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=0f71bec2-26c1-4e6c-b5c5-dfa27922680a
sockets: 1
sshkeys: ssh-ed25519%20AAAAC3NzaC1lZDI1NTE5AAAAIBNLt12WAA3BoIHydbD672tHVElj%2B3GgJ1wZuQzEH%2FBY%20ubuntu%0A
vga: serial0
vmgenid: a452e712-e356-4ce9-b310-c9f98f42db61
# The user config
root@proxmox:~# qm cloudinit dump 202 user
#cloud-config
hostname: docker-test
manage_etc_hosts: true
fqdn: docker-test
user: ubuntu
password: <my super secret password>
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNLt12WAA3BoIHydbD672tHVElj+3GgJ1wZuQzEH/BY ubuntu
chpasswd:
expire: False
users:
- default
package_upgrade: true
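It may also help to confirm from the console what cloud-init actually wrote into the guest. A sketch, assuming the default `ubuntu` user from the config above:

```shell
# Run inside the guest via the Proxmox console:
cloud-init status --long                     # did cloud-init finish without errors?
sudo cat /home/ubuntu/.ssh/authorized_keys   # is the key actually installed?
ls -ld /home/ubuntu/.ssh                     # should be mode 700, owned by ubuntu
```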
To rule out Windows OpenSSH as the issue, I installed PuTTY, converted the private key to a .ppk, and tried to connect via PuTTY instead; I get the same result.
This is the attempt from Windows OpenSSH:
Code:
➜ ssh ubuntu@192.168.200.5
ubuntu@192.168.200.5: Permission denied (publickey).
➜ ssh ubuntu@192.168.200.5 -i .\.ssh\id_ed25519
ubuntu@192.168.200.5: Permission denied (publickey).
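One way to narrow this down: "Permission denied (publickey)" means sshd did answer and rejected the key, so comparing fingerprints on both ends can show whether the offered key and the installed key actually match. A sketch (paths are the ones from this thread, adjust as needed):

```shell
# On the Windows client - fingerprint of the key you are offering:
ssh-keygen -lf .\.ssh\id_ed25519.pub

# Inside the guest (Proxmox console) - fingerprints of the installed keys:
ssh-keygen -lf /home/ubuntu/.ssh/authorized_keys
```

If the two fingerprints differ, the guest got a different public key than the one the client is using.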
I then tried some additional debugging: I set the log level in /etc/ssh/sshd_config to VERBOSE, stopped ssh.socket, and ran a debug sshd instance with sudo /usr/sbin/sshd -D -ddd -p 2222. Initially this failed to start because the VM was missing /run/sshd, so I created that directory with the permissions the daemon needed. I then tried connecting again from Windows and got:
Code:
➜ ssh ubuntu@192.168.200.5 -vvv -p 2222
OpenSSH_for_Windows_9.5p2, LibreSSL 3.8.2
debug3: Failed to open file:C:/Users/shane/.ssh/config error:2
debug3: Failed to open file:C:/ProgramData/ssh/ssh_config error:2
debug2: resolve_canonicalize: hostname 192.168.200.5 is address
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> 'C:\\Users\\shane/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> 'C:\\Users\\shane/.ssh/known_hosts2'
debug3: ssh_connect_direct: entering
debug1: Connecting to 192.168.200.5 [192.168.200.5] port 2222.
debug3: finish_connect - ERROR: async io completed with error: 10061, io:000001E3AB2BAB30
debug1: connect to address 192.168.200.5 port 2222: Connection refused
ssh: connect to host 192.168.200.5 port 2222: Connection refused
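Since net0 has firewall=1 while the node firewall is enabled, it might also be worth checking what the Proxmox firewall is actually doing for this VM. A sketch (run on the node):

```shell
# On the Proxmox node:
pve-firewall status    # overall firewall state
pve-firewall compile   # dump the generated ruleset to inspect what applies to the VM
```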
Like I said, I have been struggling with this for days. I have looked at multiple guides on how people set this up, and for them SSH with keys just works; for me it does not, and I'm completely stuck now. I had the same issue with Terraform, which is why I went back to basics in the Proxmox UI, trying to create things manually and get that working before going back to Terraform to automate. It is super frustrating, because in the video I linked at the start it just works for the author, without messing with vendor files or custom SSH configuration on the cloud image.
Some additional details that may help:
The Proxmox node has its firewall on, with no firewall rules defined
The datacenter firewall is off
The created VM instance's firewall is off
vmbr0 is VLAN-aware; there is only one NIC in the machine, so nothing fancy
Any help would be much appreciated
