PVE 2.2 upgrade results in authorized_keys content being lost

hk@

Active Member
Feb 10, 2010
Vienna
kapper.net
Hi,
the latest update wipes the content of /root/.ssh/authorized_keys, replacing it with a symlink to /etc/pve/priv/authorized_keys.

As a result, some custom SSH access to the hosts is suddenly lost.

Therefore my question is: is there a correct way to set up root access via SSH on PVE hardware nodes that will not be wiped out?

Or even better: could we get the old behaviour back, meaning SSH keys are not dropped just because PVE doesn't know about them?

Kindest Regards,
hk.
 

hk@

Before the update there were additional keys in "/root/.ssh/authorized_keys"; now only the "root@servername" key is there and no other key.

regards
hk
 

dietmar

Proxmox Staff Member
Apr 28, 2005
Austria
www.proxmox.com
Can you please assemble a test case which we can reproduce here?
 

hk@

Hi,
it is really pretty simple: four days ago everything was fine, then the latest updates kicked in:
Updates for Proxmox VE 2.2

We just moved a bunch of packages to our stable repository, including a lot of important bug fixes and code cleanups.

Release Notes

- redhat-cluster-pve (3.1.93-2) unstable; urgency=low

  • correct init script dependency, so that open-iscsi is stopped after cman.

- qemu-server (2.0-68) unstable; urgency=low

  • vzdump: store drive in correct order (sort) to avoid confusion
  • fix allocation size in qmrestore
  • vzdump: restore sata drives correctly
  • remove hardcoded blowfish cipher

- pve-sheepdog (0.5.4-1) unstable; urgency=low

  • update to sheepdog 0.5.4

- pve-manager (2.2-30) unstable; urgency=low

  • add Norwegian (Bokmal and Nynorsk) translations.
  • fix bug #276: create root mount point
  • remove hardcoded blowfish cipher
  • fix RRD images caching problems: new create_rrd_graph returns rrd png data inline - no need to read data from file.

- pve-kernel-2.6.32 (2.6.32-82) unstable; urgency=low

  • update to vzkernel-2.6.32-042stab063.2.src.rpm
  • add fix for openvz cpt on NFS (fix bug #71)

- pve-cluster (1.0-32) unstable; urgency=low

  • do not pass undef to RRDs::graphv (use '' instead)
  • also initialize /root/.ssh with 'pvecm updatecerts'. That way we create a default /root/.ssh/config when updating this package.
  • create /root/.ssh/config and set default Ciphers
  • fix caching problems in create_rrd_graph (do not save any data to files)

- libpve-storage-perl (2.0-36) unstable; urgency=low

  • remove timeouts from 'qemu-img snapshot' commands.
  • remove hardcoded blowfish cipher
  • purge snapshots before delete volume

- libpve-common-perl (1.0-39) unstable; urgency=low

  • remove hardcoded blowfish cipher
  • fix bug #273: retry flock if it fails with EINTR



Afterwards the custom authorized SSH keys are gone: Proxmox removed everything except a single key that authorizes the localhost itself via the authorized_keys mechanism. The same is true for several other users (e.g. http://forum.proxmox.com/threads/11870-Updates-for-Proxmox-VE-2-2?p=64839#post64839 ).

It is as simple as upgrading a PVE system from four days ago to the current release, and one loses all authorized_keys entries (which is bad).

regards
hk


 

tom

Proxmox Staff Member
Aug 29, 2006
On a standard installation there is no "/root/.ssh/authorized_keys"; you created it manually.
As soon as you configure a cluster with "pvecm create ...", the symlink is created (pointing to /etc/pve/...) and the manually created authorized_keys is archived (authorized_keys.org).

(So your manual configuration will always "break" as soon as you create a cluster.)

With the last update we create the keys on single nodes too (for another reason), which is why you see this behavior now. Manual customization cannot be handled automatically in all cases. Your existing authorized_keys file is archived; see authorized_keys.org.

To fix it, you just need to copy the archived keys back to the right file/location.

The right way:
add your keys to /etc/pve/priv/authorized_keys (/root/.ssh/authorized_keys is just a symlink on Proxmox VE 2.x).
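To make the restore-and-merge step concrete, here is a sketch. The real paths are /root/.ssh/authorized_keys.org (the archived copy) and /etc/pve/priv/authorized_keys (the cluster-wide file); the sketch stands them in with temp files and fake key strings so it can be run safely anywhere:

```shell
# Simulated restore: "archived" stands in for /root/.ssh/authorized_keys.org,
# "cluster" for /etc/pve/priv/authorized_keys. All key material is fake.
tmp=$(mktemp -d)
printf 'ssh-rsa AAAAfake1 root@servername\n' > "$tmp/cluster"
printf 'ssh-rsa AAAAfake2 admin@laptop\nssh-rsa AAAAfake1 root@servername\n' > "$tmp/archived"

# Merge the archived keys back into the cluster-wide file, dropping duplicates:
cat "$tmp/archived" >> "$tmp/cluster"
sort -u -o "$tmp/cluster" "$tmp/cluster"

# Any further custom key goes into the cluster-wide file as well:
printf 'ssh-rsa AAAAfake3 backup@cron\n' >> "$tmp/cluster"

cat "$tmp/cluster"
```

On a real node, only do this while /etc/pve is mounted (i.e. the pve-cluster service is running), since /etc/pve/priv/authorized_keys lives on the cluster filesystem.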
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
I lost my entries too. /root/.ssh/authorized_keys used to be a regular file; yesterday I did the update and it became a symlink.

Edit: authorized_keys.org is a symlink too, not a backup of the original file:

lrwxrwxrwx 1 root root 29 Nov 16 22:04 authorized_keys -> /etc/pve/priv/authorized_keys
lrwxrwxrwx 1 root root 29 Nov 16 22:00 authorized_keys.org -> /etc/pve/priv/authorized_keys
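Given listings like the one above, it is worth checking whether authorized_keys.org is a regular file (a usable backup) or just another symlink before relying on it. A minimal sketch, simulated with a temp directory and fake files:

```shell
tmp=$(mktemp -d)
printf 'ssh-rsa AAAAfake admin@laptop\n' > "$tmp/target"   # a regular file
ln -s "$tmp/target" "$tmp/link"                            # a symlink to it

# test -L tells a symlinked "backup" apart from an independent copy:
for f in "$tmp/target" "$tmp/link"; do
    if [ -L "$f" ]; then
        echo "$f -> $(readlink "$f") (symlink, not a real backup)"
    else
        echo "$f (regular file)"
    fi
done
```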
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
17,113
513
133
Austria
www.proxmox.com
There are some recent changes in the authorized key file parser - maybe there is a bug there.

We will do more tests.
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
15,523
908
163
We just uploaded a fix for this to our stable repo.

How it works:

If there is already a /root/.ssh/authorized_keys, it is now merged and the right symlink is created.
Please test.

Code:
ls -alh

total 20K
drwx------ 2 root root 4.0K Nov 19 11:19 .
drwx------ 4 root root 4.0K Nov 19 11:15 ..
lrwxrwxrwx 1 root root   29 Nov 19 11:19 authorized_keys -> /etc/pve/priv/authorized_keys
...
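After updating, one way to check that the merge behaved as described is to confirm the symlink resolves to the cluster-wide file and that the old custom keys are reachable through it. The sketch below simulates the two locations (/root/.ssh and /etc/pve/priv) in a temp directory with fake key material:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/etc_pve_priv" "$tmp/root_ssh"
printf 'ssh-rsa AAAAfake1 root@servername\nssh-rsa AAAAfake2 admin@laptop\n' \
    > "$tmp/etc_pve_priv/authorized_keys"
ln -s "$tmp/etc_pve_priv/authorized_keys" "$tmp/root_ssh/authorized_keys"

# The symlink should resolve to the cluster-wide file...
readlink "$tmp/root_ssh/authorized_keys"
# ...and a previously custom key should still be reachable through it:
grep 'admin@laptop' "$tmp/root_ssh/authorized_keys"
```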
 

tmikaeld

Active Member
Jan 21, 2013
I have this problem right now, and I updated to 2.3 before creating the cluster.

Are the keys saved only on the master node?
On the node I can't write to either authorized_keys or /etc/pve/priv/authorized_keys.

When quorum doesn't start, that's caused by a missing cluster connection, right?
 

tmikaeld

Active Member
Jan 21, 2013
69
2
28
I got it working: I cleaned out the /etc/pve folder in rescue mode.

Now both servers are up and authenticated, but in the manager interface the other node shows red, and when I click it I am asked to log in but can never do so successfully.

Any advice as to why?
 

tmikaeld

Active Member
Jan 21, 2013
69
2
28
Solved with: pvecm updatecerts

And then: /etc/init.d/apache2 restart
 
