[SOLVED] Deleting user 'delete user failed: cannot update tfa config'

chrispage1

Member
Sep 1, 2021
I've set up a new user from Datacenter > Permissions > Users.

However, on creating the user and attempting to set its password, I get change password failed: user 'Chris' does not exist (500).

If I then attempt to delete the user I get the below:

Code:
delete user failed: cannot update tfa config, following nodes are not up to date:
cluster node 'pve02' is too old, did not broadcast its version info
cluster node 'pve03' is too old, did not broadcast its version info
cluster node 'pve01' is too old, did not broadcast its version info (500)

This is a brand new cluster - I really can't see what might be causing this... any ideas?

Thanks,
Chris.
 
hi,

are all the nodes up to date? check and compare pveversion -v output between the nodes, and upgrade the older ones.
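
for example, something along these lines (a rough sketch; the node names pve01/pve02/pve03 are taken from your error message, adjust as needed):

Code:
# collect the pveversion output from every node and compare it
for node in pve01 pve02 pve03; do
    ssh root@$node pveversion -v > /tmp/pveversion-$node.txt
done
diff /tmp/pveversion-pve01.txt /tmp/pveversion-pve02.txt
diff /tmp/pveversion-pve01.txt /tmp/pveversion-pve03.txt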
 
Hi oguz, thanks for your message. Everything is freshly installed, up to date and quorum is happy. I'm really not sure what the problem is here...

pve01:

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-2

pve02:

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-2

pve03:

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-2
 
which repository are you using?

can you check if apt update && apt list --upgradable shows you any upgrades?
 
Hi oguz, I'm using the enterprise repository. There were some updates available so I've updated and rebooted all three nodes.

I can now add a user no problem (or at least it would seem)


However, when I go to set the password, I get change password failed: user 'chris' does not exist (500)

Really not sure why I'm getting this?
 
However, when I go to set the password, I get change password failed: user 'chris' does not exist (500)

Really not sure why I'm getting this?
did you add a user chris on the node? beware that for users in pam realm you will have to manually create those users on the server (e.g. with the useradd command)
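
for example, something like this on the node (a minimal sketch; the username chris is just taken from your post, adjust as needed):

Code:
# create the matching system (PAM) user on the node
useradd -m chris
# PAM realm passwords are the system passwords, so you can set it here...
passwd chris
# ...afterwards, changing the password from the GUI should also work for chris@pam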
 
Hi oguz.... that would explain it! I assumed the GUI added them all. Sorry for the confusion and thanks for your help!

Chris.
 
glad to be of help! please mark the thread [SOLVED] by editing the title :)
 
Hey,

0.: Is it okay to "revive" a thread that is already marked as solved?

Because I think the other issue that popped up, being unable to delete a user, is still present.
Yesterday we had the same issue: while creating accounts I misclicked and created an account for a colleague as "foo@pam", and was unable to delete it afterwards, getting the already cited "cannot update tfa config (...)" error.

I manually changed the account to "foo@pve" in /etc/pve/user.cfg and we were able to set a password.
Afterwards we tried to set up 2FA for this and another account (which was created directly as bar@pve), but that also failed with a "cannot update tfa config (on all nodes)" error for both accounts (and also for the built-in root@pam).

To rule out that I fooed something up when manually editing user.cfg, I set up a new Proxmox instance (in a VM), where deleting users and setting up 2FA initially worked, but no longer worked once I initialized this single node to form a cluster; then the "cannot update tfa config" error also showed up.

Is there an underlying issue when checking this tfa-config stuff?

I already dived into the code a bit: the error is raised from PVE/AccessControl.pm when trying to get "version-info" out of the kv storage in the assert_new_tfa_config_available() function, but I have not yet figured out how to access the kv storage to look at its contents, or where the nodes are supposed to fill in that info.

Regards,
Matthias / RBG
 
I already dived into the code a bit: the error is raised from PVE/AccessControl.pm when trying to get "version-info" out of the kv storage in the assert_new_tfa_config_available() function, but I have not yet figured out how to access the kv storage to look at its contents, or where the nodes are supposed to fill in that info.
you need to upgrade your nodes' software versions, the error message says it all :)

also, generally it's not recommended to manually edit the user.cfg file (not without a backup at least!)
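
if you do need to fix something, a quick sketch (the usernames are just the examples from your post):

Code:
# keep a backup of the cluster-wide user config before touching it
cp /etc/pve/user.cfg /root/user.cfg.bak

# the supported way to manage users is pveum, e.g. removing the
# misclicked user and recreating it in the pve realm:
pveum user delete foo@pam
pveum user add foo@pve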
 
you need to upgrade your nodes' software versions, the error message says it all :)

also, generally it's not recommended to manually edit the user.cfg file (not without a backup at least!)
That cluster is not yet in production; it would be a minor nuisance to reinstall, but not a problem : )

I have no subscriptions for those nodes yet (and the other cluster, which has subscriptions, is still at PVE 6 : / - there, user deletion works just fine), but the no-subscription packages should be up to date - that's one of the first things I checked yesterday.

Regards,
Matthias / RBG
 
That cluster is not yet in production; it would be a minor nuisance to reinstall, but not a problem : )
you don't need to reinstall, you just need to upgrade packages :) make sure the pveversion -v output matches for all the nodes in that cluster.

and the other cluster, which has subscriptions, is still at PVE 6 : / - there, user deletion works just fine
yes the backend changed a bit with the new version, that's why you get the warning.

but the no-subscription packages should be up to date - that's one of the first things I checked yesterday.
make sure to check apt update && apt list --upgradable on all the nodes :)
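
for example (a rough sketch; replace the node names with yours):

Code:
# check for pending upgrades on every node in the cluster
for node in pve01 pve02 pve03; do
    echo "== $node =="
    ssh root@$node 'apt update >/dev/null && apt list --upgradable'
done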
 
you don't need to reinstall, you just need to upgrade packages :) make sure the pveversion -v output matches for all the nodes in that cluster.
Matches between the nodes, output from one node below:
Code:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
yes the backend changed a bit with the new version, that's why you get the warning.


make sure to check apt update && apt list --upgradable on all the nodes :)
Code:
apt-get update
Hit:1 http://*internal_mirror*/debian/debian-security bullseye-security InRelease
Hit:2 http://security.debian.org bullseye-security InRelease                                                                                   
Hit:3 http://download.proxmox.com/debian/ceph-pacific bullseye InRelease                                                                       
Hit:4 http://download.proxmox.com/debian bullseye InRelease                                                                                     
Hit:5 http://*internal_mirror*/debian/debian bullseye-updates InRelease                                 
Hit:6 http://*internal_mirror*/debian/debian bullseye InRelease                                         
Hit:7 http://*internal_mirror*/debian/debian bullseye-backports InRelease                               
Hit:8 http://ftp.de.debian.org/debian bullseye InRelease                                                                                       
Hit:9 http://ftp.de.debian.org/debian bullseye-updates InRelease
Reading package lists... Done
root@*pve-host*:~# apt list --upgradeable
Listing... Done
It should be up to date, as our internal mirror is usually just hours behind - and it only serves the standard Debian packages, not the Proxmox ones.

Regards,
Matthias / RBG
 
Should be up to date, as our internal mirror is usually just hours behind - and serves only the standard debian stuff, not the proxmox packages.
interesting...

to read the kv store you can try the following one-liner to print the contents of version-info:
Code:
perl -e 'use Data::Dumper; use PVE::Cluster; PVE::Cluster::cfs_update(); print(Dumper(PVE::Cluster::get_node_kv("version-info")));'

it might also help to restart the pvestatd service on all nodes (it should broadcast the version info to the cluster)
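
for example, on each node (a minimal sketch):

Code:
# pvestatd re-broadcasts the node's version info to the cluster kv store
systemctl restart pvestatd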
 
interesting...

to read the kv store you can try the following one-liner to print the contents of version-info:
Code:
perl -e 'use Data::Dumper; use PVE::Cluster; print(Dumper(PVE::Cluster::get_node_kv("version-info")));'

it might also help to restart the pvestatd service on all nodes (it should broadcast the version info to the cluster)
Hi,

It was initially empty; after restarting pvestatd it returned { '*pve-host*' => '{"release":"7.1","version":"7.1-8","repoid":"5b267f33"}' }.
Deleting users and adding tfa data also works now.

I just checked the log: I did not reboot the nodes after creating the cluster (nor after setting up the lone test node this morning) -.-
Thanks for your help!

Regards,
Matthias / RBG
 
Deleting users and adding tfa data also works now.
great!

I just checked the log: I did not reboot the nodes after creating the cluster (nor after setting up the lone test node this morning) -.-
oh that explains it.

glad the issue was solved, you can mark the thread as [SOLVED] for others :)
 
I get the same issue for a pve account that was created before the cluster was made.

Code:
delete user failed: cannot update tfa config, following nodes are not up to date:
cluster node 'Carmen' is too old, did not broadcast its version info
cluster node 'Exadata' is too old, did not broadcast its version info (500)
 
Wanted to chime in here: I too experienced this "thing" now.
When running the one-liner that oguz provided, it returned nothing; restarting pvestatd led to it returning values again, and we were then able to delete a user we had been trying to delete for a while.
 
