I'm sorry, I dug into the issue more deeply since it was some time ago: the users are still in the groups but no longer exist as users. So the problem is still there. I will report a bug.
Fixed by enabling 'Remove vanished properties from synced users'. It was disabled. I expected that 'Entry: Remove vanished user and group entries' would be enough. Is this still a bug then, or did I misinterpret these functions?
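For reference, the CLI equivalents of those two checkboxes (a minimal sketch; the realm name "ad-example" is a placeholder, see "pveum realm list" for the real one):
root@node1:~# pveum realm sync ad-example --scope both --dry-run --remove-vanished "entry;properties"
# "entry" removes users/groups that vanished from AD, "properties" clears
# attributes no longer set there; drop --dry-run to apply the sync for real.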
Within our Proxmox cluster we have an Active Directory sync used only for logging in our users (no OU assignments to automate group rights).
One of our users has been removed from the OU that allows him to log in to Proxmox, so the sync removed his username.
However, the user is still in the user group he...
Hi Fiona,
You are correct: if I map the TPM disk first via the shell, the VM starts normally. Shutting it down again (it unmaps automatically) and starting it again makes it fail again. But only on this node; it works on 4 other nodes.
This was during a fresh start: VM shut down, moved to the faulty node, and started again. Right after starting it failed, HA tried again (same error), and then HA migrated/moved it (offline) to a non-faulty node, where it started normally.
The requested output bothers me; I get errors while running the...
task started by HA resource agent
/dev/rbd0
TPM2_EvictControl failed: 0x14c
create_ek failed: 0x1
An error occurred. Authoring the TPM state failed.
swtpm_setup: Starting vTPM manufacturing as root:root @ Thu 15 Feb 2024 04:09:38 PM CET
swtpm_setup: TPM is listening on Unix socket.
swtpm_setup...
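For anyone following along: mapping the TPM state volume by hand looked roughly like this (a sketch; the pool and image names are placeholders, check yours with "rbd ls <pool>"):
root@node1:~# rbd map ceph-vm/vm-100-disk-1
/dev/rbd0
root@node1:~# qm start 100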
Same problem running the TPM from Ceph after cloning (full clone) from a template.
Moved the TPM to an NFS share and the VM starts normally.
Moved it back to Ceph: failure again.
Moved the VM to another host and now it is working with the TPM on Ceph... could there be an underlying hardware issue?
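Moving the TPM state around was done with qm move-disk while the VM was powered off (a sketch; the storage IDs "nfs-share" and "ceph-vm" are placeholders for ours):
root@node1:~# qm move-disk 100 tpmstate0 nfs-share
root@node1:~# qm move-disk 100 tpmstate0 ceph-vm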
@VictorSTS
This issue comes down to a misconfiguration within FRR. Although we had tested and checked it many times, after rebuilding the FRR config it is now working. Thank you for your help.
I know; like I said, I tried almost everything to get it to work, so somewhere I messed up quorum (by trying to add the 2nd node almost manually).
But that doesn't take away the main problem....
This is my first native (not upgraded) version 8 installation; maybe there is an issue, but not...
Since I've been testing a lot, quorum was lost. I brought it up again by forcing node 1's votes to 7. But that also does not help with the hostname verification error... I'm getting more and more puzzled. I have installed more systems (but without FRR) and never had this problem.
root@node1:~# pvecm...
I'm starting to suspect it has something to do with my first node (already in the cluster), where there is no quorum yet. It cannot change anything in the /etc/pve folder since pmxcfs keeps the files locked?
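As an aside: instead of forcing quorum_votes up, a less invasive way to make /etc/pve writable again on a single non-quorate node is lowering the expected vote count (use with care; a sketch):
root@node1:~# pvecm expected 1
root@node1:~# pvecm status
# /etc/pve becomes writable again once the node considers itself quorate.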
Hi,
yesterday, after a lot of testing with FRR, I reinstalled all 3 nodes from scratch and set up the FRR rings again.
Then I got stuck on this problem.
/etc/hosts (with pvelocalhost on the correct host's entry, of course)
127.0.0.1 localhost.localdomain localhost
10.14.14.1...
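For comparison, a complete entry should look roughly like this (a sketch only; the address and hostnames are placeholders for our real ones):
127.0.0.1 localhost.localdomain localhost
10.14.14.1 node1.example.local node1 pvelocalhost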
Hi @VictorSTS. We have FRR working over gigabit and 10-gigabit.
We have created the cluster, but adding a node results in a hostname verification error. This is new to me, since I set up another cluster a long time ago connected over SSH.
The error we get:
root@ProxMoxHost2:~# pvecm add...
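The join itself was along these lines (a sketch; the addresses are placeholders for our mesh IPs). pvecm add also has an SSH-based join via --use_ssh, which can help isolate whether the certificate/hostname check is the culprit:
root@ProxMoxHost2:~# pvecm add 10.14.14.1 --link0 10.14.14.2
root@ProxMoxHost2:~# pvecm add 10.14.14.1 --link0 10.14.14.2 --use_ssh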
We are working on a small Proxmox environment with 3 nodes and a mesh network.
All servers have:
2x 10Gbit copper in mesh for the Ceph cluster network
2x 1Gbit copper in mesh (corosync)
2x 1Gbit free for the existing network to the customer
2x 10Gbit SFP+ not used yet (might be used for VM traffic and the Ceph public subnet...
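For context, the FRR mesh follows the routed setup from the Proxmox full-mesh wiki. A trimmed sketch of /etc/frr/frr.conf on one node (the interface names, loopback address, and openfabric NET are placeholders for our real values):
interface lo
 ip address 10.14.14.1/32
 ip router openfabric 1
 openfabric passive
!
interface ens19
 ip router openfabric 1
!
interface ens20
 ip router openfabric 1
!
router openfabric 1
 net 49.0001.1111.1111.1111.00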