OK - I've confirmed that using an anonymous bind does return a list of users.
For example, here is the ldapsearch command - this successfully returns a list of all users in the G Suite domain.
LDAPTLS_REQCERT=allow LDAPTLS_CERT=Google_2022_05_22_3494.crt LDAPTLS_KEY=Google_2022_05_22_3494.key...
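For anyone else trying this, the full form of the command looks roughly like the below - the base DN, filter and attribute list are placeholders for illustration, so substitute your own domain:

# Anonymous bind against Google Secure LDAP, authenticating only with the
# client certificate/key pair downloaded from the G Suite admin console.
# (dc=example,dc=io and the filter are placeholders - adjust for your domain.)
LDAPTLS_REQCERT=allow \
LDAPTLS_CERT=Google_2022_05_22_3494.crt \
LDAPTLS_KEY=Google_2022_05_22_3494.key \
ldapsearch -H ldaps://ldap.google.com:636 -x \
  -b "dc=example,dc=io" "(objectClass=person)" uid mail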
I have a Proxmox cluster that uses LDAP to authenticate against a G Suite domain.
Previously, users were able to log in successfully with their LDAP credentials.
However, I recently updated some packages (e.g. libpve-access-control) in order to try the new LDAP sync feature (discussion...
I also checked the audit logs for the G Suite SecureLDAP service - these are the events associated with my running:
# pveum realm sync "example.io" --dry-run --full --purge --scope both
In this case, the dry-run didn't return any users, and proposed deleting all the users I'd manually...
Got it - I'll have to check about the anonymous bind thing.
I do know that running the ldapsearch command from a Linux box like so works - just using the certificate files, and no credentials:
$ LDAPTLS_REQCERT=allow LDAPTLS_CERT=Google_2022_05_22_3494.crt...
My domain.cfg config should be in the above post, for reference.
As far as I'm aware, it should be correct, as existing LDAP users are able to log in successfully to Proxmox via the LDAP realm.
Or are there perhaps additional attributes needed for the new Proxmox sync feature to work?
I found...
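For illustration, this is roughly the shape of the LDAP realm entry in /etc/pve/domains.cfg that I'm talking about - all values here are placeholders, and the property names for the sync bits (user_classes, sync_attributes) are taken from my reading of the docs, so treat this as an assumption rather than a known-good config:

ldap: example.io
        server1 ldap.google.com
        port 636
        mode ldaps
        base_dn dc=example,dc=io
        user_attr uid
        cert /etc/pve/priv/Google_2022_05_22_3494.crt
        certkey /etc/pve/priv/Google_2022_05_22_3494.key
        user_classes inetorgperson,posixaccount
        sync_attributes email=mail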
Hi Dominik
Thanks for the detailed info!
I just did an apt update and apt dist-upgrade on my cluster - I did see there was an update for the "libpve-access-control" package from 6.0-6 to 6.0-7, which is the version that has the new sync CLI.
It took me a while to realise the command is "pveum...
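In case anyone else is wondering whether they already have the new CLI, this is how I confirmed the installed package version after the upgrade - nothing exotic, just the standard Debian/PVE tooling:

# Confirm which libpve-access-control version is installed
apt-cache policy libpve-access-control
pveversion -v | grep libpve-access-control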
Hi,
I saw on the pve-devel mailing list last month (April 2020) that there is talk of some new LDAP sync functionality for users and groups:
https://pve.proxmox.com/pipermail/pve-devel/2020-March/042097.html
https://pve.proxmox.com/pipermail/pve-devel/2020-April/042938.html...
I have a 3-node Ceph/Proxmox cluster.
I added some OSDs recently, using a separate volume as the DB/WAL device.
However, it turns out I didn't budget enough for the DB/WAL volumes, and I now need to re-create the OSDs from scratch.
Via the Web UI, I am able to select each OSD, and then go to...
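For reference, the CLI sequence I'm expecting to use per OSD looks roughly like the below - the OSD ID, device paths and DB size are placeholders, so please sanity-check against the pveceph man page before running anything destructive:

# Take the OSD out and wait for the cluster to finish rebalancing
ceph osd out 12
# Stop the OSD service on the node that hosts it
systemctl stop ceph-osd@12
# Destroy the OSD and clean up its partitions
pveceph osd destroy 12 --cleanup
# Re-create it with a bigger separate DB/WAL volume - placeholder devices/size
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1 --db_size 120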
Hi,
ZOL recently merged this patch:
https://github.com/openzfs/zfs/pull/10163
which seems to promise significant performance improvements for ZVOLs =).
Will this have a positive impact for those of us who run Proxmox on ZFS, with VM disks on ZVOLs?
Thanks,
Victor
Of course - I filed https://bugzilla.proxmox.com/show_bug.cgi?id=2698 with some initial thoughts.
Let me know what you think!
Would you need me to do a mockup?
We use Ceph to provide VMs for several internal teams.
We'd love this feature (namespaces) so we can implement quotas, and prevent any single team from using up all the storage (thread)
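To give a concrete idea of what we'd do with it: as I understand the Ceph side, namespaces are created per team inside a shared pool, roughly like the sketch below (pool and namespace names are placeholders; the Proxmox integration is the part we're waiting on):

# Create one RBD namespace per team inside a shared pool
rbd namespace create --pool rbd --namespace team-a
rbd namespace create --pool rbd --namespace team-b
# List the namespaces, and the images inside one of them
rbd namespace ls --pool rbd
rbd ls --pool rbd --namespace team-a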
I saw that QEMU 5.0 is now in pve-test =).
Hi,
Oh great - this namespace feature looks really neat.
I saw in the thread you linked that it was pending QEMU 5.0.
However, I am running pve-test, and QEMU 5.0 now seems to be released there.
Does that mean RBD quotas should work now?
Is there some Proxmox Web UI integration that needs...
Is there some way of implementing storage quotas per user, if you're using Ceph RBD for VM disk storage?
For example, limit users in group A to a maximum of 1TB, or limit group A as an aggregate to 1TB, etc.?
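For example, something along the lines of Ceph's per-pool quota, but scoped per user or group - a sketch of the per-pool version with placeholder names, just to show the kind of limit I mean:

# One pool per group, capped at 1 TiB (placeholder pool name)
ceph osd pool set-quota team-a-vms max_bytes 1099511627776
# Check the quota and current usage
ceph osd pool get-quota team-a-vms
ceph df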
There is nothing about corosync in any of the syslogs on any of the four nodes.
From the crash message - are you thinking the issue is in corosync?
I just saw this earlier thread - based on that I installed the systemd-coredump package, and edited /etc/systemd/journald.conf to add...
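Assuming systemd-coredump captures something on the next crash, I'm planning to check for it along these lines (pmxcfs is the process name that shows up in the segfault message):

# List any captured core dumps for pmxcfs
coredumpctl list pmxcfs
# Show details of / open the most recent one in gdb (gdb needs to be installed)
coredumpctl info pmxcfs
coredumpctl gdb pmxcfs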
Hi,
I have a 4-node cluster running Proxmox/Ceph.
In the last week, two of the nodes have gone down multiple times - each time, the node itself seems responsive, however it disappears from the cluster.
On the console I see a message about a segfault in pmxcfs.
Here is the output of pveversion...
I have something similar - we have a 4-node Proxmox/Ceph cluster, and are not using HA currently.
I need to reboot nodes for things like kernel updates.
Each of the 4 nodes will have some running VMs, and some stopped VMs.
How do I pause all running VMs, then have those ones automatically...
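To clarify what I mean by "pause all running VMs" - roughly the manual equivalent on a single node would be something like the loop below (not what I'm hoping the answer is, just to illustrate the question):

# Suspend every VM currently running on this node
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm suspend "$vmid"
done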
Hi,
I have a four node hyperconverged Proxmox/Ceph cluster.
Is there any way to view the current ratio of provisioned to actually-used storage, for thin-provisioning?
(I.e. I want to see what benefit thin-provisioning is currently giving me.)
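To clarify, the numbers I'm after are provisioned size vs actual usage - e.g. the sort of thing these two commands report (pool name is a placeholder):

# Provisioned vs actually-used space per RBD image, plus a pool total
rbd du --pool ceph-vm
# Cluster-wide and per-pool usage
ceph df detail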
Thanks,
Victor
OK, so I ended up setting a native VLAN on my switch, so that untagged traffic gets tagged with ID 12 (which is the VLAN for normal Proxmox traffic - 15 is for Ceph, 19 is for Corosync).
I noticed that there is the option to create a VLAN in the Proxmox GUI:
Anyhow, I have created my two...
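For reference, the two VLAN interfaces end up looking roughly like this in /etc/network/interfaces - the addresses and the vmbr0 parent are placeholders, and I'm assuming ifupdown2 (or a recent ifupdown) for the CIDR-style addresses:

# Ceph VLAN (15) - placeholder address
auto vmbr0.15
iface vmbr0.15 inet static
        address 10.0.15.11/24

# Corosync VLAN (19) - placeholder address
auto vmbr0.19
iface vmbr0.19 inet static
        address 10.0.19.11/24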
I hit this same issue with a SuperMicro 2124BT-HNTR as well.
By default, the boot mode is set to "DUAL" - if you try to install Proxmox using ZFS, you will get this error on reboot:
However, if you set the boot mode to "UEFI" - and re-run the installation, it works.