We're using the LDAP sync feature (https://forum.proxmox.com/threads/new-ldap-sync-feature-questions-around-full-sync-and-eta.69260/) - and I've noticed that it doesn't seem to take into account whether a user is active or not (i.e. suspended in Google Workspace).
So it appears to add...
OK, I have a Proxmox 6.4 cluster I set up.
The boot volume is a mirrored ZFS setup, with 2 x 1TB disks (/dev/nvme1n1 and /dev/nvme5n1).
I fat-fingered it and accidentally ran ceph-volume lvm zap on one of the two disks (/dev/nvme5n1) *sad face*.
The ZFS volume itself on the...
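For what it's worth, the usual recovery path for a wiped member of a ZFS boot mirror looks roughly like the sketch below. The pool name ("rpool"), the healthy-disk source, and the partition layout (pool member on partition 3, ESP on partition 2) are assumptions based on the default Proxmox installer layout - verify everything with zpool status and lsblk before running anything:

```shell
# Rough sketch, NOT verified against this exact system - device names and
# partition numbers below are assumptions from the default installer layout.
zpool status rpool                              # wiped disk should show FAULTED/UNAVAIL

# zap destroyed the partition table, so copy it back from the healthy disk
sgdisk --replicate=/dev/nvme5n1 /dev/nvme1n1    # source is the positional arg
sgdisk --randomize-guids /dev/nvme5n1           # give the copy unique GUIDs

# resilver the pool member (partition 3 on the default layout)
zpool replace rpool /dev/nvme5n1p3

# re-initialise the boot partition (proxmox-boot-tool ships with 6.4)
proxmox-boot-tool format /dev/nvme5n1p2
proxmox-boot-tool init /dev/nvme5n1p2
```

Watching `zpool status` afterwards should show the resilver progressing.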
Does anybody know if there's a hard limit on the number of users that the LDAP sync in Proxmox can do? Or is the below a bug?
I have a Proxmox 6.4 cluster, and I'm trying to run an LDAP sync as follows:
pveum realm sync "anguslab.io" --full --purge --scope both
However, after running...
I have just set up a new system running Proxmox 6.4.
The network card is a Mellanox ConnectX-5 with 100GbE ports.
I've also installed the latest 5.11 kernel to test this out. I know that 5.11.x was working on previous installs on this system.
However, when I did an...
Ceph Pacific (16.2.0) just got announced:
There should be Debian Buster packages available.
Any chance we can get this in the testing Ceph repo soon? Would love to kick the tyres.
We have a test lab for users to spin up VMs on demand.
Is there an easy way to call backup on all created VMs, without needing to explicitly specify the set?
(As a hack, I assume we can enumerate the VMs using the API, and then call backup individually on each one - but I wondered if...
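For reference, a rough sketch of that enumeration hack. The sample JSON below is made up purely to illustrate the shape of what `pvesh get /cluster/resources --type vm --output-format json` returns, and the storage name "backup-store" is also an assumption:

```shell
# Canned sample standing in for the real pvesh output (vmids/nodes made up):
sample='[{"vmid":100,"node":"pve1","type":"qemu"},{"vmid":101,"node":"pve2","type":"qemu"}]'

# Pull out node + vmid for each guest, then back each one up.
# vzdump has to run on the node hosting the guest, so ssh is one option;
# the echo is a dry-run guard - drop it to actually run the backups.
echo "$sample" | jq -r '.[] | "\(.node) \(.vmid)"' \
  | while read -r node vmid; do
      echo ssh "root@$node" vzdump "$vmid" --mode snapshot --storage backup-store
    done
```

Worth noting that vzdump also accepts an `--all` flag to back up every guest on a node, which may remove the need to enumerate at all.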
I have a 4-node Proxmox 6.3 cluster, running Ceph for VM block storage. The underlying disks are NVMe.
Each node has a single-port Mellanox NIC, with a 100GbE connection back to a 100GbE switch.
This switch is then connected to an upstream router via a 10Gbps port. (However, I'm not sure the...
I've rebuilt a new Proxmox 6.2/Ceph 15.2.6 cluster, and I noticed this error both in the UI, and under ceph health:
# ceph health detail
HEALTH_WARN Module 'volumes' has failed dependency: No module named 'distutils.util'
[WRN] MGR_MODULE_DEPENDENCY: Module 'volumes' has failed...
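On Debian Buster, this particular warning usually comes down to distutils having been split out into its own package, which the Python 3 mgr modules need. A hedged fix (worth confirming the package is actually missing first):

```shell
# distutils ships separately from python3 on Buster
apt install python3-distutils

# restart the managers so the module dependency check re-runs
systemctl restart ceph-mgr.target
```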
I am trying to set up a new VM with Windows 7 under Proxmox. As per the wiki, I am attempting to install the following four drivers - balloon, NetKVM, virtioscsi and virtioserial.
However, I am having issues finding a Virtio driver version that actually works.
If I use virtio version...
I've just set up a new Proxmox 6.2 cluster with Ceph Octopus.
I've added a new Ceph RBD pool, and made sure to enable the add_storages option:
# pveceph createpool vm_storage --add_storages 1 --pg_num 1024
However, the new pool isn't appearing on the dropdown of available storage - I only see...
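Since --add_storages is supposed to write a matching RBD entry into the storage config, a few checks can narrow down where it went wrong (the storage name "vm_storage" is taken from the command above):

```shell
pveceph pool ls                             # does the pool exist on the Ceph side?
grep -A3 vm_storage /etc/pve/storage.cfg    # did --add_storages write an rbd entry?
pvesm status                                # does Proxmox list the storage at all?
```

If the storage.cfg entry exists but the dropdown still doesn't show it, checking which content types the entry allows (the `content` line) would be my next guess.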
I've set up a new 4-node Proxmox/Ceph cluster.
I have run pveceph install on each node.
I have also set up ceph mon and ceph mgr on each node.
Here is the output of /etc/pve/ceph.conf:
# cat /etc/pve/ceph.conf
auth_client_required = cephx
auth_cluster_required = cephx...
I'm trying to set up a new 4-node Proxmox/Ceph cluster.
Each node has 6 x NVMe SSDs, as well as an Intel Optane drive (used for WAL/DB).
I have partitioned each NVMe SSD like so:
parted /dev/nvme6n1 mklabel gpt
parted -a optimal /dev/nvme6n1 mkpart primary 0% 25%
parted -a optimal /dev/nvme6n1...
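In case it helps anyone following along: with the SSDs pre-partitioned like that, one way to create the OSDs against the partitions is ceph-volume directly, since it accepts partitions for both data and DB. The device names below are assumptions extrapolated from the post (in particular, /dev/nvme0n1p1 standing in for an Optane slice):

```shell
# One OSD per data partition, DB/WAL co-located on an Optane partition.
# Device names are illustrative - substitute your actual layout.
ceph-volume lvm create --data /dev/nvme6n1p1 --block.db /dev/nvme0n1p1
```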
I have a 3-node Proxmox (6.2) cluster running Ceph (Octopus) as well.
For some reason - I have two packages that refuse to update.
Proxmox (and apt) tell me there are updated versions - but every time I run an upgrade (either via the Proxmox UI or using apt dist-upgrade), those two refuse to...
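When apt keeps packages back like this, it's usually a dependency it won't resolve automatically; a few diagnostics that tend to surface the cause (`<pkg>` is a placeholder for the held package names):

```shell
apt list --upgradable     # confirm which candidate versions apt sees
apt policy <pkg>          # which repo the candidate comes from, and any pinning
apt-get install <pkg>     # installing explicitly usually prints the blocking dependency
```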
Currently, I'm using the standard Proxmox ISO installer to set up some small Proxmox/Ceph clusters for testing.
However, I was wondering what are the pros/cons of this, versus installing vanilla Debian, and the Proxmox packages on top of that?
One of our pain points is that Proxmox ISO...
I'm a bit confused about what the current TRIM/discard support looks like.
My understanding is that enabling the "Discard" checkbox in Proxmox will enable the VM to call "TRIM" and return unused blocks - which on storage like ZFS (which supports thin provisioning) can reduce the amount of...
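To make that concrete, enabling discard on an existing virtual disk and then trimming from inside the guest looks roughly like this (the VM ID and storage/volume names are made up for illustration):

```shell
# Expose the virtual disk with discard enabled, so guest TRIM commands
# reach the backing storage (VM 100 / local-zfs are example names):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Inside a Linux guest, trim all mounted filesystems to release unused blocks:
fstrim -av
```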
I have a 4-node Proxmox cluster.
I'd like to ship my Proxmox cluster logs and task logs to an offsite logging server (e.g. Graylog, Loki etc.) - this allows us to keep track of which users did what actions, and when VMs were created (creation time isn't currently exposed - FR to add this - but for...
We run a small Proxmox cluster for lab/testing.
Users have access to the Web UI, to create/spin up new VMs, and check on the status of the cluster.
Is there some permission/ACL we can use to block direct shell access to the Proxmox node?
(We are looking to roll out Teleport, or something...
I am running Proxmox 6.2, and I have added a CIFS storage server to the Proxmox config.
The NAS is running FreeNAS, and exporting the ZFS datastore via SMB (CIFS).
The server is listed in /etc/pve/storage.cfg:
I have a Proxmox cluster, that uses LDAP to authenticate against a G Suite domain.
Previously, users were able to login successfully through their LDAP credentials.
However, recently I updated some packages (e.g. libpve-access-control) in order to try with the new LDAP sync feature (discussion...
I saw on the pve-devel mailing list last month (April 2020) there is talk about some new LDAP sync functionality for users and groups: