API Token ACL for VM wildcards/no-delete

Blackclaws

New Member
Apr 16, 2024
So I've got a bit of an interesting situation on my hands.

I want to allow an API token to create and manage the VMs it has created, but _only_ those. I'm using a Terraform integration, and I don't want it to accidentally go nuts and delete anything it shouldn't; I also want to restrict which VMs the people using Terraform can operate on.

I've set it up so that the base user has ACLs on /vms, and I've added ACLs for the API token on /vms/${vmid}.

Interestingly enough, ${vmid} doesn't have to exist yet! This is good, because it allows us to proactively grant permissions on certain IDs that the token can then manage.
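For illustration, this is roughly what the per-VMID grant looks like (the token name and role are just examples):

Code:
# Grant an API token a role on a VMID that doesn't exist yet.
# Token name 'terraform@pve!provisioner' is made up.
pveum acl modify /vms/102 --tokens 'terraform@pve!provisioner' --roles PVEVMAdmin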

However, if we're talking about many VMs, this becomes a bit ... annoying.
I'd rather have wildcards in the ACLs so I could say /vms/102* or similar.
This is a bit of a weird ask, I guess, because it's entirely unclear how to properly scope this. But having a way to allow access to a number of values at the same time would be great for ACLs.
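Until something like that exists, the closest workaround I've found is to just loop over the range and pre-create the entries:

Code:
# Pre-create ACL entries for roughly what /vms/102* would cover
# (range and token name are examples).
for vmid in $(seq 1020 1029); do
    pveum acl modify "/vms/$vmid" --tokens 'terraform@pve!provisioner' --roles PVEVMAdmin
done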

The other thing I've noticed is that an ACL is cleaned up when the VM it refers to is deleted. While this usually makes sense, in this case it's a bit problematic: the token simply loses access to that VMID, and I have to manually recreate the permission before the VM can be created again. This is suboptimal.

I think a potential solution to this problem would be a templated ACL that allows for a range of values.

Resource Pools _partially_ solve the problem, but only for existing resources, not the creation of new ones.

Any ideas on how to solve this using the existing systems would be great :)

If people agree that a system allowing permanent ACLs for a range of VM IDs would be useful, I'd be happy to file a feature request.

UPDATE:

What works fine, if you don't care about which specific VM IDs are allowed, is to create a pool and assign permissions on that pool. Within a pool, any free VMID can be used. You cannot prevent (I don't see a way, anyway) a particular ID from being used, but that's a small drawback in exchange for a confined area your scripts can run in without damaging the rest of the cluster.
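Roughly what that setup looks like (pool and token names are made up; depending on what the token has to do, it may also need permissions on the target storage, e.g. a role with Datastore.AllocateSpace, to create disks):

Code:
# Create a pool and give the token a role on it; any VM created in
# the pool inherits the permission (names are examples).
pveum pool add tf-sandbox
pveum acl modify /pool/tf-sandbox --tokens 'terraform@pve!provisioner' --roles PVEVMAdmin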
 
Resource Pools _partially_ solve the problem, but only for existing resources, not the creation of new ones.
What exactly fails at creation in the pool case? Couldn't you combine the "allow VM ID xyz" permission, create the VM in the pool, and afterwards delete the specific VM permission?
 
Ah, yes, that works, but if you delete the VM afterwards, the permission doesn't stay with the pool.

The issue here is that if a resource is ever destroyed, there is no way to recreate it without manually adding a permission for that specific VM ID again.
 
AFAIK, as long as you have permission to create VMs in a pool, you can just do so. The permission is tied to the pool, and all VMs in the pool will inherit it. You could create a pool named after the user and assign the user permissions on that pool. That is not a super elegant solution, but it might solve the problem you're having.

With the advent of nested pools (the UI part hopefully coming soon), you can also group them, e.g. users/<username1> and users/<username2>, in addition to other pools not directly linked to users.
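For example (usernames made up; the nested users/<name> pool IDs assume a version with nested pool support):

Code:
# One pool per user, grouped under a common "users" parent pool,
# with each user given a role on their own pool.
for u in username1 username2; do
    pveum pool add "users/$u"
    pveum acl modify "/pool/users/$u" --users "$u@pve" --roles PVEVMAdmin
done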
 
That's actually a great solution, if a bit cumbersome, but we don't have many of those users anyway. I actually wasn't aware that a pool allowed you to create VMs without having access to the VMID itself; I guess I just didn't read the manual well enough. Thanks for proving me wrong here. I'll update the top post with that info.

The only thing really missing is that you cannot restrict VMIDs, but if arbitrary IDs are fine, that's a small price to pay.
 
The only thing really missing is that you cannot restrict VMIDs, but if arbitrary IDs are fine, that's a small price to pay.
Yes, that would be cool, too, as would other restrictions on resources like RAM and CPU. Disk can be handled with quotas, even if that is also a bit cumbersome depending on the storage.