We are in the process of planning and experimenting with a VMware -> PVE migration that will be performed at the start of next year. One concept we are struggling a bit with is the integer VMIDs that PVE uses to uniquely identify VMs/guest workloads.
To keep the different animals in our zoo tidily separate from each other, we would like to establish a strict convention for which particular sub-range of VMIDs is reserved for which kind of workload. The most trivial distinction is going to be templates (VMIDs between 100 and 999) vs. non-templates (VMIDs >= 1000), but there are other things that we would like to keep separate, like user-facing VMs (VMIDs between 1000 and 9999) and purely internal infrastructure VMs (between 100000 and 999999, with prefixes in these IDs encoding various kinds of information).
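To make the convention concrete, here is how we would capture it in tooling on our side; the helper name and the exact range boundaries are our draft convention, nothing PVE-defined:

```python
def classify_vmid(vmid: int) -> str:
    """Map a VMID to its workload class under our draft convention."""
    if 100 <= vmid <= 999:
        return "template"
    if 1000 <= vmid <= 9999:
        return "user-facing"
    if 100000 <= vmid <= 999999:
        return "internal-infra"
    # Anything outside the reserved sub-ranges is not assigned a class yet.
    return "unassigned"
```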
What I am looking for to make this happen is a way to ask the PVE API not for "the next VMID that could be allocated", but for "the next VMID that could be allocated between lower bound n and upper bound n+m". Combing through the docs, it seems like I could get that behavior by issuing the following call right before each allocation of a new VMID (while ensuring a pipelined, strictly ordered execution of the API calls):
Code:
pvesh set /cluster/options --next-id lower=$n,upper=$((n+m))
My question is: Is this a sound idea, or does changing this setting frequently (i.e., up to several dozen times a day) have the potential to hurt a cluster's availability/stability in any way? Or is there an alternative, better way to achieve our desired outcome of a "partitioned" VMID continuum?