Hi,
I think you are trying to address multiple points here, but I'll add some pieces of information that might help in this discussion. First, removable datastores are planned and being worked on [1]. This would be needed to handle switching out USB drives like you describe, so maybe keep an eye on...
You should take a look at resource pools [1]. This should enable you to restrict specific users/groups to specific resources.
[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pveum_resource_pools
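As a minimal sketch of how that could look on the CLI (the pool name dev-pool and the user alice@pve are placeholders):

```
# Create a resource pool
pveum pool add dev-pool --comment "Resources for the dev team"
# Grant the user access to everything in that pool
pveum acl modify /pool/dev-pool --users alice@pve --roles PVEVMUser
```

Guests and storage assigned to the pool are then covered by that single ACL, so you don't need to manage permissions per VM.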
Not sure what you mean by “edge”. Anyway, the problem isn't about your hardware or host support for IOMMU/PASID, but rather that your VM does not seem to provide that. At least that's what I gather from the GitHub issue [1].
You can try to enable vIOMMU (which emulates IOMMU for your VM), as...
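A minimal sketch of what that could look like, assuming a recent PVE version with vIOMMU support and VM ID 100 as a placeholder:

```
# Switch the VM to the q35 machine type and enable an emulated Intel IOMMU
qm set 100 --machine q35,viommu=intel
# Alternatively, the virtio-based vIOMMU:
# qm set 100 --machine q35,viommu=virtio
```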
I'm not sure what you expect from us. A developer of the driver gave you a clear answer in both of the issues you created, and it clearly states that this use case is not supported yet, specifically because AMD does not ship NPUs on their server-level CPUs yet.
I'll just link to the second...
Hi,
that depends on how you want to use it. From what I can tell, the storage can either be exposed via NFS or CIFS, in which case we would discourage using it: the main datastore should use fast NVMe-based storage to facilitate swift and stable backups. However, you could also use it as a...
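If you do go the NFS route despite that, a rough sketch of the setup (all hostnames and paths are placeholders):

```
# Mount the NAS export on the PBS host
mount -t nfs nas.example.com:/export/backups /mnt/nas-backup
# Create a datastore on top of the mount
proxmox-backup-manager datastore create nas-store /mnt/nas-backup
```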
Hey, besides actively supporting the project, subscriptions (they are not licenses) mainly offer the following two benefits:
Access to the Proxmox Enterprise repository
Support from our enterprise support team starting at the "Basic" subscription level
Without a subscription, you can still...
Alright, but be aware that this is quite risky.
I'm assuming you mean the boot drive of the VM in question. Can you provide the ID of the VM you want to recover? Your first post shows that you have disks for VMs 100 through 106. All these LVs are inactive, though. You could try to activate them...
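A sketch of how that could look, assuming the default volume group name pve and VM 100 (adjust names to your setup):

```
# List all LVs and their state ('a' in the fifth attribute character means active)
lvs -o lv_name,vg_name,lv_attr
# Activate a single logical volume
lvchange -ay pve/vm-100-disk-0
# Or activate every LV in the volume group at once
vgchange -ay pve
```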
That works just fine for me. Can you share the whole configuration of the guest as well as the output of pveversion --verbose?
Also, if you open the browser's console, do you see any errors?
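For reference, both can be collected on the host like this (VM ID 100 is a placeholder):

```
# Full configuration of the guest
qm config 100
# Detailed version information for all relevant packages
pveversion --verbose
```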
With PMG 8 we added a check to verify that your regex is valid. This regex never really worked before either; the check just didn't exist.
What you could do is this: .*domain\.ext. Here, .* will match any character zero or more times. You'll also need to escape the . as otherwise it matches any character...
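As a rough sanity check outside PMG, you can test the behaviour with grep (domain.ext standing in for your actual domain):

```
# The escaped dot matches only a literal '.':
echo "mail.domain.ext" | grep -E '.*domain\.ext'   # matches
# An unescaped dot would also match unintended strings:
echo "domainXext" | grep -E '.*domain.ext'         # matches too
```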
Well, ^* does not really make sense as a regex. * is a quantifier and needs something to quantify: it matches whatever comes before it zero or more times. ^ just matches the beginning of a line, so it can only match once per line.
What is it that you want to match with your regex?
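To illustrate the anchoring, a quick shell check:

```
# '^' anchors at the start of the line, so it can match at most once per line:
echo "abc abc" | grep -oE '^abc'   # prints "abc" once, not twice
```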
No, in that case both nodes are treated as Community. In a cluster, the lowest subscription level always applies to all nodes.
Is this PVE host part of the same cluster? That would not be a sensible setup, especially not in a production environment.
For the QDevice itself, no...
Well, we have had some bad experiences with it over the years, and often using OVS had very few benefits. Could you point out some of these features? That might help with understanding where you are coming from.
Well, it will depend on the VMs in question, but you should be able to follow our PCI Passthrough guide here [1].
[1]: https://pve.proxmox.com/wiki/PCI_Passthrough
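Condensed, the steps from the guide look roughly like this (the PCI address 0000:01:00.0 and VM ID 100 are placeholders; kernel parameters depend on your platform):

```
# 1. Enable the IOMMU via the kernel command line, e.g. for Intel in /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
#    then apply the change and reboot:
update-grub

# 2. Verify that IOMMU groups exist after the reboot
ls /sys/kernel/iommu_groups/

# 3. Pass the device through to the VM (pcie=1 requires the q35 machine type)
qm set 100 -hostpci0 0000:01:00.0,pcie=1
```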
QEMU provides such functionality under the name COLO (for "COarse-grained LOck-stepping") [1]. But the last time we looked at it, it wasn't quite ready for prime time, to my knowledge.
[1]: https://wiki.qemu.org/Features/COLO
These devices currently seem to be assigned the vfio-pci driver [1]. This is usually the case when you pass a PCI device (for example a virtual function) through to a VM. I am generally not familiar with these devices, but what do you intend to do with them?
[1]...
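You can check the current driver binding like this (the PCI address is a placeholder):

```
# List all PCI devices with the kernel driver currently in use
lspci -nnk
# Or inspect a single device
lspci -nnk -s 0000:01:00.0
```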