just fyi: you can define pools as 'child' pools by using a slash in the name,
so you can create the pool 'foo' and the pool 'foo/bar'
in the gui they won't be shown nested, but i think for the purposes of permissions this should work.
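for example, on the cli this could look roughly like the following (the pool names are just placeholders, and i'm assuming the pvesh call against the /pools API endpoint here):
pvesh create /pools --poolid foo
pvesh create /pools --poolid foo/bar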
please don't double post your questions, see my answer here: https://forum.proxmox.com/threads/nvidia-vgpu-software-18-support-for-proxmox-ve.163153/page-2#post-811872
generally, layering cow filesystems should not break anything, but it will slow things down (since each layer has to do its own cow)
Personally, i'd use only one cow layer, be it in the vm or at the hardware storage level.
In any case, you could run tests in...
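if you want to compare the setups yourself, one option is a quick random-write benchmark with fio run once on each layering variant; the parameters below are only a rough starting point, not a definitive test setup:
fio --name=cow-test --rw=randwrite --bs=4k --size=1G --direct=1 --ioengine=libaio --runtime=60 --time_based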
the vCS license is only for nvidias ai enterprise driver/software (https://www.nvidia.com/en-us/data-center/products/ai-enterprise/) which is distinct from vgpu
(not sure why that site still has them up for sale...)
note that AI Enterprise is...
yes, see the link in the first post and this table by nvidia: https://docs.nvidia.com/vgpu/gpus-supported-by-vgpu.html
i don't think there is a "best" way, but you can buy it in a shop like the one you linked, or, if you're working with a vendor, get...
we're using a (slightly) modified ubuntu kernel, so if that card is supported in a recent version of that kernel (6.8 or 6.14, depending on the pve/pbs version and the kernel you installed), then it should work
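to quickly check which kernel a node is actually running, something like this should do:
uname -r
pveversion -v | grep -i kernel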
yes, i agree that having a more 'intelligent' way of selecting how to do the actual migration would be good. would you mind opening a feature request on our bugtracker? https://bugzilla.proxmox.com
it may involve some changes on PVE too, so not...
Hi,
my guess is that on the PDM you use the cross-cluster migration?
If yes, the hostnames/IPs from the remote configuration in the PDM are used for the migration, so which network is used depends on your network/dns/etc. setup
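to check which local network/interface would be used to reach a remote's address, you can ask the routing table directly (the address here is just a placeholder):
ip route get 192.0.2.10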
hmm that would probably only make sense if we autogenerated the token, else removal of a remote might mean that some important token is gone
nonetheless, would you mind opening a feature request on our bugzilla? https://bugzilla.proxmox.com
i think the correct command here would be
pvesh delete /cluster/acme/account/pvecluster
so the account name goes at the end of the path
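if you're not sure about the account name, you can list the registered accounts first:
pvesh get /cluster/acme/account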
yes, on deletion we try to unregister that account.
if you delete the file from /etc/pve this should not break our stack, but...
ah ok, so that too is missing..
don't worry, there is still a way that should work ;)
you can directly use the api or use the cli helper tool 'pvesh' (which exposes the whole api on the cli)
pvesh create /cluster/acme/account --contact...
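a rough example of a full invocation (account name and contact address are placeholders, and depending on the CA you may need further options such as the directory URL):
pvesh create /cluster/acme/account --contact admin@example.com --name myaccount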
hi, i answered a bit more detailed in your other thread: https://forum.proxmox.com/threads/pve9-in-place-upgrade-failed-due-to-mellanox-nvidia-networking.173385/#post-806883
please let's continue this discussion there
hi,
just for your information, the nvidia driver does not have to support debian 13 directly; the kernel version is the deciding factor here. Since we base our kernel on ubuntu's and not on debian's, it differs from the stock debian 13 kernel.
This...
see the help output on the cli:
# pvenode help acme account register
USAGE: pvenode acme account register [<name>] {<contact>} [OPTIONS]
Register a new ACME account with a compatible CA.
<name> <name> (default=default)...
ah ok i get what you mean now.
yes, it seems this was overlooked in the datacenter view
you can, however, register as many custom accounts as you want on the cli with
pvenode acme account register
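for example, with a custom account name and contact address (both placeholders, following the usage line above):
pvenode acme account register mysecondaccount admin@example.com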