can you show where you get these numbers (e.g. the commands + output or screenshots)?
also, do you maybe have snapshots on the vm? (snapshots + the data written will of course consume more disk space than the disks alone)
did you maybe do backups on the same disks?
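if it helps, these are the kind of commands that would show the relevant numbers (just a sketch; 'rpool' and <vmid> are placeholders, and the zfs one only applies if the storage is zfs-backed):

pvesm status
zfs list -o space -r rpool
qm config <vmid>
qm listsnapshot <vmid>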
since PVE only sees the sockets assigned to it, you can just assign e.g. one socket and use a single socket subscription
note that for best performance you probably should match the host architecture to the guest (this depends a bit on the hypervisor though)
note that we generally don't...
can you post the content of the journal/syslog?
this only happens when the 'pvestatd' daemon gets into a bad state and stops updating the info on the guests/nodes/etc.
yes, the pci warnings are new and intentional. Previously we tried e.g. to reset the device, but failed silently, not knowing whether it worked or not
those warnings are not bad per se, but could indicate a problem if e.g. something in the guest is not working right
what exactly did not work for you?
an example parameter set would be
--scsi0 local:0,import-from=/path/to/imagefile
note that this can only be done as root@pam (since arbitrary paths can be given here) and there might be additional checks regarding format, etc.
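for example (just a sketch; <vmid>, the storage and the path are placeholders), attaching an imported disk to an existing vm could look like:

qm set <vmid> --scsi0 local:0,import-from=/path/to/imagefile

the same option also works with 'qm create'.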
to do the same thing as the ui, you have to use the 'template' post api call:
https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/template
this does the necessary conversions
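e.g. via pvesh on the cli (node name and <vmid> are placeholders):

pvesh create /nodes/<node>/qemu/<vmid>/template

on the node itself, 'qm template <vmid>' does the same thing.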
while the underlying sysfs paths and exact mechanism changed a bit, we opted to map it to our old interface on the pve side
so the pve config is the same as previously (so hostpci0: 0000:XX:YY.Z,mdev=nvidia-123) even though there are technically no 'mediated devices' involved
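for reference, setting that from the cli would look roughly like this (<vmid>, the pci address and the mdev type are placeholders):

qm set <vmid> --hostpci0 0000:XX:YY.Z,mdev=nvidia-123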
we did this...
sorry but that's not entirely correct
the datacenter overview shows the usable space of all storages (but it tries to count shared storages only once; you can configure which ones are counted in the 'my settings' panel in the menu on the top right)
the ceph overview shows the 'raw' storage.
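as a rough example (assuming the default 3/2 replicated pool): 30 TiB of raw ceph capacity corresponds to only ~10 TiB of usable space, so the two views showing different numbers is expected.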
so in...
do you also have the logs from before the issue starts? e.g. from when it's still working fine to when it's showing the question marks?
did it work again after a restart?
also the storage config would be good (/etc/pve/storage.cfg)
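to capture the relevant part of the journal, something like this should work (the timestamps are placeholders, pick a range that covers the working -> question mark transition):

journalctl --since "2024-11-01 10:00" --until "2024-11-01 12:00" > journal.txt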
generally if the nfs server or the connection is flaky, pvestatd...
pbs should already have that kind of info, at least we're parsing it correctly AFAICS. If it's not working for you, please open a bug on https://bugzilla.proxmox.com with the output of 'smartctl -A -j </dev/path/to/nvme>' in it
if you bind the card again to the original driver after the vm is shut down, you should only need to restart the container.
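a minimal sketch of the rebind (the pci address, driver name and <ctid> are placeholders, the exact steps may differ depending on your setup):

echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind   # detach from vfio-pci
echo "0000:01:00.0" > /sys/bus/pci/drivers/nvidia/bind                  # attach to the original driver again
pct reboot <ctid>   # or stop/start the container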
it may happen (depending on the hardware, this is nothing we can really influence) that the card does not want to rebind to the driver properly; then your only possibility...
the storage gets a question mark if the pvestatd daemon is not able to collect stats for > 2 minutes. so either check whether pvestatd is hanging or crashing on the relevant node, or whether stats collection works for it (e.g. look in the journal/syslog if there is any issue with the storage)
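e.g. on the affected node (a quick sketch):

systemctl status pvestatd
journalctl -u pvestatd --since "1 hour ago"
systemctl restart pvestatd   # gets it out of a hung state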
you have to restart the containers currently, for the following reason:
the device node you pass through to the container only exists when the 'real' driver is loaded (e.g. nvidia), but one needs to remove the driver from the device to pass it through to a vm
also the device passthrough for...
currently if you want to reuse the datastore, you'd have to edit the config file (/etc/proxmox-backup/datastore.cfg) manually, otherwise the api will try to recreate it (as you saw)
AFAIK there is a plan to make that more user friendly and simply let it reuse a datastore if it detects one
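the entry in that file looks roughly like this (name and path are placeholders, there can be more options depending on the datastore):

datastore: mystore
	path /mnt/datastore/mystore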
hi,
for how to limit zfs memory usage see:
https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage
if the vm really needs 55GiB, and there is no other vm running on the system, you should probably limit zfs to ~6-7 GiB (so there's still ~2GiB left for the host)
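e.g. to limit the arc to 7 GiB, put the following into /etc/modprobe.d/zfs.conf (value in bytes, 7 GiB = 7516192768):

options zfs zfs_arc_max=7516192768

then run 'update-initramfs -u -k all' (and reboot), or for a runtime change:

echo 7516192768 > /sys/module/zfs/parameters/zfs_arc_max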
if there are...
i sent an updated patch that should now fix both the 'cannot reset' and 'bind to vfio' issues: https://lore.proxmox.com/pve-devel/20241108093300.1023657-1-d.csapak@proxmox.com/T/#mfe1b446a445313142686ed4ad0b64d0be759f877
if anybody could test it, that would be great :)