mhmm did you try the hints from here? https://www.tsunderechen.io/2021/11/OVMF-PCIE-passthrough-with-large-VRAM-GPU/
i.e. adding via args:
qm set ID --args '-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536'
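if i read the linked post right, the value is in MB, so 65536 gives a 64 GiB MMIO window; the rule of thumb is to pick a power of two comfortably above the total VRAM of the passed-through GPU(s). a quick sketch of that calculation (the helper name and the extra headroom doubling are my own illustration, not from the post):

```python
def mmio_window_mb(vram_gb: float) -> int:
    """Smallest power-of-two MMIO window (in MB) above the VRAM size,
    doubled once more for headroom (BARs of other devices etc.)."""
    mb = int(vram_gb * 1024)
    window = 1
    while window < mb:
        window *= 2
    return window * 2

print(mmio_window_mb(24))  # 24 GiB VRAM -> 65536 (a 64 GiB window)
```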
as i said, AFAIU there was a plan to use them, but that never came to fruition. looking at the git history on our side, there was not much activity besides version bumps and syncing with upstream (which is not that much work...)
sending patches upstream, even for things we don't use (especially if...
i guess we wanted to use it and prepared the package, but never got around to it. at least i can't remember any time we would have used it (and we definitely don't use it at the moment), so it's probably just a remnant of an old approach
there is "our" fence-agent-pve (on git.proxmox.com)...
do you want to remove the old kernel or the driver? AFAICS there is only one version of the dkms driver installed (in /var/lib/dkms/i915-sriov-dkms/). how to remove that depends on how you installed it (probably refer to the docs of the driver itself)
to remove an older kernel you can do 'apt...
mhmm there seems to be some overlap of package names
there is the debian package 'fence-agents': https://packages.debian.org/bookworm/fence-agents
which is the upstream https://github.com/ClusterLabs/fence-agents
in 'sid' it was split into multiple packages: see...
hi,
i looked at it, and it seems the library returns some unexpected data (it sets the PVolTag flag for the medium transfer call, but then does not include the volume tag). sadly i cannot find the scsi reference for that changer type (storeonce), so
i cannot say if this is expected behaviour for...
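for reference, this is roughly what the element status page header looks like in my reading of the SMC spec (the parser and the synthetic bytes below are just an illustration, not the library's actual code):

```python
import struct

def parse_element_status_page(page: bytes):
    """Parse one SMC element status page header (sketch)."""
    element_type = page[0]          # 1 = medium transport, 2 = storage, ...
    pvoltag = bool(page[1] & 0x80)  # PVolTag: primary volume tags present
    avoltag = bool(page[1] & 0x40)  # AVolTag: alternate volume tags present
    desc_len = struct.unpack(">H", page[2:4])[0]
    # with PVolTag set, each descriptor should carry a 36-byte primary
    # volume tag starting at offset 12 -- so a descriptor length below
    # 48 with PVolTag set is the kind of inconsistency described above
    consistent = (not pvoltag) or desc_len >= 12 + 36
    return element_type, pvoltag, avoltag, desc_len, consistent

# synthetic header: medium transport element, PVolTag set, but the
# descriptor length (12) leaves no room for the volume tag
page = bytes([0x01, 0x80, 0x00, 0x0C]) + bytes(4)
print(parse_element_status_page(page))  # (1, True, False, 12, False)
```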
this file would probably give more hints as to what exactly fails, but i think in general the dkms driver may not be compatible with the 6.2 kernel? since the build for 6.8 seems to work, only the one for 6.2 doesn't
just FYI, the reason for this is that the underlying framework we use for the ui (ExtJS) does this by default for most of the ui, and we have to manually disable that behaviour where we don't want it (there is no global 'off' toggle). it also does make sense sometimes (e.g. grid headers, anywhere where...
probably a bug in the app that does not anticipate the vm configuration you have set. could you please open a bug on https://bugzilla.proxmox.com ? there we can discuss/track it better
that should be fine, but you'd have to check with/ask the server vendor about that
ok. a short follow-up later on whether that solved or improved the situation would of course be great ;)
naturally has much better single-core performance than your server (5GHz vs 3GHz; and that's not even counting that there are a few CPU generations in between: an old Skylake server CPU from 2017 vs a much newer Alder Lake CPU from 2021)
i assume you mean QVO? with those you will definitely...
Hi,
a few thoughts from me on this (but others can/should of course still share their experiences/opinions ;) ).
1. What exactly does the hardware of the current workstations look like? Depending on what you compare it with, the hardware can be slow or fast.
2. Working remotely via RDP can...
i would not think that this is a kernel issue. can you post the output of 'dmesg', the two vm configs (qm config ID), and your versions (pveversion -v) here?
hi, what versions do you have?
proxmox-backup-manager versions --verbose
also can you post the output of the following commands?:
sg_raw -r 64k <PATH_TO_CHANGER_DEVICE> B8 11 00 00 ff ff 00 ff ff ff 00 00
sg_raw -r 64k <PATH_TO_CHANGER_DEVICE> B8 12 00 00 ff ff 00 ff ff ff 00 00
sg_raw -r...
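for context: opcode 0xB8 is READ ELEMENT STATUS, and byte 1 of those CDBs combines the VOLTAG bit (0x10, request volume tags) with the element type code (1 = medium transport, 2 = storage). a small decoder sketch, based on my reading of the SMC spec (the function is just an illustration):

```python
def decode_res_cdb(cdb: bytes):
    """Decode a READ ELEMENT STATUS CDB (SMC opcode 0xB8) -- sketch."""
    assert cdb[0] == 0xB8
    voltag = bool(cdb[1] & 0x10)             # request volume tags
    element_type = cdb[1] & 0x0F             # 1 = medium transport, 2 = storage
    start = int.from_bytes(cdb[2:4], "big")  # starting element address
    count = int.from_bytes(cdb[4:6], "big")  # number of elements
    alloc = int.from_bytes(cdb[7:10], "big") # allocation length
    return voltag, element_type, start, count, alloc

# the first command from above
cdb = bytes.fromhex("b8 11 00 00 ff ff 00 ff ff ff 00 00")
print(decode_res_cdb(cdb))  # (True, 1, 0, 65535, 16777215)
```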
do they report as different pci devices? if yes, are they in different iommu groups? if no to any of those questions, then i fear it's not possible.
you can "passthrough" the individual disks though, it's not as elegant, and you don't get direct access to the hw in the guest (so no smart, etc.)...