Haswell-IBRS fails to boot - the host is missing bits 4 & 11.
Haswell-noTSX-IBRS boots.
KVM64 boots as well, and seems to have better performance; but this is purely subjective, not actually measured. Either option definitely performed better than x86-64-v2-AES.
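For anyone following along: the CPU model can be switched from the CLI as well as the web UI. A minimal sketch, assuming VM ID 100 (an assumption; the VM must be fully powered off, not just rebooted, for the change to take effect):

```
# Switch the VM's CPU model (VM ID 100 is illustrative)
qm set 100 --cpu Haswell-noTSX-IBRS
```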
Now...
It was imaged from a real machine, so I'm stuck trying to make it work per the image.
I'm willing to futz with it, but just need to know where to start.
PVE 8.0.4 hosting a W11 VM that works just fine on the x86-64-v2-AES processor type.
However, I would like to change the processor type to host, or at least x86-64-v4, for the improved instruction set.
Just changing the processor causes the W11 VM to bluescreen or not even boot.
Has anyone done...
Thank you for this nudge in the right direction.
One thing I didn't realize in my original post was the need for a guest application's access to the device /dev/video0 (in my case). And "most" applications using webcams need /dev/video0 owned by root:video.
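In case it helps anyone: that ownership can be made persistent inside the guest with a udev rule instead of chown-ing after every boot. A sketch, assuming a standard video4linux device (the filename 99-webcam.rules is arbitrary):

```
# /etc/udev/rules.d/99-webcam.rules (illustrative)
# Owned root:video, group-writable - matches what most webcam apps expect
SUBSYSTEM=="video4linux", KERNEL=="video*", GROUP="video", MODE="0660"
```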
So the answer lies in...
PVE 7.1-10
On the host:
root@pve:~# lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 005: ID 046d:0990 Logitech, Inc. QuickCam Pro 9000
root@pve:~# ls -l /dev/bus/usb/003/005
crw-rw-r-- 1 root root 189, 260 Mar 26 21:18 /dev/bus/usb/003/005
root@pve:~# cat...
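For reference, if the guest is an LXC, the device node can be bind-mounted into the container with config lines like these (a sketch; the major number 81 is standard for video4linux, but verify with `ls -l /dev/video0` on your host):

```
# /etc/pve/lxc/<CTID>.conf (illustrative)
lxc.cgroup2.devices.allow: c 81:* rwm
lxc.mount.entry: /dev/video0 dev/video0 none bind,optional,create=file
```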
Yeah, not seeing it. Both the VM disk and the CT Volume are on the same zfspool.
Seems like they are two completely different methods of providing storage.
Maybe I'm not searching for the right words or something.
PVE 7.1-10
I'm moving some VMs to LXCs. They all have large storage disks attached (>4T).
What's the fastest way to move the content from the VMs' disks to the CTs' volumes?
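To frame the question, here is one workable (if not necessarily fastest) approach on ZFS: mount the stopped VM's zvol on the host and rsync into the CT's mount point. All names here (rpool/data/vm-100-disk-1, CT 200, /mnt/data) are assumptions:

```
# Expose the stopped VM disk's filesystem on the host (illustrative names)
mkdir -p /mnt/vmdisk
mount /dev/zvol/rpool/data/vm-100-disk-1-part1 /mnt/vmdisk

# Mount the container's volumes and copy, preserving attributes/ACLs/xattrs
pct mount 200
rsync -aHAX --info=progress2 /mnt/vmdisk/ /var/lib/lxc/200/rootfs/mnt/data/
pct unmount 200
umount /mnt/vmdisk
```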
PVE 7.1-10
I'm noticing that the Disk IO graph on the Summary page of an LXC does not reflect activity on a Mount Point.
Is this by design?
Is it reflective of the root dir only?
Where/how can I isolate Disk IO of a specific LXC?
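One place I've been poking at on the host, assuming cgroup v2 and container ID 101 (both assumptions):

```
# Per-container block IO counters (cgroup v2 path; CT 101 is illustrative)
cat /sys/fs/cgroup/lxc/101/io.stat
```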
I need to run multiple Linux desktop environments as VM's and need maximum desktop performance.
If I install a normal NVIDIA graphics card (as opposed to a headless GPU) in the server/host instead of using the built-in graphics, and do the pass-through to the LXC:
1- Will the LXC desktop...
Well I'll be dipped, it was that easy. I didn't even have to know the IDs of the drives; it just found them by the pool name the drives were already associated with. Pretty cool.
Once I shut down all the VMs that were using the pool and disabled the pool in the Proxmox web UI under...
Thanks for the direction here.
I'm having great success with this: https://forum.proxmox.com/threads/gpu-general-concept-questions.97947/post-450823
Your application is all headless; do you have any experience with GUI performance in those LXCs? (See the question in the post above.)
Okay, having tremendous success here.
Am able to pass through the RTX 6000 to multiple LXCs, each adding its encode/decode load as needed. Pretty cool.
The pixie dust comes in ensuring you:
- Install the same version of the Nvidia driver on the host as in the container
- Install the Nvidia...
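The steps above are typically paired with LXC config lines along these lines (a sketch, not my exact config; the device majors vary by host, so confirm them with `ls -l /dev/nvidia*` before copying):

```
# /etc/pve/lxc/<CTID>.conf (illustrative; confirm majors on your host)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```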
I must be doing something wrong.
It installs okay and runs as root, but I can't add any users, and several (if not all) of the system files that would normally be owned by root:root are owned by ubuntu:ubuntu.
Not sure what that's all about.