Correct. vTPM won't utilize a physical TPM. The only purpose for a "real" TPM would be secure boot on the host, which I do not believe PVE (yet) supports.
Roughly the average. I discounted the major outlier values and settled on the most consistent results for each drive type. This approach has worked well for me. I have all types (NVMe, SSD, HDD).
I benchmark all of my drives multiple times and then set a consistent value for all drives of the same type across the cluster. The variance between them is easily explained by differences at the time of benchmarking (which happens automatically when you upgrade or install Ceph).
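If you want to see what Ceph recorded during that automatic benchmark before overriding anything, something along these lines should work (osd.0 is just an example OSD ID):
# all mclock capacity values currently stored in the mon config database
ceph config dump | grep osd_mclock_max_capacity_iops
# the value a specific running OSD is using
ceph config show osd.0 osd_mclock_max_capacity_iops_ssd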
Here is the Ceph Manual. To set a custom mclock iops value, use the following command:
ceph config set osd.N osd_mclock_max_capacity_iops_[hdd,ssd] <value>
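For example, assuming osd.3 is an SSD and your benchmarking settled on roughly 21500 IOPS (substitute your own OSD ID and value, and repeat for each OSD):
ceph config set osd.3 osd_mclock_max_capacity_iops_ssd 21500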
What type of drives are these?
Do vGPU drivers allow host access to the GPU, i.e. do you retain the ability to use the GPU in LXC containers? I have GPUs that are used in multiple LXC containers, and it would be nice to also leverage vGPU in a VM, but I'm not sure whether the drivers permit both.
One (possibly) minor bug:
The GUI throws a health warning that "Telemetry requires re-opt-in", even though I have opted in multiple times. This occurs on multiple nodes. A reboot doesn't clear the flag.
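For anyone trying to reproduce this, the CLI equivalent of the re-opt-in and the subsequent checks would be roughly the following (standard Ceph telemetry commands; newer releases require the explicit --license flag):
# opt back in to telemetry
ceph telemetry on --license sharing-1-0
# confirm the module reports telemetry as enabled
ceph telemetry status
# see whether the telemetry warning clears
ceph health detail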
That doesn't change the underlying problem. The driver is from 2017, when the newest kernel available was 4.13. The driver has never been tested against kernel 5 and simply may not work. I understand that support was dropped for Kepler; however, you can use drivers up to 470.129.06.