Like CRCinAU, I've issued SMTP certs successfully, but a checktls.com test reports certificate failures due to a self-signed certificate.
The debug output does indeed show the Proxmox Mail Gateway self-signed certificate. I never issued API certs, but I do have SMTP certs.
Mail Gateway 8.0.7
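For anyone wanting to double-check which certificate the gateway is actually serving on port 25, a quick test from a shell (mail.example.com is a placeholder for your PMG hostname):
openssl s_client -connect mail.example.com:25 -starttls smtp -showcerts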
I want to add that you should remove the PCI ID spoofing, in addition to setting frl_enabled to disabled (frl_enabled = 0), when you try the vGPU GRID driver in your profile override conf.
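A minimal sketch of what I mean, assuming a vgpu_unlock-rs style profile_override.toml (the path and profile name below are placeholders, adjust them to your setup):
# /etc/vgpu_unlock/profile_override.toml (assumed path)
[profile.nvidia-55]   # your profile name here
frl_enabled = 0
# remove or comment out any PCI ID spoofing entries from this section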
If you went that route, like I initially did, due to licensing costs, I would note that my...
Try the driver included in the vGPU KVM driver bundle that you downloaded for the host. It will give you 20 minutes before the driver disables features and starts acting like trash, but 20 minutes should be long enough to verify your needs. If that works, then I can help you get licensing set up...
I can't speak to PVE 8 issues. I won't upgrade until drivers catch up for my network hardware. Wish we had an LTS edition...
Anyway, what driver are you using in the VM?
I think the frl_enabled config should be on (1) or off (0).
It looks like you're spoofing the PCI ID and NVIDIA may be making...
It looks like it's a Windows driver issue with kvm64 and vmx. The Windows guest OS crashes once I add the vmx flag to the CPU args; it won't even boot into the OS installer. Linux, on the other hand, handles it fine.
But again, either your disks are performing slowly or something else is going on that I cannot...
You've sparked my interest, so I am installing a fresh Win 10 VM with the args added to a default kvm64 CPU:
args: -cpu kvm64,vmx=on
I'll let you know what I find out.
Because I'm not on PVE 8, I can't tell you if kvm64 = x86-64-v2-[AES],
or if it ought to be args: -cpu x86-64-v2-[AES],vmx=on
vmx is the flag needed for nested virtualization.
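A quick sanity check on the PVE host side first, assuming an Intel CPU (use kvm_amd and look for svm on AMD):
cat /sys/module/kvm_intel/parameters/nested   # should print Y or 1
# if it does not, enable nesting and reload the kvm_intel module:
echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm-intel.conf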
args: -cpu kvm64,vmx=on
The guest does report the vmx flag once it is added (and it is absent without that argument in the conf), but you may need to install Win 10 from scratch with that CPU change. My VM wanted to boot loop into repair with that CPU change.
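For what it's worth, the quickest check I know of inside a Linux guest (on Windows, Sysinternals coreinfo -v is the rough equivalent):
grep -c -w vmx /proc/cpuinfo   # non-zero means the vmx flag is exposed to the guest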
But the...
Can you try these changes to your VM conf file: add the args line and adjust the CPU type to Broadwell:
args: -cpu Broadwell,vmx=on
cpu: Broadwell
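If you prefer not to edit the conf by hand, the CPU type can also be set with qm (100 below is just an example VM ID); the args line itself I still add directly in /etc/pve/qemu-server/100.conf:
qm set 100 --cpu Broadwell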
PS: I'm not qualified to answer your question concerning kvm64 or the other CPU types.
So I'm not seeing the sluggishness you are reporting in the Windows Hyper-V "host" VM or in the nested guest VM inside it. My test is on a PVE 7.4-13 node.
I just enabled CPU type host with no flags set. My VM boots fine and Hyper-V is now installed. I will install a VM in Hyper-V to verify.
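For clarity, the only relevant line from my test VM's conf (a sketch; the rest of the conf is omitted):
cpu: host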
I think something else is amiss that is causing your VM to be so slow. My CPUs are v2 of the same E5 series.
What problem are you hitting in Hyper-V?
What flags are you using? I haven't had the same problems since my earlier reply, when we discussed using the other OS type. See the first page of that thread.
Have you tried playing around with the CPU type to match your physical CPU type? For example, I've had success with host and Penryn...
I experienced some "anticheat" VM detection in Lost Ark that could only be resolved by making the virtual hardware components look like physical ones, such as CPU type host, an LSI SCSI controller, and an Intel network adapter.
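A rough sketch of the conf lines I mean (the MAC address and bridge are placeholders):
cpu: host
scsihw: lsi
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0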
If you are expecting low latency through the noVNC console then you will be...
Thanks @dcsapak, that did clarify my misunderstanding. I was not aware LXD involved VMs. I had believed LXD was an Ubuntu flavor of LXC, and my LXC experience is almost entirely from use with Proxmox.
I think this is technically possible as loom shared in his reply above. With some exclusions...
I wish to attach my mediated (mdev) devices to LXC containers.
Since PVE uses its own tooling (pct), are there equivalent commands to query mdev devices, like lxc info --resources, that would display the information I need?
Example:
~$ lxc info --resources
GPUs:
Card 0:
NUMA node: 0
Vendor...
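For reference, the underlying information is also available via sysfs; a sketch using the standard mdev paths (the PCI address below is just an example, and <type> is whichever mdev type name is listed):
ls /sys/bus/pci/devices/0000:af:00.0/mdev_supported_types/
cat /sys/bus/pci/devices/0000:af:00.0/mdev_supported_types/<type>/name
cat /sys/bus/pci/devices/0000:af:00.0/mdev_supported_types/<type>/available_instances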
I briefly attempted to patch the DKMS driver to support higher kernel versions. I made more progress than my original post...
I made some modifications that were said to work with Proxmox and kernel 5.10.
You can have a look at what I've done at MLNX_OFED 4.9-4.1.7.0 LTS for Debian 11
I could...
I am trying to build OFED drivers from source for my ConnectX-3 Pro cards so that I can run one port as 40/56 Gb InfiniBand and the second port as 40 Gb Ethernet.
This configuration requires OFED drivers as far as I understand; mlx4_core and mlx4_en won't work simultaneously, as far as I am aware -...
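For anyone landing here later: with the in-tree mlx4_core driver, the port types can at least be set per port through a module parameter (1 = IB, 2 = ETH), though as far as I can tell that still doesn't cover everything the OFED build does:
# /etc/modprobe.d/mlx4.conf -- port 1 as InfiniBand, port 2 as Ethernet
options mlx4_core port_type_array=1,2
update-initramfs -u   # then reboot, or reload the mlx4 modules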