Yes it was the same flag 'MSISupported'.
Yes, I also believe it has something to do with the latest Proxmox/KVM; only the guru devs will be able to shed some light on this as and when they get a chance to take a closer look.
My setup is a scsi-passthrough-single controller plus a...
So basically, after I updated the Nvidia drivers to 446 (and subsequently went back to 445 with the same issue!), Windows crashes with the infamous "VIDEO TDR FAILURE" BSOD...
Now, I managed to get things going again by booting the Win10 guest into safe mode and enabling "MSI mode" for the GPU...
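For anyone wanting to flip the same switch by hand: the flag lives in the guest's registry under the GPU's device entry (the <device-instance> part below is specific to your card; this is just for illustration):

HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
MSISupported (DWORD) = 1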
Looking across the web at KVM running Windows guests, people all over have problems, mostly related to latency. In most cases, pinning cores and turning off power-saving measures everywhere from the BIOS to the kernel seem to be the go-to routes to improvement (the pinning part can look something like the sketch below). Then there's also disabling...
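A rough sketch of the pinning side (VMID and core numbers are just examples; assumes bash on the Proxmox host and the usual "CPU n/KVM" vCPU thread naming):

VMID=100                                   # example VM
VMPID=$(cat /var/run/qemu-server/${VMID}.pid)
HOSTCORES=(2 3 4 5)                        # example host cores, one per vCPU
i=0
for tid in $(ps -T -p "$VMPID" -o spid=,comm= | awk '/CPU .*KVM/ {print $1}'); do
    taskset -cp "${HOSTCORES[$i]}" "$tid"  # pin this vCPU thread to one host core
    i=$((i+1))
done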
The house is wired up for 10G... I use 100G with a ToR switch to interlink all the nodes in the rack.
I didn't report it as a bug because it doesn't appear to be one; it was simply disabled in the config. Maybe that's because upstream it is still considered 'experimental', but where I'm...
Good plan @AlfonsL :cool:
Just for you, I thought I'd give it a shot and installed Windows 10 as an EFI install on my Proxmox testbed server (DL380 G9)... gave it a qcow storage file stored on my HyperV SMB server across a 100G Chelsio T6 RDMA link, chucked in a single-slot nvidia 1650...
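In VM-config terms that boils down to something like the below (the storage name, PCI address, and sizes are made up for illustration; the keys themselves are standard /etc/pve/qemu-server/<vmid>.conf entries):

bios: ovmf
machine: q35
efidisk0: hvsmb:100/vm-100-disk-1.qcow2,size=4M
scsihw: virtio-scsi-single
scsi0: hvsmb:100/vm-100-disk-0.qcow2,size=64G
hostpci0: 04:00,pcie=1,x-vga=1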
There are a few discussions out there... here's one from a couple of years ago... but it will require you to recompile the kernel with certain flags set.
https://www.reddit.com/r/VFIO/comments/84181j/how_i_achieved_under_1ms_dpc_and_under_025ms_isr/
However, it doesn't go into detail on how to...
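The flags people usually mean in that context are the preemption model and timer frequency (CONFIG_PREEMPT, CONFIG_HZ_1000), though I can't say for certain those are exactly what that post used. You can at least check what the stock PVE kernel ships with:

grep -E 'CONFIG_PREEMPT|CONFIG_HZ' /boot/config-$(uname -r)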
No problem, I have a gazillion tweaks myself, and honestly I've lost track of half of them!... But nowadays I'm trying to keep all my tweaks in a script so that I can easily carry it across to other identical server setups. I also have a Hyper-V environment, and that is rock solid, been running...
I think if you sign up for a subscription/paid support, then the techies will come to your rescue.
Otherwise, the best thing you can do is browse these forums and Google (which I'm guessing you're already doing, based on the links you mentioned).
Performance totally depends on hardware and the...
So now that I've confirmed the performance advantages of SMB Direct, what I'd like to do is enable RDMA in the Proxmox GUI itself, so that I can at least stay close to the codebase and just rely on patch files instead. Any idea where in the Proxmox code the CIFS mounts get created?
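(If anyone else goes hunting for the same thing, grepping the installed storage plugins should narrow it down; assuming the stock package layout, CIFSPlugin.pm in there looks like the obvious candidate:)

grep -rln cifs /usr/share/perl5/PVE/Storage/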
Update...
WOW! Cannot believe how much of a performance increase this has now brought to my setup.
I've been comparing Ceph to ZFS to physical disks to locally hosted qcows and raws, etc... and it seems that using a bare-metal Windows Hyper-V host with a CIFS share via SMB Direct with RDMA gives a huge increase...
OK, I managed to get a kernel building and working.
Could some nice gent/lady point me in the direction of the appropriate file where I can insert this kernel config option:
CONFIG_CIFS_SMB_DIRECT=y
Thank you!
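(For a plain upstream source tree, the generic way would be something like the below; I just haven't worked out yet where the Proxmox pve-kernel packaging keeps its config, so treat this as a sketch:)

cd /path/to/kernel-source                  # wherever the tree being built lives
./scripts/config --file .config --enable CIFS_SMB_DIRECT
make olddefconfig                          # resolve dependencies (CIFS and the InfiniBand/RDMA options need to be enabled too)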
Hi, I'm homelabbing with Proxmox and thought I'd manually create a CIFS share. That works fine, but when I add the rdma parameter to the mount.cifs command, it fails to create the share. I checked dmesg and I see 'CIFS VFS: CONFIG_CIFS_SMB_DIRECT is not enabled'.
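For reference, the mount attempt is something along these lines (server, share, and mount point are just placeholders here; SMB Direct needs a 3.x dialect):

mount -t cifs //hyperv-host/share /mnt/smbdirect -o rdma,vers=3.1.1,username=someuser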
So it appears the CIFS module was...