Search results

  1. Kernel source, Documentation folder?

    @t.lamprecht - thank you for that info. Is there a way for the make command to take both paths into consideration, given that things are split out like this? At the moment, if I use the submodule's kernel source path I get a "No utsrelease" failure. (See the headers-based build sketch after this list.)
  2. Kernel source, Documentation folder?

    Ah not to worry, I just created a blank file in that path to get by instead.
  3. Kernel source, Documentation folder?

    Hi, I'm just attempting to compile a driver, but it's looking for the Documentation folder in the kernel source root path. This doesn't appear to be present in the PVE source I downloaded - any suggestions?
  4. VF (Virtual Function Passthrough) standard practice?

    With GPUs the norm is to exclude the PCIe card from being initialised/having drivers loaded in the host so that it can be passed through to VMs without issue (see the vfio-pci sketch after this list). Does the above also apply to SR-IOV VFs? Although currently I don't, and passing them through to VMs even after being...
  5. Any chance of Proxmox Chelsio inbox drivers coming with all bells and whistles as standard?

    [ 18.028094] iw_cxgb4: Chelsio T4/T5 RDMA Driver - version 0.1
    [ 20.944513] iw_cxgb4: 0000:88:00.4: Up
    [ 20.944516] iw_cxgb4: 0000:88:00.4: On-Chip Queues not supported on this device
    ...that appears to be quite an old version?
  6. Proxmox 6.2 - nvidia gpu passthrough crashes after nvidia driver update

    From what I gather from QEMU discussions, there's been a lot of work going on in that department, although it's outside the scope of the Proxmox devs. All in all there will be knock-on discoveries, new configuration requirements etc., and much of this information doesn't get out there until people...
  7. Proxmox 6.2 - nvidia gpu passthrough crashes after nvidia driver update

    Yes, it was the same flag 'MSISupported'. I also believe it is something to do with the latest Proxmox/KVM; only the guru devs will be able to shed some light on this as and when they get a chance to take a closer look. My setup is a scsi-passthrough-single controller plus a...
  8. Proxmox 6.2 - nvidia gpu passthrough crashes after nvidia driver update

    So basically, after I updated the NVIDIA drivers to 446 (subsequently also went back to 445 - same issue!), Windows crashes with the infamous "VIDEO TDR FAILURE" BSOD... Now, I managed to get things going again by booting the Win10 guest into safe mode and enabling "MSI MODE" for the GPU...
  9. Proxmox VE 6.1 + VM con W10 + VirtIO

    When we look across the web at KVM running Windows guests, people all over have problems, mostly related to latency issues. In most cases, pinning cores and turning off power-saving measures from BIOS to kernel etc. seem to be the go-to routes to improvement. Then there's also disabling...
  10. Proxmox VE 6.1 + VM con W10 + VirtIO

    The house is wired up for 10G... I use 100G with a ToR switch for the interlink between all the nodes in the rack. I didn't report it as a bug because it doesn't appear to be one - it was simply disabled in the config. Maybe because upstream it is still considered 'experimental', but where I'm...
  11. Proxmox VE 6.1 + VM con W10 + VirtIO

    Good plan @AlfonsL :cool: Just for you, I thought I'd give it a shot and installed Windows 10 as an EFI install on my Proxmox testbed server (DL380 G9)... gave it a qcow storage file stored on my Hyper-V SMB server across a 100G Chelsio T6 RDMA link, chucked in a single-slot NVIDIA 1650...
  12. Proxmox VE 6.1 + VM con W10 + VirtIO

    There are a few discussions out there... here's one from a couple of years ago, but it will require you to recompile the kernel with certain flags set. https://www.reddit.com/r/VFIO/comments/84181j/how_i_achieved_under_1ms_dpc_and_under_025ms_isr/ However, it doesn't go into details on how to...
  13. Proxmox VE 6.1 + VM con W10 + VirtIO

    No problem, I do have a gazillion tweaks myself, and honestly I've lost track of half of them!... but nowadays I'm trying to keep all my tweaks in a script so that I can easily take it across to other identical server setups. I also have a Hyper-V environment, and that is rock solid, been running...
  14. Proxmox VE 6.1 + VM con W10 + VirtIO

    I think if you sign up for a subscription/paid support, then the techies will come to your rescue. Otherwise, the best thing you can do is browse these forums and Google (which I'm guessing you already are, based on the links you mentioned). Performance totally depends on hardware and the...
  15. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    So now that I've confirmed the performance advantages of SMB Direct, what I would like to do is enable RDMA in the Proxmox GUI itself so that I can at least keep to the codebase and just rely on patch files instead. Any idea where the code is in Proxmox that creates the CIFS mounts? Update...
  16. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    WOW! I cannot believe how much of a performance increase this has now brought to my setup. I've been comparing Ceph to ZFS to physical disks to locally hosted qcows and raws etc... and it seems that using a bare-metal Windows Hyper-V host with a CIFS share via SMB Direct with RDMA gives a huge increase...
  17. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    OK, I managed to get a kernel building and working. Could some nice gent/lady point me in the direction of the appropriate file where I can insert the kernel config option CONFIG_CIFS_SMB_DIRECT=y? Thank you! (See the config sketch after this list.)
  18. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    Hi, I'm homelabbing Proxmox and thought I'd manually create a CIFS share. It works fine, but when I add the rdma parameter to the mount.cifs command, it fails to create the share. I checked dmesg and I see 'CIFS VFS: CONFIG_CIFS_SMB_DIRECT is not enabled'. So it appears the CIFS module was... (See the mount sketch after this list.)
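
For results 1-3 (the missing Documentation folder and the "No utsrelease" failure), a minimal sketch of building an out-of-tree module against the installed Proxmox kernel headers rather than a raw source checkout; this is a common workaround, not the thread's confirmed solution, and "mydriver" is a hypothetical module directory.

    # Install headers matching the running PVE kernel, then point kbuild at them.
    apt install pve-headers-$(uname -r)
    cd ~/mydriver
    # -C uses the installed headers tree (which ships utsrelease.h); M= is the module source dir.
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules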
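
For result 4, a minimal sketch of the host-side exclusion the post calls the norm for full-card GPU passthrough: bind the device to vfio-pci and keep the host driver off it. The PCI IDs and file names are placeholders; whether SR-IOV VFs need the same treatment is exactly the open question in that thread.

    # Placeholder vendor:device IDs - substitute the output of 'lspci -nn' for your card.
    echo "options vfio-pci ids=10de:1f82,10de:10fa" > /etc/modprobe.d/vfio.conf
    # Keep the host GPU driver from claiming the card first.
    echo "blacklist nouveau" > /etc/modprobe.d/blacklist-gpu.conf
    update-initramfs -u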
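
For result 17, a minimal sketch of flipping the option in a kernel source tree using the in-tree scripts/config helper; the directory name is an assumption, and this does not describe the pve-kernel packaging layout.

    cd linux-source                                # your kernel source checkout (placeholder path)
    scripts/config --enable CONFIG_CIFS_SMB_DIRECT
    make olddefconfig                              # let kconfig resolve any newly exposed dependencies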
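
For result 18, a minimal sketch of the kind of manual SMB Direct mount described there; the server, share, mount point and credentials are placeholders, and the rdma option only succeeds once the running kernel's CIFS module was built with CONFIG_CIFS_SMB_DIRECT=y.

    # vers=3.1.1 (or 3.02+) is required for SMB Direct; rdma requests the RDMA transport.
    mount -t cifs //hyperv-host/share /mnt/smbdirect \
        -o rdma,vers=3.1.1,username=user,password=secret
    # On failure, dmesg reports 'CIFS VFS: CONFIG_CIFS_SMB_DIRECT is not enabled' if the module lacks support.
    dmesg | tail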