Search results

  1. Proxmox VE 6.1 + VM con W10 + VirtIO

    When we look across the web at KVM running Windows guests, people all over report problems, mostly related to latency; in most cases pinning cores and turning off power-saving measures, from BIOS to kernel, seem to be the go-to routes to improvement. Then there's also disabling... (a core-pinning and governor sketch follows after this list).
  2. Proxmox VE 6.1 + VM con W10 + VirtIO

    The house is wired up for 10G... I use 100G with a ToR switch to interlink all the nodes in the rack. Didn't report it as a bug because it doesn't appear to be a bug; it was simply disabled in the config. Maybe because upstream it is still considered 'experimental', but where I'm...
  3. Proxmox VE 6.1 + VM con W10 + VirtIO

    Good plan @AlfonsL :cool: Just for you, I thought I'd give it a shot and installed Windows 10 as an EFI install on my Proxmox testbed server (DL380 G9)... gave it a qcow storage file stored on my Hyper-V SMB server across a 100G Chelsio T6 RDMA link, chucked in a single-slot NVIDIA 1650... (an example of the matching VM settings follows after this list).
  4. Proxmox VE 6.1 + VM con W10 + VirtIO

    There are a few discussions out there... here's one from a couple of years ago... but it will require you to recompile the kernel with certain flags set. https://www.reddit.com/r/VFIO/comments/84181j/how_i_achieved_under_1ms_dpc_and_under_025ms_isr/ However, it doesn't go into details on how to...
  5. Proxmox VE 6.1 + VM con W10 + VirtIO

    No problem, I do have a gazillion tweaks myself, and honestly I've lost track of half of them!... but nowadays I'm trying to keep all my tweaks in a script so that I can easily carry them across to other identical server setups. I also have a Hyper-V environment, and that is rock solid, been running...
  6. Proxmox VE 6.1 + VM con W10 + VirtIO

    I think if you sign up for a subscription/paid support, the techies will come to your rescue. Otherwise the best thing you can do is browse these forums and Google (which I'm guessing you already are, based on the links you mentioned). Performance totally depends on hardware and the...
  7. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    So now that I've confirmed the performance advantages of SMB Direct, what I would like to do is enable RDMA in the Proxmox GUI itself, so that I can at least keep to the codebase and just rely on patch files instead. Any idea where the Proxmox code that creates the CIFS mounts lives? Update... (a quick way to locate it follows after this list).
  8. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    WOW! Cannot believe how much of a performance increase this has brought to my setup. Been comparing Ceph to ZFS to physical disks to locally hosted qcows and raws, etc... and it seems that using a bare-metal Windows Hyper-V host with a CIFS share via SMB Direct with RDMA gives a huge increase...
  9. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    OK, I managed to get a kernel building and working. Could some nice gent/lady point me toward the appropriate file where I can insert the kernel config CONFIG_CIFS_SMB_DIRECT=y? Thank you! (One generic way to set it is sketched after this list.)
  10. Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

    Hi, I'm homelabbing Proxmox, and thought I'd manually create a CIFS share. That works fine, but when I add the rdma parameter to the mount.cifs command, it fails to create the share. I checked dmesg, and I see 'CIFS VFS: CONFIG_CIFS_SMB_DIRECT is not enabled'. So it appears the CIFS module was... (a mount sketch follows after this list).
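
The core-pinning and power-saving tweaks mentioned in result 1 usually come down to a couple of host-side commands. A minimal sketch, assuming VM ID 100 and host cores 2-5 reserved for the guest (the VM ID, core list, and the coarse whole-process pinning are illustrative placeholders, not settings taken from the thread):

    # Switch all host cores to the performance governor (package: linux-cpupower)
    cpupower frequency-set -g performance

    # Coarse pinning: restrict every thread of the VM 100 QEMU process to cores 2-5.
    # Finer setups pin individual vCPU thread IDs instead.
    VMPID=$(cat /var/run/qemu-server/100.pid)
    taskset -acp 2-5 "$VMPID"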
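
Result 3 describes an EFI Windows 10 guest with a qcow disk on SMB-backed storage and a passed-through GPU. A rough equivalent with qm, assuming a hypothetical VM 101, a storage named "hvsmb", and the GPU at PCI address 01:00 (all placeholders):

    # OVMF firmware and a q35 machine type for the EFI install
    qm set 101 --bios ovmf --machine q35
    qm set 101 --efidisk0 hvsmb:1

    # 64 GB qcow2 system disk on the SMB-backed storage
    qm set 101 --scsi0 hvsmb:64,format=qcow2

    # Pass the GPU through as a PCIe device and use it as the primary display
    qm set 101 --hostpci0 01:00,pcie=1,x-vga=1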
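
For the question in result 7 about where Proxmox builds its CIFS mounts, the storage plugins installed on a node are plain Perl and easy to search; a sketch (the exact layout can vary between pve-storage versions):

    # Find the CIFS mount logic among the installed storage plugins
    grep -rn -i "cifs" /usr/share/perl5/PVE/Storage/ | grep -i mount

    # On PVE 6.x this typically lands in CIFSPlugin.pm; an extra mount option
    # such as rdma would be patched into the option list built there.
    less /usr/share/perl5/PVE/Storage/CIFSPlugin.pm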
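
For result 9, the generic way to flip a single option in a kernel tree that already builds is scripts/config followed by olddefconfig; a sketch, assuming you are inside the kernel source directory used by the pve-kernel build:

    # Enable SMB Direct support in the CIFS module and resolve dependent options
    ./scripts/config --file .config --enable CONFIG_CIFS_SMB_DIRECT
    make olddefconfig

    # Confirm the option stuck before rebuilding the kernel packages
    grep CONFIG_CIFS_SMB_DIRECT .config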
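
And for result 10, once a kernel with CONFIG_CIFS_SMB_DIRECT=y is running, the rdma option on the mount is what turns SMB Direct on; a sketch with placeholder server, share, mount point, and credentials file:

    # //fileserver/share, /mnt/smbdirect and the credentials file are placeholders
    mount -t cifs //fileserver/share /mnt/smbdirect \
        -o rdma,vers=3.1.1,credentials=/root/.smbcred

    # dmesg should no longer complain that CONFIG_CIFS_SMB_DIRECT is not enabled
    dmesg | tail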