Search results

  1. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    I originally had a bunch of LXCs a while back, but went with a single VM and now run Docker inside the VM. Maintaining the apps by pulling new images via Docker was a lot easier than whatever installation file/script was needed for each individual app. What's nice is in my docker compose, I have a...
  2. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    I never tried merging or dealing with LXC; it felt too fragile. I like the ability to live migrate things. Now that we can do that with these mdevs (I tried it and it is working), I might try passing one to the VM that runs Emby and see if I can get transcoding working.
  3. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    It seemed like it was working with HW spoofing. I was using CodeProject.AI with CUDA 12 for image processing, and the times matched what they were back when I was using the older v16 drivers. Either way, it was still good to figure out how to bypass everything.
  4. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    Indeed, that is what I was missing. I then restarted the NVIDIA services but couldn't get my VM to start. I restarted the host, and when I went to start the VM it complained about my mdev selection. I didn't realize it now presents the A5500 profiles to mdev (if I read more on the unlocking...
  5. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    I've spent some time trying to figure this out, but I might be missing something. I updated my vgpu_unlock-rs to GreenDam's fork, did the cargo build step, and rebooted. I installed the 17.5 host drivers and copied the vgpuConfig.xml over. I am able to see my P4 with nvidia-smi and I can see the...
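The rebuild-and-reinstall sequence described in this post might look roughly like the sketch below. Everything here is an assumption for illustration: the repository path, the installer filename, and the config location are the conventional ones for vGPU host drivers, not details confirmed in the thread.

```shell
# Hedged sketch of the steps described above; paths and filenames are assumptions.
cd /opt/vgpu_unlock-rs
git pull                          # pick up GreenDam's changes
cargo build --release             # rebuild the unlock library

# Install the 17.5 host driver (filename below is a placeholder)
./NVIDIA-Linux-x86_64-vgpu-kvm.run --dkms

# Copy over the vgpuConfig.xml that carries the Pascal profiles
cp vgpuConfig.xml /usr/share/nvidia/vgpu/vgpuConfig.xml

# Restart the vGPU services and confirm the P4 is visible
systemctl restart nvidia-vgpud nvidia-vgpu-mgr
nvidia-smi
```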
  6. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    Did you also need to copy the vgpuConfig.xml over from the v16 drivers? I'm going thru the process now to try out v17 with everything.
  7. [SOLVED] vGPU just stopped working randomly (solution includes 6.14, pascal fixes for 17.5, changing mock p4 to A5500 thanks to GreenDam )

    Slight threadjack: what version of which driver are you running now? With kernel 6.14 and patched 16.9 drivers, when I start up a VM that has my P4 passed thru (not using the A5500 patch yet), I see this on the host: Note: I do not get this with kernel 6.11 on the host (just 6.14). In...
  8. Tesla P4 | Cannot get drivers installed at all!!!

    I believe you're correct. I never tried merging, only saw it in passing. I've only ever used my P4 passed into a VM, not with any LXC containers. Thanks for the note about the P4-to-A5500 patch. I need to check that out.
  9. Tesla P4 | Cannot get drivers installed at all!!!

    Kernels 6.8 and newer are not supported by the NVIDIA-provided drivers; you have to patch them to get them to build correctly. That being said, I don't think the patches exist for the v18 drivers, just v16 and v17. I'm not sure I'm allowed to link to it, but search...
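The patching workflow alluded to here typically goes through the installer's own patch hook. A hedged sketch, in which the patch filename is hypothetical and the community patch itself is not reproduced:

```shell
# Hypothetical example: baking a community kernel-compat patch into the installer.
chmod +x NVIDIA-Linux-x86_64-vgpu-kvm.run
./NVIDIA-Linux-x86_64-vgpu-kvm.run --apply-patch kernel-6.8-build-fix.patch

# --apply-patch does not install; it writes a new "-custom.run" installer
# with the patch applied. Run that one on the host instead of the original.
./NVIDIA-Linux-x86_64-vgpu-kvm-custom.run --dkms
```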
  10. The Reasons for poor performance of Windows when the CPU type is host

    This is interesting. I've been using host and stumbled on this thread. Some quick tests below. Host: EPYC 7302P, 256GB memory, nothing running on this node but my test VM, Proxmox 8.4.1 with the opt-in kernel 6.14.0-2-pve. VM: Windows 11 Pro for Workstations (24H2, 26100.3775), fully updated as of...
  11. Proxmox VE 8.4 released!

    I was able to migrate my Tesla P4 between 2 nodes. It takes a moment because of the constantly changing video memory. I do have some issues I'm investigating, unrelated to migration. Using the patched driver on the opt-in kernel, I see this on my host: WARNING: CPU: 2 PID: 631019 at...
  12. Proxmox VE 8.4 released!

    I just edited my post with a screen capture showing that option is disabled. I even removed the mapping and tried to add it again, and it wasn't enabled. Maybe these old cards aren't supported, or I'm missing some other option to enable elsewhere? Edit: Yup, it was on the top-level mapping:
  13. Proxmox VE 8.4 released!

    Is there something special needed to get live migration of mdev devices working (a problem with the mapping)? Here is my entry in the conf file: hostpci0: mapping=TeslaP4,mdev=nvidia-65,pcie=1 When I attempt to migrate I get a popup that says: Cannot migrate running VM with mapped resources...
  14. Sharing ZFS dataset via SMB on Proxmox using CT turnkey fileserver

    All my users are actually me. Just for different things. :)
  15. Sharing ZFS dataset via SMB on Proxmox using CT turnkey fileserver

    Yup. I then share those mounts out with SMB and NFS. But anything inside the container to serve the mount, like a TurnKey image or the like, would probably be fine. I just liked the look of the Cockpit + 45Drives plugin, lol. Then again, I only have about 6 users and I don't do anything fancy with SMB...
  16. Sharing ZFS dataset via SMB on Proxmox using CT turnkey fileserver

    Doesn't look like it. I used Debian, then Cockpit with the 45Drives cockpit-file-sharing plugin.
  17. Poor transfer rate across virtual machines across nodes

    To expand on what LnxBil said, you need to run some iperf3 tests between the hosts. You could have a bad NIC, a bad cable, or a bad switch/port. Once you know, host to host, that you can reach 1 Gbit/s, go from there.
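A minimal host-to-host iperf3 check along the lines suggested above (the address is a placeholder, not one from the thread):

```shell
# On the first host, start a listener:
iperf3 -s

# On the second host, test toward it (192.0.2.10 is a placeholder address):
iperf3 -c 192.0.2.10 -t 30

# Also test the reverse direction with -R; a marginal cable or switch port
# can fail in one direction only:
iperf3 -c 192.0.2.10 -t 30 -R
```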
  18. I have three nodes, they always show up pve-a, pve-c, pve-b, how to get those alphabetized?

    Your 3rd node is P"X"E, not P"V"E, so they are correctly sorted alphabetically right now. You'd need to rename the node, but that might be a bit of a hassle with SSH keys and such (off the top of my head).
  19. Mini PC

    32GB of memory might be a little light if you plan on running all those at the same time. I say might because my laptop right now only has 8GB (Win 11) and always seems to be maxed out on usage, yet it still runs fine. It is going to depend greatly on what you plan to do with the VMs. As Pifouney...
  20. Why Am i getting a "?" next to the VM in Web GUI

    "2-node cluster" Is that screen capture looking at the "Datacenter" list or at one of the nodes? Do you have a Pi or something as a qdevice for quorum? Are those with a ? on one node, and the others on the other node? When you see the ? and they are always on the same node as the other ...
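If the missing qdevice asked about here turns out to be the issue, a 2-node cluster can get its tie-breaking third vote roughly like this (a sketch; the address is a placeholder, and the Pi stands in for any always-on third machine):

```shell
# On the Pi (or any always-on third machine), install the qnetd daemon:
apt install corosync-qnetd

# On one of the Proxmox nodes, install the qdevice client and register it:
apt install corosync-qdevice
pvecm qdevice setup 192.0.2.20

# Verify quorum; "Expected votes" should now be 3:
pvecm status
```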