Search results

  1. J

    [SOLVED] Install NVIDIA Drivers

    You said your kernel is 5.11.22-4. Why are you pointing to kernel headers for 5.4.98-1? Your kernel and kernel headers need to match. Make sure all your packages and kernels are up to date. At present, Proxmox is on 5.11.22-7-pve.
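    Not from the post itself, but a minimal way to verify the match on a PVE 7 host (standard pve-kernel/pve-headers package names assumed):
      # kernel the host is actually running
      uname -r
      # install the headers that match that exact version
      apt install pve-headers-$(uname -r)
      # or bring kernel and headers current in one pass
      apt update && apt dist-upgrade && apt install pve-headers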
  2. J

    [SOLVED] Install NVIDIA Drivers

    What is the exact command you are running?
  3. J

    [SOLVED] Install NVIDIA Drivers

    That is strange. I have never received that message. You do have kernel headers installed, correct? (apt install -y -q pve-headers) Did you install any nVidia or CUDA drivers from repos before? Is this a clean install of Proxmox, or was it converted from Debian to PVE?
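    A quick way to answer both of those questions on the affected host, assuming any prior driver would have come in as Debian packages:
      # any nVidia/CUDA packages previously installed from repos
      dpkg -l | grep -iE 'nvidia|cuda'
      # confirm headers for the running kernel are actually present
      dpkg -l | grep pve-headers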
  4. J

    [SOLVED] Install NVIDIA Drivers

    The current driver (which should support up through Kernel 5.13) is 495.44. Start there. Also, you don't need "--kernel-source-path" or any other flags when installing on the host. Just run ./NVIDIA-Linux-x86_64-495.44.run. You need to install the same driver on the host as in the LXC...
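    Roughly, assuming the 495.44 .run installer; the --no-kernel-module flag for the container pass is an assumption based on common LXC setups, not something stated in the post:
      # on the Proxmox host: build the kernel module and install the userspace tools
      chmod +x NVIDIA-Linux-x86_64-495.44.run
      ./NVIDIA-Linux-x86_64-495.44.run
      # inside the LXC: same driver version, but skip the kernel module
      # (the container reuses the module already loaded on the host)
      ./NVIDIA-Linux-x86_64-495.44.run --no-kernel-module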
  5. J

    [SOLVED] Install NVIDIA Drivers

    What version of Proxmox? What kernel? What nVidia card are you using?
  6. J

    Ceph Outdated OSD's even though on 16.2.6

    The bug seems to persist in 7.0-14+1.
  7. J

    [SOLVED] vTPM for Proxmox

    No. Only if you want to passthrough a real TPM. The vTPM is entirely virtualized.
  8. J

    Ceph Outdated OSD's even though on 16.2.6

    Yes I did even though it shouldn’t be necessary. I have been up to date on 16.2.6 for months. No new Ceph packages have been released in some time.
  9. J

    Ceph Outdated OSD's even though on 16.2.6

    I have two different clusters prompting that all OSD's, monitors, managers, and MDS servers need to be updated even though they are on the newest Ceph Pacific 16.2.6.
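    A standard way to compare what the daemons are actually running with what is installed (not taken from the thread):
      # version reported by every running mon/mgr/osd/mds
      ceph versions
      # version of the packages installed on this node
      ceph -v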
  10. J

    Linux Kernel 5.13, ZFS 2.1 for Proxmox VE

    Warning to anyone thinking about updating to kernel 5.13 - Nvidia drivers are not yet compatible with a kernel past 5.11. I had to roll back to 5.11 because 5.13 broke hardware transcoding support. Otherwise it worked great. I am still running 5.13 on another cluster that does not require GPU's.
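    On a GRUB-booted node a rollback could be pinned roughly as below; the menu entry string is only an example, and nodes booted via proxmox-boot-tool (ZFS/UEFI) work differently:
      # list the boot entries that actually exist
      grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2
      # pin the 5.11 entry in /etc/default/grub, then regenerate the config
      # GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.11.22-7-pve"
      update-grub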
  11. J

    EFI and TPM removed from VM config when stopped, not when shutdown

    Have the VM's been migrated to another host and back, or the nodes all restarted? I had the same problem when I updated the packages but the VM was still running under the prior QEMU version. Restarting or shutting down the VM within the same node didn't help.
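    One way to check which build a guest is still running (VM ID 118 is just an example; info version is a standard QEMU monitor command):
      # open the QEMU human monitor for the VM
      qm monitor 118
      # at the qm> prompt, print the version of the running QEMU process
      info version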
  12. J

    [SOLVED] One GPU multiple VMs acceleration and Plex encoding

    You can easily share an nVidia card with multiple LXC containers, but VM's require a Grid license as mentioned above. I have Tesla cards in each node that are shared between Plex containers, machine learning, transcoding, etc. It all works perfectly and can even still support HA failover...
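    For reference, the usual shape of such a setup in /etc/pve/lxc/<CTID>.conf looks roughly like the lines below; the device major numbers (195/509 here) vary per host, so check ls -l /dev/nvidia* first. This is a generic sketch, not the poster's exact config:
      lxc.cgroup2.devices.allow: c 195:* rwm
      lxc.cgroup2.devices.allow: c 509:* rwm
      lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file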
  13. J

    Ubuntu 21.10 LXC Container Won't Start

    Update - looks like /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm needs to be updated to support 21.10
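    A quick check of whether a given pve-container build already knows the release (just a verification command, not from the post):
      # 21.10/impish should appear in the setup module's known-versions table
      grep -n "impish\|21.10" /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm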
  14. J

    Ubuntu 21.10 LXC Container Won't Start

    I receive the following error when starting an Ubuntu 21.10 LXC Container:
      task started by HA resource agent
      /dev/rbd8
      run_buffer: 316 Script exited with status 255
      lxc_init: 816 Failed to run lxc.hook.pre-start for container "118"
      __lxc_start: 2007 Failed to initialize container "118"
      TASK...
  15. J

    vTPM support - do we have guide to add the vTPM support?

    I always install Windows using SATA, then install the VirtIO drivers inside a stable Win installation. Reboot, add an additional dummy drive as VirtIO SCSI to test. If the new drive shows up and formats fine, power down, delete the new dummy drive, and change your OS drive from SATA to VirtIO...
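    With the qm CLI that sequence might look like this; VM ID 100, storage local-lvm, the 1 GiB size, and the volume name are placeholders:
      # add a small dummy disk on the VirtIO SCSI controller so Windows loads the driver
      qm set 100 --scsihw virtio-scsi-pci --scsi1 local-lvm:1
      # once it shows up and formats fine: power down, detach the dummy disk
      # (the volume is left as an unused disk and can be removed afterwards)
      qm set 100 --delete scsi1
      # detach the OS disk from SATA, reattach it on the SCSI controller, and boot from it
      qm set 100 --delete sata0
      qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0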
  16. J

    vTPM support - do we have guide to add the vTPM support?

    Disregard... I need more coffee. I didn't reboot the nodes or migrate the VM's first in order to get them on the newest QEMU build. The packages were updated, but the VM's were running under the old builds.
  17. J

    vTPM support - do we have guide to add the vTPM support?

    Thank you. I already did that. I shut down all my VM's (Linux and Windows). I deleted the EFI disks. I added a TPM 2.0 and then added a new EFI Disk using pre-enrolled keys. I restarted the VM's. In the UEFI boot menu I configured the correct start disk. The VM's boot, but none of them...
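    The same steps are also possible from the CLI on PVE 7.1; the VM ID and storage below are placeholders:
      # drop the old EFI disk (the volume remains as an unused disk)
      qm set 100 --delete efidisk0
      # add a v2.0 TPM state volume and a new EFI disk with the pre-enrolled Secure Boot keys
      qm set 100 --tpmstate0 local-lvm:1,version=v2.0
      qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1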
  18. J

    vTPM support - do we have guide to add the vTPM support?

    "Attempt Secure Boot" is greyed out. What am I doing wrong? I can't change "Current Secure Boot State" to "Enabled".
  19. J

    vTPM support - do we have guide to add the vTPM support?

    Is there an easy way to spoof the processor while still passing through all flags in the same manner as Host? I have Intel E5-2697 v4 chips in my cluster, but MS has deemed those antiquated and incapable of running a basic OS...