Search results

  1. pvestatd - NFS mount "storage is not online"

    Just to make sure I understand: there is a different RPC made by pvestatd that is waiting for a response and timing out after 10 seconds, causing the `pvestatd` message in syslog? OK, my script returned an instance timestamp for me to try to correlate. Run 234 + Thu 24 Nov 2022 03:33:03 AM...
  2. cpu affinity in 7.3

    thank you for helping confirm :) I'll try the CPU affinity setting "12,13,14,15" and watch htop on proxmox to see if only those cores are maxed out, and keep an eye on power consumption too
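For reference, a sketch of how that pinning might be applied from the CLI (the VM ID of 100 is hypothetical; the `affinity` option is the one introduced in PVE 7.3):

```shell
# pin the vCPU threads of VM 100 to host cores 12-15
qm set 100 --affinity 12-15

# once the VM is running, confirm the pinning from the host:
#   taskset -cp "$(cat /run/qemu-server/100.pid)"
```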
  3. pvestatd - NFS mount "storage is not online"

    thanks for the pointer. I exclusively use NFS version 4.2 - taking a hint from the script you shared I wrote an endless loop script. #!/bin/bash # Basic script to run on an endless loop on a NFS client # idea is to catch failure in rpcinfo and log timestamp. counter=1 while : do echo "Run...
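A minimal sketch of what such an endless probe loop might look like (the server address and log path are hypothetical; the actual script is truncated above):

```shell
#!/bin/bash
# Basic script to run in an endless loop on an NFS client;
# the idea is to catch a failure in rpcinfo and log the timestamp.
NFS_SERVER="192.168.1.10"    # hypothetical NFS server address
counter=1
while :
do
    echo "Run ${counter} + $(date)"
    # probe the NFS service (program "nfs", version 4) over TCP
    if ! rpcinfo -T tcp "$NFS_SERVER" nfs 4 >/dev/null 2>&1; then
        echo "FAILURE at $(date)" >> /root/nfs-probe-failures.log
    fi
    counter=$((counter + 1))
    sleep 1
done
```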
  4. pvestatd - NFS mount "storage is not online"

    I'm trying to debug a condition where it seems that using NFS v4.2 there is a brief period of connection timeout between proxmox and my NFS server. Looks like `pvestatd` monitors storage mounts and prints some useful messages - can these be made more verbose? Nov 23 15:43:16 centrix...
  5. cpu affinity in 7.3

    Not to thread hijack but related question. "CPU Affinity" under Processors setting of the VM does not seem to allow me to explicitly select which CPU cores I want mapped to a VM... Can you add this feature please? Rationale: Intel's latest Raptor Lake/Alder Lake processors have "performance"...
  6. [SOLVED] NVME Passthrough : Issues blacklisting specific device (can't unbind from host)

    oh wow, this is it! thank you so much. 02:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E16 PCIe4 NVMe Controller [1987:5016] (rev 01) Subsystem: Phison Electronics Corporation E16 PCIe4 NVMe Controller [1987:5016] Kernel driver in use: nvme Kernel...
  7. [SOLVED] NVME Passthrough : Issues blacklisting specific device (can't unbind from host)

    I'm trying to permanently pass through a Samsung 980 NVMe M.2 disk to VMs. The problem is that my host node keeps binding this device to the 'nvme' driver instead of vfio for passthrough. I have followed all of the documentation and recommended steps, yet I can't seem to unbind this specific...
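A hedged sketch of the usual modprobe approach to this (the vendor:device ID below is a placeholder, not the poster's real one):

```shell
# /etc/modprobe.d/vfio.conf
# Bind the NVMe controller to vfio-pci by its vendor:device ID.
# (1234:5678 is a placeholder; get yours from: lspci -nn | grep -i nvme)
options vfio-pci ids=1234:5678
# make sure vfio-pci loads before the nvme driver can claim the device
softdep nvme pre: vfio-pci

# then rebuild the initramfs and reboot:
#   update-initramfs -u -k all
```

As the [SOLVED] reply above shows, this only helps if the ID actually matches what `lspci -nn` reports; a caveat is that it binds every device sharing that ID.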
  8. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    This is perfect timing. I had a new Intel N6000 processor that wasn't working on kernel 5.15 - I was using https://github.com/fabianishere/pve-edge-kernel to fill in the gap to get proper hardware working.
  9. Nested Virtualization stopped working [PVE 6.1-5]

    This fixed my issue on my Intel Alder Lake proxmox host. Thanks!
  10. [PATCH] add override_for_missing_acs_capabilities.patch

    I was following the wiki (https://pve.proxmox.com/wiki/Pci_passthrough) and it wasn't working. It wasn't until I found this comment that, with 'multifunction', my AMD Ryzen 3700X finally had its IOMMU groups fixed. Thanks!
  11. [SOLVED] Manually modifying qm.conf file from 'virtio-scsi-pci' to 'virtio-scsi-single' crashes PVE networking

    Edit: this system crash was due to IOMMU isolation being broken on my AMD CPU, requiring `pcie_acs_override=downstream,multifunction` to fix it. The system was hanging because a passthrough device shared an IOMMU group with a core PCI bus controller on the board.
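For context, a sketch of where that kernel parameter goes on a standard GRUB-booted PVE install (the rest of the command line is illustrative only):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"

# apply and reboot:
#   update-grub
# afterwards, inspect the resulting IOMMU groups:
#   find /sys/kernel/iommu_groups/ -type l
```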
  12. Proxmox ZFS over ISCSI and TrueNAS

    Ok it looks like my web browser had an old javascript cache because that option wasn't there until I cleared cache and re-loaded. Reconfigured everything now with the API option and it seems to be working! Thanks for the protip.
  13. Proxmox ZFS over ISCSI and TrueNAS

    I am using TrueNAS Scale and installed https://github.com/TheGrandWazoo/freenas-proxmox on my PVE 7.2.7 install, and while it "seems" to work, you can't delete disks created via the workflow. I don't know if there's a fix, but it's certainly not truly working out of the box, at least not for...
  14. proxmox 7.0 sdn beta test

    Forgive my total noob question, but would this allow me to connect 4 nodes separated in 2 locations over WAN? If so what kind of tunnel or setup do you recommend? I have: - 1 datacenter with private 192.168.0.1 (mgmt) and a VLAN that has public routed IPs - 1 home proxmox setup behind NAT and...
  15. Improving I/O on Windows VMs via ZIL/LOG ZFS SSDs to my existing pool

    Looking for some thoughts and feedback before I make any changes to my system. At home I have proxmox with several VMs, some of them Windows for my homelab testing needs; there are often I/O delays and slowness when updating / installing programs. I have a pair of SSDs on the system already...
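As an illustration of the change being considered (pool and device names are hypothetical; note that a SLOG only accelerates synchronous writes, so it may not help async-heavy workloads):

```shell
# add the SSD pair as a mirrored log (SLOG) device to an existing pool "tank"
zpool add tank log mirror \
    /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

# confirm the log vdev shows up:
#   zpool status tank
```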
  16. [SOLVED] LXC container :: How to enable TCP BBR?

    thank you! this worked great. I guess I was missing rebooting the containers. I had wrongly assumed I had to change the settings on each LXC container, but it looks like they inherit the sysctl settings from the PVE host :)
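A sketch of the host-side settings being described (the file name is arbitrary; containers pick these up from the PVE host):

```shell
# /etc/sysctl.d/99-tcp-bbr.conf  (on the PVE host)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# apply without a full reboot:
#   sysctl --system
# verify on the host or inside a container:
#   sysctl net.ipv4.tcp_congestion_control
```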
  17. [SOLVED] LXC container :: How to enable TCP BBR?

    My proxmox server has TCP BBR enabled on the host. I was looking to ensure it's enabled in my LXC containers but haven't found a good guide for doing that... has anyone tried / done this in an LXC container within the host...
  18. Issues running wireguard inside LXC container? Host already has kernel module wireguard

    I followed this guide to install the necessary kernel modules on my PVE host; the module seems to be loaded and "modprobe wireguard" returns no errors. In the LXC config, another post recommended mapping the /lib/modules folder as a bind mount. It still doesn't see the files? Trying to find modules...
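For reference, the bind mount being described might look like this in the container config (container ID 101 is hypothetical):

```shell
# /etc/pve/lxc/101.conf
lxc.mount.entry: /lib/modules lib/modules none bind,ro,create=dir 0 0
```

Since LXC containers share the host kernel, the `wireguard` module only needs to be loaded on the host; the bind mount just lets tools inside the container find the module files.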
  19. Upgrade from 5.4 to 6.1 - Windows KVM guests slow

    Hi, I was successful in upgrading my system from 5.4 to 6.1 today without any major issues. However, after the post-upgrade reboot the Windows KVM guests seem to be slower than on the previous version. The KVM guests have the drivers installed from virtio-win-0.1.173.iso - which I believe are...