Recent content by luckman212

  1.

    if reboot is triggered pve node goes away too fast before ha migration is finished

    Just adding a data point: I also experience this error from time to time on my 9.0.11 cluster
  2.

    PROXMOX Cluster GUI behind HA Layer 4 Loadbalancer and Reverse Proxy

    Anything new/different here, 2 years and 2 major versions later? I'd also like to know if putting my 3 PVE 9 nodes behind a reverse proxy so I can access e.g. "proxmox.mydomain.lan" without worrying about which node might be up/down or rebooting etc...
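
    For context, a minimal sketch of what such a setup could look like with nginx in front of the three nodes (only "proxmox.mydomain.lan" comes from the post; the IPs, cert paths and the choice of nginx are illustrative assumptions):

    ```
    # Hypothetical nginx reverse-proxy sketch: one stable name in front of three PVE nodes.
    upstream pve_gui {
        server 10.0.0.11:8006;          # placeholder node IPs
        server 10.0.0.12:8006 backup;   # "backup" servers are only used if the primary is down
        server 10.0.0.13:8006 backup;
    }

    server {
        listen 443 ssl;
        server_name proxmox.mydomain.lan;
        ssl_certificate     /etc/nginx/certs/proxmox.crt;   # placeholder cert paths
        ssl_certificate_key /etc/nginx/certs/proxmox.key;

        location / {
            proxy_pass https://pve_gui;
            proxy_ssl_verify off;                     # nodes present their own cluster certs
            proxy_http_version 1.1;                   # noVNC/xterm.js consoles need WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```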
  3.

    Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    So far so good, a few hours in on a 3-node cluster running PVE 9.0.11: (2x) Intel 285H + (1x) Intel 225H. Also running https://github.com/strongtz/i915-sriov-dkms for iGPU passthru, working...
  4.

    Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    Got it, thanks @patch. Yes, I updated to 6.14.8-2 just now and VFIO is sort of working again for me with the i915-sriov-dkms driver 2025.07.22. But my Win11 VM isn't getting accelerated graphics anymore for some reason, even though the VFs are there: root@pve01:~# { dmesg | egrep 'VFs?$'; lspci | grep VGA; }...
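
    Spelled out, the check quoted above is just these two commands (their output is cut off in the excerpt):

    ```
    root@pve01:~# dmesg | egrep 'VFs?$'    # kernel lines confirming the SR-IOV VFs were enabled
    root@pve01:~# lspci | grep VGA         # each VF should show up as an extra VGA-class PCI device
    ```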
  5.

    Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    Is proxmox-kernel 6.14.8-2~bpo12+1 still only available in the pvetest repo? Any idea when it will flip over to pve-no-subscription?
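
    One way to check locally which repo currently carries it (using the opt-in meta-package name from the announcement; adjust if you track a specific versioned package):

    ```
    apt update
    apt-cache policy proxmox-kernel-6.14
    # the "Version table" shows, per available version, whether it comes from
    # pvetest or pve-no-subscription
    ```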
  6.

    Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    Sorry if this is the wrong place to ask, but could you point me in the right direction for how to build and switch to the self-compiled 6.14.8 kernel?
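
    For anyone else wondering, a rough sketch of the usual route (hedged; the pve-kernel repo's README and Makefile are the authoritative reference, and the package glob below is an assumption):

    ```
    git clone git://git.proxmox.com/git/pve-kernel.git
    cd pve-kernel
    # install the build dependencies listed in debian/control, then:
    make                                  # builds the proxmox-kernel .deb packages
    apt install ./proxmox-kernel-*.deb    # install them and reboot into the new kernel
    ```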
  7.

    Supported parameters in subnets.cfg?

    @floh8 I found a very similar request already, so I added this suggestion to it: https://bugzilla.proxmox.com/show_bug.cgi?id=6014#c1
  8.

    Supported parameters in subnets.cfg?

    Ah, actually it does work; I was just editing the files on the wrong node... doh! This is all that is needed (I saved it as /etc/dnsmasq.d/dhcpnat/searchdomain.conf on all nodes...): dhcp-option=tag:dhcpnat-10.0.0.0-24,option:domain-name,lab01...
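
    In other words, the whole file is a single dnsmasq option line (reconstructed from the fragment above; the excerpt cuts off after "lab01", so treat the domain value as illustrative):

    ```
    # /etc/dnsmasq.d/dhcpnat/searchdomain.conf
    # tag = <zone>-<subnet>, value = the search domain handed out to DHCP clients
    dhcp-option=tag:dhcpnat-10.0.0.0-24,option:domain-name,lab01
    ```

    Presumably the zone's dnsmasq instance then has to be restarted (e.g. systemctl restart dnsmasq@dhcpnat) for the option to take effect.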
  9.

    Supported parameters in subnets.cfg?

    I came looking for the same. Currently, the VMs attached to my Simple dhcpsnat zone do not receive a default search domain from dnsmasq. I would like to supply something like "lab01" or "internal", etc. Is that possible? It seems like we should be able to hand-edit the config files at e.g...
  10.

    Exclude entire VM? Or have to mark each disk?

    Okay, thanks. That works! Still, it would be cool if there were a flag we could set in the VM's conf file to mark it as excluded.
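
    As a stopgap at the job level, vzdump can already skip whole guests by VMID when backing up everything (the VMID and storage name below are placeholders):

    ```
    # back up all guests except the listed VMIDs
    vzdump --all --exclude 301 --storage pbs01
    ```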
  11.

    PBS: error fetching datastores - fingerprint 'xx' not verified, abort! (500)

    I use Tailscale to generate valid TLS certs so I can connect via HTTPS + DNS name without warnings from anywhere in my tailnet. So the PBS server has a valid CA-trusted cert, and thus I removed the fingerprint from my storage config /etc/pve/storage.cfg. Now, I get the 500 Can't connect to...
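
    For reference, the storage entry in question would then look roughly like this, i.e. the usual PBS definition minus the fingerprint line (storage ID, hostname and datastore are placeholders; only the file path comes from the post):

    ```
    pbs: pbs-tailscale
            datastore store1
            server pbs.my-tailnet.ts.net
            content backup
            username root@pam
    ```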
  12.

    Exclude entire VM? Or have to mark each disk?

    Follow-up to this: I marked a throwaway VM's SCSI disk as "no backup", but vzdump still wants to back up its TPM and EFI storage disks. The problem is, there is no "Edit" available on those disks (grayed out). I tried manually editing the /etc/pve/nodes/pve02/qemu-server/301.conf file...
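
    For context, a sketch of the relevant lines in that file (volume names and sizes are placeholders):

    ```
    # /etc/pve/nodes/pve02/qemu-server/301.conf -- illustrative excerpt
    # backup=0 is what the GUI "No backup" checkbox writes for a regular disk:
    scsi0: local-lvm:vm-301-disk-0,size=32G,backup=0
    # the EFI and TPM state volumes get no such checkbox in the GUI:
    efidisk0: local-lvm:vm-301-disk-1,efitype=4m,size=4M
    tpmstate0: local-lvm:vm-301-disk-2,size=4M,version=v2.0
    ```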
  13.

    vzdump causing interface to enter promiscuous mode, triggering duplicate IP address warnings from Ethernet switch

    Thank you again. I'm not using openvswitch, and I have already set "stable" names for my NICs (en0, en1, etc.) so they match across all nodes. For now, I took the plunge and installed the 6.14 kernel. Will keep you posted if there is any effect either way.
  14.

    vzdump causing interface to enter promiscuous mode, triggering duplicate IP address warnings from Ethernet switch

    @TErxleben not sure I understand the question, but what I noticed was that the kernel-log entries for interfaces entering/exiting promiscuous mode only corresponded to the VMs that were powered off at the time the backups started (see OP). For example, tap100i0, fwpr100p0...