Anything new/different here, 2 years and 2 major versions later? I'd also like to know whether putting my 3 PVE 9 nodes behind a reverse proxy, so I can access e.g. "proxmox.mydomain.lan" without worrying about which node might be up/down or rebooting etc., is a sensible approach.
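For the curious, here's roughly what I have in mind, as a minimal sketch assuming HAProxy in front (node IPs are placeholders). TCP passthrough avoids terminating TLS at the proxy, so each node keeps serving its own cert:

frontend pve_gui
    bind *:8006
    mode tcp
    default_backend pve_nodes

backend pve_nodes
    mode tcp
    option tcp-check
    server pve01 10.0.0.11:8006 check
    server pve02 10.0.0.12:8006 check
    server pve03 10.0.0.13:8006 check

One catch with passthrough: the nodes' certs would need "proxmox.mydomain.lan" as a SAN, otherwise the browser will still warn.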
So far so good, a few hours in
3-node cluster running PVE 9.0.11: (2x) Intel 285H + (1x) Intel 225H
Also running https://github.com/strongtz/i915-sriov-dkms for iGPU passthru, working...
Got it, thanks @patch. Yes, I updated to 6.14.8-2 just now and VFIO is sort of working again for me with the i915-sriov-dkms driver (2025.07.22). But my Win11 VM isn't getting accelerated graphics anymore for some reason, even though the VFs are there:
root@pve01:~# { dmesg | egrep 'VFs?$'; lspci | grep VGA; }...
Ah, actually it does work; I was just editing the files on the wrong node... doh!
This is all that is needed (I saved it as /etc/dnsmasq.d/dhcpnat/searchdomain.conf on all nodes...)
dhcp-option=tag:dhcpnat-10.0.0.0-24,option:domain-name,lab01...
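After saving, I restarted the zone's dnsmasq instance on each node (from memory, PVE SDN runs one templated instance per zone, so check your unit names):

systemctl restart dnsmasq@dhcpnat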
I came looking for the same. Currently, the VMs attached to my Simple dhcpsnat zone do not receive a default search domain from dnsmasq. I would like to supply something like "lab01" or "internal". Is that possible?
Seems like we should be able to hand-edit the config files at e.g...
I use Tailscale to generate valid TLS certs so I can connect with HTTPS + DNS name without warning from anywhere in my tailnet.
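For anyone wanting to replicate, a sketch of the cert step (the tailnet hostname below is a placeholder; tailscale cert fetches a Let's Encrypt cert for the machine's MagicDNS name and can write it straight to the paths PBS reads):

tailscale cert \
  --cert-file /etc/proxmox-backup/proxy.pem \
  --key-file  /etc/proxmox-backup/proxy.key \
  pbs01.tail1234.ts.net
systemctl restart proxmox-backup-proxy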
So the PBS server has a valid, CA-trusted cert, and I removed the fingerprint line from my storage config, /etc/pve/storage.cfg.
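The entry now looks roughly like this (names are hypothetical):

pbs: pbs-backups
        datastore main
        server pbs01.tail1234.ts.net
        content backup
        username root@pam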
Now, I get the 500 Can't connect to...
Follow-up to this:
I marked a throwaway VM's SCSI disk as "no backup":
But vzdump still wants to back up its TPM and EFI storage disks:
Problem is, there is no "Edit" available on those disks (grayed out):
I tried manually editing the /etc/pve/nodes/pve02/qemu-server/301.conf file...
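For context, roughly what I tried (volume names are from memory, not exact): backup=0 is accepted on the scsi0 line, but as far as I can tell there's no equivalent flag for the efidisk0/tpmstate0 lines:

scsi0: local-lvm:vm-301-disk-1,size=32G,backup=0
efidisk0: local-lvm:vm-301-disk-0,efitype=4m,size=4M
tpmstate0: local-lvm:vm-301-disk-2,size=4M,version=v2.0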
Thank you again. I'm not using Open vSwitch, and I have already set "stable" names for my NICs (en0, en1, etc.) so they match across all nodes.
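For reference, the names are pinned with systemd .link files, one per NIC on each node (the MAC below is a placeholder):

# /etc/systemd/network/10-en0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
Name=en0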
For now, I took the plunge and installed the 6.14 kernel. I'll report back if there is any effect either way.
@TErxleben I'm not sure I understand the question, but what I noticed was that the kernel-log entries for interfaces entering/exiting promiscuous mode corresponded only to the VMs that were powered off at the time the backups started (see OP). For example,
tap100i0, fwpr100p0...
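(These are the entries I'm matching on; they can be pulled with something like:

dmesg -T | grep -i 'promiscuous mode'

which catches the kernel's "entered/left promiscuous mode" messages.)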
My backups (to PBS) are scheduled to run every night at 1:30 AM. For the last few nights, I've been getting a warning from my UniFi switch saying that "multiple devices are using IP address 192.168.20.51...", which is the IP assigned to my bare-metal PVE node. Nothing special about the node; it's a Mini NUC...
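A quick way to see which MACs are claiming the address, run from another machine on the same VLAN (iputils arping; eth0 is a placeholder for that machine's interface):

arping -c 4 -I eth0 192.168.20.51

Replies coming back from more than one MAC would confirm the duplicate the switch is complaining about.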