ARP and ND are filtered by default, until you enable the option "Disable ARP ND Suppression".
At layer 2, the vnet address is not used at all. The bridge/vnet forwards directly through the VXLAN tunnels between nodes in the same VXLAN. (with the...
They are not going to work well with ZFS, sorry.
You really need enterprise drives for ZFS (at a minimum for the SLOG devices).
Just look at this thread with the same disks (it was with Ceph, but it's the same with ZFS: you need drives with fast sync...
Note that you can use HA with local ZFS replicated between nodes (but you can lose the data written since the last synced snapshot).
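A minimal sketch of setting up such a ZFS replication job with `pvesr` (the VM id `100`, target node name `node2`, and 5-minute schedule are example values, adapt to your setup):

```
# create a replication job for VM 100 to node2, running every 5 minutes
pvesr create-local-job 100-0 node2 --schedule '*/5'

# check replication status
pvesr status
```

The schedule interval is also your worst-case data-loss window if the source node dies.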
you could have better resilience than a single NAS (or do you have 2 truenas nodes with a shared array ?)
About your...
Note that you can configure the token permissions to allow only audit access for monitoring, without backup/restore permissions. (I'm not sure about the default permissions when PDM generates the PBS token.)
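Something like this should work on the PBS host to create an audit-only token (the user and token names here are just examples; double-check the role/path against your setup):

```
# create a token for an existing user
proxmox-backup-manager user generate-token monitor@pbs metrics

# grant it audit-only access on the datastores
proxmox-backup-manager acl update /datastore DatastoreAudit \
    --auth-id 'monitor@pbs!metrics'
```

With only the DatastoreAudit role, the token can read status/metrics but cannot create or restore backups.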
7.0 is opt-in and you can still boot the previous kernel from GRUB. It's also only available in the no-subscription repo. So, instead of complaining, please report your NIC model. Is it a PBS VM on Proxmox VE? Or on VMware?
Move your cluster_network IP onto the 100Gb link too. (cluster_network is used for OSD replication when defined, so it's limiting your write speed.)
auto nic4_100G00
iface nic4_100G00 inet static
address 10.180.194.211/24
address...
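Then point Ceph's cluster_network at the 100Gb subnet (using the 10.180.194.0/24 subnet from the interface above as the example) and restart the OSDs:

```
# /etc/pve/ceph.conf
[global]
    cluster_network = 10.180.194.0/24

# then, on each node, restart the OSDs so they rebind:
# systemctl restart ceph-osd.target
```

OSD-to-OSD replication traffic will then go over the 100Gb link instead of the slower one.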
The firewall checkbox on a NIC enables a fwbr bridge to make iptables work, but I think it doesn't do that with nftables (because it's not needed there), and maybe it bypasses nftables rules when present.
Maybe a note should be added to the docs that...
It's a simple Debian package, so if you are running Debian on your machine, yes.
(You could install it on PBS too as an alternative.)
There is also this alternative open-source project: https://pegaprox.com/ ; it's Python-based web management...
You need 3 nodes for a cluster. With 2 nodes, if 1 node goes down, the other node will be read-only. (Or install a corosync qdevice on the PBS for the third corosync vote.)
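A rough sketch of the qdevice setup, assuming the PBS host runs Debian (replace the IP placeholder with your PBS address):

```
# on the PBS host: install the qdevice network daemon
apt install corosync-qnetd

# on the PVE nodes: install the qdevice client
apt install corosync-qdevice

# then, from one PVE node, register the qdevice
pvecm qdevice setup <pbs-ip>
```

Afterwards `pvecm status` should show 3 expected votes, so a single node can keep quorum when the other goes down.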
You can use Proxmox Datacenter Manager if you want to manage them in a central way...
so, no backup :/
You can try to recreate the VM without a disk from the GUI, with the same ID, then write the disk config into the VM config manually
(/etc/pve/nodes/<node>/qemu-server/<vmid>.conf): scsi0:.....
Not sure about the other files for the...
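For illustration, a disk line in the VM config looks roughly like this (the storage name, VM id, and size below are hypothetical, match them to the actual volume you see in your storage):

```
# /etc/pve/nodes/<node>/qemu-server/<vmid>.conf
scsi0: local-zfs:vm-100-disk-0,size=32G
```

The part before the colon is the storage id, the part after is the volume name as listed by `pvesm list <storage>`.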