PVE/ssh listens on all interfaces by default, so it's possible to connect via the mgmt, corosync, storage, etc. IPs.
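If you want to limit that, a minimal sketch for sshd only (10.0.0.11 is a hypothetical management IP, adjust to yours):

    # /etc/ssh/sshd_config - bind sshd to the management IP only
    ListenAddress 10.0.0.11

    systemctl reload ssh

pveproxy itself keeps listening on 8006 on all interfaces unless you firewall it; newer PVE versions also have a LISTEN_IP option in /etc/default/pveproxy, check the pveproxy manpage.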
If you don't need very high performance (100 Gbps networks), I would use MTU 9000 only in a limited scope, for example a dedicated vmbr/vlan for...
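As an illustration only (NIC name hypothetical), such a dedicated jumbo-frame VLAN in /etc/network/interfaces:

    # the parent NIC must carry the larger MTU too
    auto eno1
    iface eno1 inet manual
        mtu 9000

    # dedicated storage VLAN with jumbo frames
    auto eno1.40
    iface eno1.40 inet static
        address 10.40.0.11/24
        mtu 9000

Everything on that VLAN (switch ports included) has to agree on the MTU, otherwise you get hard-to-debug stalls.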
Depends on NIC utilization, but we use the switch way for our 3-node cluster:
1x LACP (2 ports) with VLANs for management (= Ceph public), for VMs, for corosync, etc.
2x LACP (2 ports) with VLANs for corosync, for Ceph storage
Mesh is for small...
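For reference, one such LACP bond in /etc/network/interfaces, roughly (NIC names hypothetical; the switch side must be configured for 802.3ad too):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100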
I am not using such a config variant, so I can only theorize that it looks OK.
But I am using VLANs everywhere and never assign an IP to a bridge; I use a VLAN subinterface every time.
Anyway, PVE can access multiple networks without a fw/router.
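A minimal sketch of that pattern (addresses hypothetical): the bridge itself stays IP-less and the host IP sits on a VLAN subinterface of it:

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # host management IP on VLAN 10, not on the bridge itself
    auto vmbr0.10
    iface vmbr0.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1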
For NFS access you...
PMG is mainly for mailserver-to-mailserver communication. If you are trying to send mail from non-mailservers, send those mails to Exchange first.
Or
https://www.postfix.org/SMTPD_ACCESS_README.html#relay and test.
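In plain Postfix terms (PMG generates its Postfix config from its own templates, so take this only as an illustration of the relay concept from that README; the subnet is hypothetical):

    # main.cf - let listed networks relay, reject everything else
    mynetworks = 127.0.0.0/8 192.168.1.0/24
    smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination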
Disabled power saving states on the Dell? Firmware updated?
Are the SSDs enterprise or desktop models?
I have a feeling you have a disk problem, missing virtio drivers in the Windows VMs, etc., but nothing concrete; no VM config/PVE versions were posted.
Maybe...
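For reference, this is the kind of info that would help (standard commands; 100 is a hypothetical VMID):

    pveversion -v           # PVE package versions
    qm config 100           # VM config: disk bus, cache mode, virtio or not
    smartctl -a /dev/sda    # SSD model and health (smartmontools package)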
MTU is 1500 for all servers; the switches have 9216. The remotes are in the same location, on the same switch and VLAN.
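The effective path MTU can be confirmed with a do-not-fragment ping (1472 = 1500 minus 28 bytes of IP/ICMP headers; target IP hypothetical):

    ping -M do -s 1472 192.168.10.12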
Because of the internal CA, fingerprints (FP):
Configured for REMOTE1:8006:
the FP from pveproxy-ssl.pem - BUT - it automatically detects the intermediate...
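The FP can be read directly from the cert on the node:

    openssl x509 -noout -fingerprint -sha256 -in /etc/pve/local/pveproxy-ssl.pem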
Nothing spotted yet, because the remotes are in production and there is nothing non-standard even in netdata (I could be missing something, of course).
Today I ran 4 tests moving a real VM (10 + 200 GB); the migration fails for the 2nd disk around the 17-19/16-17 GB position...
Created test VM1 with 10 GB and VM2 with 10+10 GB without any OS.
Tested PDM migration:
1st try: VM1 slowed to hell
2nd try: VM2 succeeded
Tests on the remotes:
SERVER1: dd if=/dev/pve/vm-104-disk-1 bs=64k status=progress of=/dev/null - skipped due...
The same test as with the Alpha version. The same remotes, upgraded to PVE 9. Migration of the 2nd disk got stuck; I stopped it manually.
Remotes: pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.11-1-pve)
2025-09-19 11:28:53 remote: started...
File backup of the host, especially /etc/pve for the PVE-specific configuration; a large part of it is handled by the cluster, but some of it is node dependent (example: certificates).
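A minimal sketch (destination path hypothetical; /etc/pve is a pmxcfs FUSE mount, so take the backup while the node is running):

    tar czf /root/pve-config-$(hostname)-$(date +%F).tar.gz \
        /etc/pve /etc/network/interfaces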
Search the forum at least.
Running PVE 9 on DL360/380 Gen 8-10 without problems.
You can install rsyslog and attach a monitor to catch a kernel dump; maybe there will be something that isn't in the journal (see the sketch below).
It's standard Debian anyway.
Other things to check - firmware, power, etc. etc. etc.
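What I mean, roughly (standard Debian tooling):

    apt update && apt install -y rsyslog   # classic text logs under /var/log
    journalctl -b -1 -p err                # errors from the previous boot, if preserved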
Without upgrading the 2nd SAN node:
What will happen, when you shutdown the 2nd SAN node?
What will happen, when you shutdown the 1st SAN node?
I think it's mainly a SAN support thing, because upgrading the 2nd SAN node blocks something...