I currently have a couple of Intel NUCs running ESXi 7.0U3s standalone and am looking at moving over to a 3-node Proxmox cluster by adding a 3rd NUC.
This is a home lab environment and I am not looking to use shared storage for VMs nor replicated storage between nodes.
This is a rough sketch of my plan:

The disk/partition/LVM structure would be identical on the 3 nodes (I have set up the 3rd NUC with Proxmox 8.4.3 via a next-next-finish install), with the addition of a SATA disk on the larger NUC (the other 2 only support 1 NVMe drive).
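For context, this is roughly what the stock installer leaves behind on each NVMe (the names are the PVE defaults; sizes and device names will vary per machine, so treat this as illustrative only):

```
# Stock PVE 8 install (ext4/LVM defaults) on a single NVMe -- illustrative:
#
# nvme0n1
# ├─nvme0n1p1   BIOS boot partition
# ├─nvme0n1p2   EFI system partition
# └─nvme0n1p3   LVM PV -> volume group "pve"
#       ├─pve/root   ext4, mounted at /      (backs the "local" dir storage)
#       ├─pve/swap   swap
#       └─pve/data   LVM-thin pool           (backs "local-lvm" for VM/CT disks)

lsblk                 # confirm the partition layout
lvs pve               # list the logical volumes in the "pve" volume group
```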
My idea is that the SATA disk holds ISOs, container templates and potentially VM backups (before major upgrades), with the other 2 nodes accessing it through CIFS (from what I have read this is easier for a novice than NFS or Ceph).
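A minimal sketch of how that could look, assuming the big NUC's hostname is pve1, the SATA disk is mounted at /mnt/sata, and a Samba share/user named labshare/pvesmb; all of those names are hypothetical:

```
# On the big NUC (hypothetical host "pve1"): export the SATA disk via Samba.
apt install samba
# /etc/samba/smb.conf -- minimal share definition (adjust path/user to taste):
# [labshare]
#    path = /mnt/sata
#    read only = no
#    valid users = pvesmb
smbpasswd -a pvesmb           # set a Samba password for the share user
systemctl restart smbd

# On any node: register the share as a CIFS storage for ISOs,
# container templates and backups.
pvesm add cifs sata-share --server pve1 --share labshare \
    --username pvesmb --password <secret> \
    --content iso,vztmpl,backup
```

Storage registered with pvesm lands in /etc/pve/storage.cfg, which is cluster-wide, so all 3 nodes (including pve1 itself) would see the same sata-share entry.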
Each node will have a 10Gbps connection to a backbone used for cluster traffic and for 1 specific container (Emby Server) to reach 2 QNAP NAS devices; the 1Gbps interface will be for accessing services hosted by the VMs/containers (domain controller, CA, Pi-hole, Postfix, GitLab, that kind of thing).
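A hedged sketch of what /etc/network/interfaces could look like on one node to match that split; the NIC names (eno1, enp2s0), addresses and bridge numbering are assumptions for illustration:

```
# /etc/network/interfaces -- illustrative sketch for one node

auto lo
iface lo inet loopback

# 1 Gbps NIC: bridge for VM/CT service traffic (DC, CA, Pi-hole, ...)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# 10 Gbps NIC: corosync/cluster traffic plus the Emby container's NAS access
auto enp2s0
iface enp2s0 inet manual

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.11/24
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
```

The Emby container would then keep its usual net0 on vmbr0 for clients and get a second interface (net1) on vmbr1 to reach the QNAPs over the 10Gbps backbone.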
I realize that without shared or replicated storage I'm not getting the full benefit of a PVE cluster, but using a NAS or replicated ZFS feels unnecessarily complicated at present, and I don't want to wear out the NVMe disks (or the SSDs in the NAS devices) or create a single point of failure.
Having all 3 hosts manageable from a single interface will be an upgrade over the separate ESXi installs I have right now, and I am used to placing my resources manually.
Are there any glaring mistakes in my plan, or any other suggestions for how it may be configured better?