Hi Folks - I'll start by saying that this is more of a "want" than a "need" since this is my home lab... but it has become somewhat of a quest...
I've recently switched from ESXi 7 to Proxmox 7.2, partly because VMware has been purchased by Broadcom and I have doubts about the "free" version of ESXi continuing (another reason is hardware support "disappearing" from ESXi patches over time). With ESXi, I had my HP DL380 Gen9 directly connected to my Synology RS1221+ using iSCSI with redundant connections. The DL380 has the HP FlexFabric 10Gb 2-port 554FLR-SFP adapter (Emulex OneConnect 10Gb NIC) and the RS1221+ has the Synology E10G21-F2 dual-port 10GbE SFP+ adapter. Dual DAC cables connect them directly (no switch), and the MTU was set to 9000 on both ends (including both physical NICs and the bond in Proxmox).
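For reference, the bond on the Proxmox side was defined in /etc/network/interfaces along these lines. This is a sketch from memory; the NIC names and the host address are placeholders, and the bond mode is the one I describe next:
Code:
# /etc/network/interfaces (relevant part) -- interface names and the
# host address below are placeholders, not necessarily what's on my box
auto ens1f0
iface ens1f0 inet manual
        mtu 9000

auto ens1f1
iface ens1f1 inet manual
        mtu 9000

auto bond0
iface bond0 inet static
        address 172.16.1.2/24
        bond-slaves ens1f0 ens1f1
        bond-mode balance-rr
        bond-miimon 100
        mtu 9000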
Using the 10GbE ports, I set up the Synology with Adaptive Load Balancing and Proxmox with balance-rr. I had no issues setting up the iSCSI connection and creating the LVM volume. I then created a test VM (Ubuntu 20.04.4 LTS), again with no issues. As part of my testing (education?), I then backed up that VM, destroyed it, and started a restore. During the restore I started getting storage timeouts, and pings between Proxmox and the NAS were skipping every other sequence number as well, e.g.:
Code:
64 bytes from 172.16.1.1: icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 172.16.1.1: icmp_seq=3 ttl=64 time=0.131 ms
64 bytes from 172.16.1.1: icmp_seq=5 ttl=64 time=0.140 ms
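At the time I didn't think to capture the bond state, but checking it would probably show whether one leg of the bond had dropped out or was eating traffic. Something like this (interface names are placeholders for whatever the ports enumerate as):
Code:
cat /proc/net/bonding/bond0      # per-slave MII status and link failure counts
ip -s link show ens1f0           # per-port RX/TX and error counters
ip -s link show ens1f1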
I went back to using a single 10GbE NIC on both Proxmox and the NAS, and everything is fine.
I'm sure I'm missing something simple, but Google University and YouTube haven't been any help here.
Suggestions anyone?
Thanks!