I have a single-node Proxmox setup that I primarily use for one Plex Linux VM. I have two OmniOS storage boxes with large striped RaidZ2 arrays for all the media storage, and I access both of them over 10Gb networking via ZFS over iSCSI. The drives in the two storage machines are very similar: 8TB WD Easystore drives in one and 10TB WD Easystore drives in the other.
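To rule out the network path itself, a quick iperf3 run between the Proxmox host and each storage box is the kind of check I can add; this is just a sketch (hostnames are placeholders, and iperf3 would need to be installed on the OmniOS boxes):

# On each storage box (placeholder hostnames r510-box / supermicro-box):
iperf3 -s

# From the Proxmox host, test each 10Gb link in both directions:
iperf3 -c r510-box -t 30
iperf3 -c r510-box -t 30 -R
iperf3 -c supermicro-box -t 30
iperf3 -c supermicro-box -t 30 -R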
The Linux VM uses local SSD storage for its / partition, and it has two large disks from the ZFS over iSCSI connections mounted at /r510 and /supermicro, respectively.
The storage mounted at /r510 is significantly slower than the storage mounted at /supermicro, and I am having trouble figuring out why.
ZFS dataset properties for the R510 pool (ringo), taken on its storage box:
root@kylefiber:/ringo# zfs get all ringo
NAME PROPERTY VALUE SOURCE
ringo type filesystem -
ringo creation Sat Feb 5 22:06 2022 -
ringo used 1.83T -
ringo available 54.5T -
ringo referenced 192K -
ringo compressratio 1.00x -
ringo mounted yes -
ringo quota none default
ringo reservation none default
ringo recordsize 128K default
ringo mountpoint /ringo default
ringo sharenfs off default
ringo checksum on default
ringo compression lz4 local
ringo atime on default
ringo devices on default
ringo exec on default
ringo setuid on default
ringo readonly off default
ringo zoned off default
ringo snapdir hidden default
ringo aclmode discard default
ringo aclinherit restricted default
ringo createtxg 1 -
ringo canmount on default
ringo xattr on default
ringo copies 1 default
ringo version 5 -
ringo utf8only off -
ringo normalization none -
ringo casesensitivity sensitive -
ringo vscan off default
ringo nbmand off default
ringo sharesmb off default
ringo refquota none default
ringo refreservation none default
ringo guid 5882232683570146752 -
ringo primarycache all default
ringo secondarycache all default
ringo usedbysnapshots 0 -
ringo usedbydataset 192K -
ringo usedbychildren 1.83T -
ringo usedbyrefreservation 0 -
ringo logbias latency default
ringo dedup off default
ringo mlslabel none default
ringo sync standard default
ringo dnodesize legacy default
ringo refcompressratio 1.00x -
ringo written 192K -
ringo logicalused 1.84T -
ringo logicalreferenced 42.5K -
ringo filesystem_limit none default
ringo snapshot_limit none default
ringo filesystem_count none default
ringo snapshot_count none default
ringo redundant_metadata all default
ringo special_small_blocks 0 default
ringo encryption off default
ringo keylocation none default
ringo keyformat none default
ringo pbkdf2iters 0 default
ZFS dataset properties for the Supermicro pool (goliath):
root@datastor1:/goliath# zfs get all goliath
NAME PROPERTY VALUE SOURCE
goliath type filesystem -
goliath creation Sun Mar 17 12:51 2019 -
goliath used 68.4T -
goliath available 31.8T -
goliath referenced 188K -
goliath compressratio 1.00x -
goliath mounted yes -
goliath quota none default
goliath reservation none default
goliath recordsize 128K default
goliath mountpoint /goliath default
goliath sharenfs off default
goliath checksum on default
goliath compression lz4 local
goliath atime on default
goliath devices on default
goliath exec on default
goliath setuid on default
goliath readonly off default
goliath zoned off default
goliath snapdir hidden default
goliath aclmode discard default
goliath aclinherit restricted default
goliath createtxg 1 -
goliath canmount on default
goliath xattr on default
goliath copies 1 default
goliath version 5 -
goliath utf8only off -
goliath normalization none -
goliath casesensitivity sensitive -
goliath vscan off default
goliath nbmand off default
goliath sharesmb off default
goliath refquota none default
goliath refreservation none default
goliath guid 43795343080512498 -
goliath primarycache all default
goliath secondarycache all default
goliath usedbysnapshots 0 -
goliath usedbydataset 188K -
goliath usedbychildren 68.4T -
goliath usedbyrefreservation 0 -
goliath logbias latency default
goliath dedup off default
goliath mlslabel none default
goliath sync standard default
goliath dnodesize legacy default
goliath refcompressratio 1.00x -
goliath written 188K -
goliath logicalused 68.7T -
goliath logicalreferenced 36.5K -
goliath filesystem_limit none default
goliath snapshot_limit none default
goliath filesystem_count none default
goliath snapshot_count none default
goliath redundant_metadata all default
zpool status for the Supermicro pool (goliath):
root@datastor1:/goliath# zpool status
pool: goliath
state: ONLINE
scan: resilvered 5.77T in 62h57m with 0 errors on Wed Feb 2 10:49:55 2022
config:
NAME                         STATE     READ WRITE CKSUM
goliath                      ONLINE       0     0     0
  raidz2-0                   ONLINE       0     0     0
    c0t5000CCA26DC076F6d0    ONLINE       0     0     0
    c0t5000CCA26DC06983d0    ONLINE       0     0     0
    c0t5000CCA267C2B59Fd0    ONLINE       0     0     0
    c0t5000CCA267C34DD8d0    ONLINE       0     0     0
    c0t5000CCA267C38EA5d0    ONLINE       0     0     0
    c0t5000CCA273DA0C9Fd0    ONLINE       0     0     0
    c0t5000CCA27EC23929d0    ONLINE       0     0     0
    c0t5000CCA273DBAFCEd0    ONLINE       0     0     0
  raidz2-1                   ONLINE       0     0     0
    c0t5000CCA273DC9BA5d0    ONLINE       0     0     0
    c0t5000CCA273DCF74Ed0    ONLINE       0     0     0
    c0t5000CCA273DD5EE8d0    ONLINE       0     0     0
    c0t5000CCA273DD8A5Dd0    ONLINE       0     0     0
    c0t5000CCA273DD9AE6d0    ONLINE       0     0     0
    c0t5000CCA273DD885Ad0    ONLINE       0     0     0
    c0t5000CCA273DDD913d0    ONLINE       0     0     0
    c0t5000CCA273DFD987d0    ONLINE       0     0     0
zpool status for the R510 pool (ringo):
root@kylefiber:/ringo# zpool status
pool: ringo
state: ONLINE
scan: none requested
config:
NAME                         STATE     READ WRITE CKSUM
ringo                        ONLINE       0     0     0
  raidz2-0                   ONLINE       0     0     0
    c0t5000CCA252C85F49d0    ONLINE       0     0     0
    c0t5000CCA252C93E77d0    ONLINE       0     0     0
    c0t5000CCA252C93E83d0    ONLINE       0     0     0
    c0t5000CCA252C861ADd0    ONLINE       0     0     0
    c0t5000CCA252C920E0d0    ONLINE       0     0     0
    c0t5000CCA252C960E3d0    ONLINE       0     0     0
  raidz2-1                   ONLINE       0     0     0
    c0t5000CCA252C8564Ed0    ONLINE       0     0     0
    c0t5000CCA252C93595d0    ONLINE       0     0     0
    c0t5000CCA252CB0D97d0    ONLINE       0     0     0
    c0t5000CCA252CB51FAd0    ONLINE       0     0     0
    c0t5000CCA252CBA6ABd0    ONLINE       0     0     0
    c0t5000CCA252CC4F15d0    ONLINE       0     0     0
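One layout difference visible in the status output: goliath is two 8-wide raidz2 vdevs, while ringo is two 6-wide raidz2 vdevs. To check whether a single slow disk is dragging a vdev down, I can watch per-disk throughput on each box while a write test runs; a minimal sketch (the 5-second interval is arbitrary):

zpool iostat -v ringo 5
zpool iostat -v goliath 5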
The ZFS over iSCSI connection settings in Proxmox are identical for both storage boxes, and the VM disk settings are identical (Discard enabled, cache left at the default of none).
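For reference, the two entries in /etc/pve/storage.cfg look roughly like this; the storage IDs, portal IPs, and target IQNs below are placeholders rather than exact copies of my config:

zfs: r510
        pool ringo
        portal 10.0.0.x
        target iqn.2010-08.org.illumos:02:placeholder-r510
        iscsiprovider comstar
        content images

zfs: supermicro
        pool goliath
        portal 10.0.0.y
        target iqn.2010-08.org.illumos:02:placeholder-supermicro
        iscsiprovider comstar
        content images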
Speed test directly on r510 (slower one):
root@kylefiber:/ringo# dd if=/dev/zero of=/ringo/dd.tst bs=32768000 count=3125
3125+0 records in
3125+0 records out
102400000000 bytes transferred in 67.263680 secs (1.42GB/sec)
root@kylefiber:/ringo# dd if=/ringo/dd.tst of=/dev/null bs=32768000 count=3125
3125+0 records in
3125+0 records out
102400000000 bytes transferred in 42.449120 secs (2.25GB/sec)
Speed test directly on Supermicro (faster one):
root@datastor1:~# dd if=/dev/zero of=/goliath/test.file bs=32768000 count=3125
3125+0 records in
3125+0 records out
102400000000 bytes transferred in 36.609472 secs (2.60GB/sec)
root@datastor1:~# dd if=/goliath/test.file of=/dev/null bs=32768000 count=3125
3125+0 records in
3125+0 records out
102400000000 bytes transferred in 13.164774 secs (7.24GB/sec)
I can re-run the dd tests from inside the VM for each disk, but when I ran them earlier, the write speed for the r510 disk was roughly 72 MB/s in the VM, while the supermicro disk was around 400 MB/s.
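Roughly the kind of test I ran inside the VM, for reference; the block size, count, and the direct-I/O flags below are just an example of how I would redo it, not the exact original commands:

# Write test against each iSCSI-backed disk, bypassing the guest page cache
dd if=/dev/zero of=/r510/dd.tst bs=1M count=20000 oflag=direct
dd if=/dev/zero of=/supermicro/dd.tst bs=1M count=20000 oflag=direct

# Read the files back without the guest cache skewing the numbers
dd if=/r510/dd.tst of=/dev/null bs=1M iflag=direct
dd if=/supermicro/dd.tst of=/dev/null bs=1M iflag=direct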
What could be contributing to the large difference in speed?
Additional note: the faster box (supermicro/goliath) only has 8GB of RAM, while the slower box (r510/ringo) has 24GB.
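Since the RAM difference mostly matters for the ZFS ARC, I can compare ARC size and hit rates on both boxes; a minimal check on OmniOS (the kstat names are from memory, so treat them as approximate):

# Current and maximum ARC size in bytes
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max

# Hit/miss counters, to see how much of a read test is served from RAM
kstat -p zfs:0:arcstats:hits
kstat -p zfs:0:arcstats:misses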
Pics of the Proxmox connection settings for both storage boxes are attached as images.
Thanks for any help.