Hi,
I'm trying to set up a Proxmox node with a ZFS pool and I'm getting very bad performance. I'm a bit new to ZFS and I think I need some help!
Symptom: when I try to copy a few GB to the server, it starts at full speed, then pauses for 5-10 seconds, then starts again for 1-2 s, then pauses again, and so on.
The average speed is very low. I mean very, very low.
This is a raidz1 setup: 4 x 8TB SATA drives, a 128GB SSD for ZIL & cache, and another 128GB SSD for OS & swap.
The node is a Supermicro 2U server, 2 x Xeon E5-2670 (2.60 GHz, 32 threads), 96GB RAM, Adaptec ASR-71605 in HBA mode.
Proxmox 6.2-4, Linux 5.4.34
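For reference, the pool layout corresponds to something like this (a sketch reconstructed from the zpool iostat output further down, not necessarily the exact command I used):
Code:
# sketch only: raidz1 over the four SATA disks, with log and cache on two partitions of the 128GB SSD
zpool create tank0 raidz1 /dev/sdc /dev/sde /dev/sdf /dev/sdg \
    log /dev/sdb1 \
    cache /dev/sdb2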
I was thinking of a source issue, but I tried different clients and got the same behaviour whether doing an rsync from a Linux box, a simple copy with the Finder of an ordinary MacBook Pro, or a more serious test from my dev desktop.
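For what it's worth, the rsync run from the Linux box was roughly along these lines (source directory and mount point are placeholders, not the real paths):
Code:
# copy a few GB of test data onto the SMB share mounted on the Linux box
rsync -avh --progress /data/testset/ /mnt/nas-share/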
I have an openMediaVault VM running on top of Proxmox (2 cores, 16GB RAM) which handles the SMB share, but I don't think this is related to the issue.
Before installing OMV I tried setting up a Samba share directly on the Proxmox node (without any VM), and the performance was no better.
I was also thinking of a network issue (switch, cables?), so I took the server off the main network. It now runs isolated on its own private gigabit switch, shared only with one workstation which I use to test copying to/from the server. Same sluggishness, awful performance...
I'd love to hear your advice!
Thanks,
Fab
I tried zpool iostat -v 1.
Sometimes I get quite decent values, about 200MB/s for the RAIDZ1 pool and about 50MB/s per disk:
Code:
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       13.9T  15.2T      0    677      0   207M
  raidz1    13.9T  15.2T      0    597      0   197M
    sdc         -      -      0    152      0  49.4M
    sde         -      -      0    138      0  49.4M
    sdf         -      -      0    151      0  49.4M
    sdg         -      -      0    154      0  49.2M
logs            -      -      -      -      -      -
  sdb1      1.46G  6.04G      0     79      0  9.98M
cache           -      -      -      -      -      -
  sdb2      32.7G  78.5G      0     80      0  10.1M
Then it just drops to zero:
Code:
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       13.9T  15.2T      0     74      0  3.22M
  raidz1    13.9T  15.2T      0     50      0   232K
    sdc         -      -      0     14      0  67.9K
    sde         -      -      0     12      0  59.9K
    sdf         -      -      0     14      0  67.9K
    sdg         -      -      0      5      0  24.0K
logs            -      -      -      -      -      -
  sdb1      1.82M  7.50G      0     23      0  3.00M
cache           -      -      -      -      -      -
  sdb2      32.9G  78.3G      0     23      0  3.00M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       13.9T  15.2T      0     23      0  3.00M
  raidz1    13.9T  15.2T      0      0      0      0
    sdc         -      -      0      0      0      0
    sde         -      -      0      0      0      0
    sdf         -      -      0      0      0      0
    sdg         -      -      0      0      0      0
logs            -      -      -      -      -      -
  sdb1      1.82M  7.50G      0     23      0  3.00M
cache           -      -      -      -      -      -
  sdb2      32.9G  78.3G      0     22      0  2.87M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       13.9T  15.2T      0     25      0  3.24M
  raidz1    13.9T  15.2T      0      0      0      0
    sdc         -      -      0      0      0      0
    sde         -      -      0      0      0      0
    sdf         -      -      0      0      0      0
    sdg         -      -      0      0      0      0
logs            -      -      -      -      -      -
  sdb1      1.82M  7.50G      0     25      0  3.24M
cache           -      -      -      -      -      -
  sdb2      32.9G  78.3G      0     25      0  3.24M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       13.9T  15.2T      0    396      0  49.5M
  raidz1    13.9T  15.2T      0      0      0      0
    sdc         -      -      0      0      0      0
    sde         -      -      0      0      0      0
    sdf         -      -      0      0      0      0
    sdg         -      -      0      0      0      0
logs            -      -      -      -      -      -
  sdb1      1.82M  7.50G      0    396      0  49.5M
cache           -      -      -      -      -      -
  sdb2      32.9G  78.3G      0    271      0  33.7M
----------  -----  -----  -----  -----  -----  -----