Hi!
I've got an X9DRE-LN4F Supermicro board with 128 GB of DDR3 and 2x Xeon E5-2620. Not the newest server, but it runs.
I noticed high IO delay and started investigating.
Specs:
- Proxmox 6.3
- 2x 1 TB Samsung 850 Pro in a ZFS mirror (configured during install)
- BIOS and IPMI firmware updated to the latest versions available on the Supermicro site
The server also contains an NVMe drive (on a PCIe adapter; the board has no M.2 slot).
Whatever I do, I cannot get past 180 MB/s for disk writes. When I run
dd if=/dev/urandom bs=10M count=1024 of=/randomfile
the speed starts at about 180 MB/s and gradually drops to around 40 MB/s.
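Side note: I'm aware /dev/urandom itself can be CPU-limited, so as a sanity check I also considered pre-generating the random data and timing only the write. A rough sketch, with placeholder paths:
dd if=/dev/urandom of=/tmp/testdata bs=10M count=100       # generate 1 GB of random data up front
dd if=/tmp/testdata of=/randomfile bs=10M conv=fdatasync   # time only the write; fdatasync flushes before dd reports a speed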
EXACTLY the same pattern with the NVMe drive. Weird, because that's a different interface, right?
CPU usage is low during the write.
I checked the cabling, made sure I was on the right SATA ports, etc... Everything seems to be all right (yes, connected to the SATA3 ports, not the SATA2 ports the board also has).
Linux reports the link at 6 Gb/s, so that must be SATA3... but the actual speeds don't match.
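For reference, this is roughly how I read off that link speed (the device name is a placeholder):
dmesg | grep -i 'SATA link up'                  # kernel logs the negotiated link, e.g. 'SATA link up 6.0 Gbps'
smartctl -i /dev/sda | grep -i 'SATA Version'   # shows supported vs. current link speed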
Upon further investigation, I installed Windows 10 Pro bare metal on the machine... and guess what? I do get the speeds I expect (CrystalDiskMark):
SATA SSD: 562 MB/s read | 517 MB/s write. That's what I'd expect from a Samsung 850 Pro on a SATA3 port.
NVMe: 742 MB/s read | 296 MB/s write. Not quite what I'd expect, but still a lot more than I get in Proxmox.
Okay, so maybe it's Proxmox. I installed Fedora Server on the machine, also bare metal.
Same slow speeds, again right in that 180 MB/s region.
Everywhere I look, I read that the C602 chipset should be well supported by the kernel...
I also added an add-in card providing four extra SATA ports: exactly the same speeds. I added an Intel 100 GB S3700 SSD: exactly the same speeds...
So it feels like something is capping the storage speed...
The goal for this server is to run TrueNAS in a VM with an HBA passed through via PCIe. That works, and gives me the speeds I expect from classic spinning HDDs. I wanted to add the NVMe drive as a cache for that TrueNAS VM, but same result: it's slow in TrueNAS too.
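Since the NVMe drive sits on a passive adapter, I also want to rule out a downgraded PCIe link. A sketch of the check I have in mind (the PCI address is a placeholder):
lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'   # compare maximum (LnkCap) vs. negotiated (LnkSta) link speed and width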
As a final test, I passed the NVMe drive through to a Windows VM on the Proxmox host via PCIe passthrough. Running CrystalDiskMark in that VM gives me about the same results as bare-metal Windows on this server...
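For completeness, that passthrough was set up roughly like this (VM ID and PCI address are placeholders):
qm set 100 -hostpci0 0000:02:00.0,pcie=1   # hand the NVMe controller to the Windows VM; pcie=1 requires the q35 machine type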
So that leads me to the conclusion that it must be something in the Linux kernel not allowing full speed...?
Any ideas or suggestions that might point me in the right direction are highly appreciated!