Hey! I just purchased and received a used Intel 400GB S3710 SSD and set it up as a SLOG device. I see consistent 130MB/s write speeds to my ZFS pool w/ the 2 4TB drives in a mirror. I believe the Optane drives are significantly more performant than the Intel SSDs, so definitely let me know what...
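For anyone following along, attaching the SLOG looked roughly like this (the pool name "tank" and the device path below are placeholders, not my real values):

```
# Attach the S3710 as a dedicated log (SLOG) device on an existing pool.
# "tank" and the by-id path are placeholders; use your own pool/device names.
zpool add tank log /dev/disk/by-id/ata-INTEL_SSDSC2BA400G4_EXAMPLE

# Verify the log vdev shows up in the pool layout.
zpool status tank
```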
Quick update. Disabling sync for the pool seems to fix the issue. I tested a 10GB file and it wrote at 300MB/s the whole time, and I can see that the drive is being written to at 160-200MB/s in iostat. Writes in iostat continue for 15-30s after the write in file explorer stops, so as...
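For reference, this is the sync change and how I'm watching the disks (pool name is a placeholder again):

```
# Disable synchronous write semantics for the whole pool. Testing only:
# this trades the safety of in-flight data for speed.
zfs set sync=disabled tank

# Confirm the setting.
zfs get sync tank

# Watch per-device throughput and utilization once per second.
iostat -x 1
```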
Sorry, still no resolution. I've tested with NVMe drives lately and saw the exact same slow write speeds as any other spinning drive. Peaks were obviously higher, but the drives are capable of sustained writes in excess of 300MB/s, especially since I'm only testing with 10G files. And if I were...
Hi everyone,
I use JumpCloud's Linux agent to create a local user (admin) account on my Proxmox nodes that waits for MFA to be completed before letting you log in to the account. This works great: if I SSH to the Proxmox node with that account, it says "Waiting for phone authentication"...
Same behavior post-migration using 16k recordsize and blocksize.
Average 20-30MB/s, with some peaks to 50-70MB/s. This is with sync set to always. With sync set to standard, I get great transfer speeds for 25% of the transfer, then it stops, then good again for a bit, then stops, then settles back out to...
Migrating a VM to the host w/ 16k block size and w/ sync=standard instead of always, it seems to level out at around 150MB/s to 250MB/s, about the speed of one drive, maybe a bit more.
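For context, the 16k block size here is the zvol volblocksize; if I understand right, Proxmox picks it up from the ZFS storage's block size setting. Done by hand it would look something like this (names and sizes are placeholders):

```
# Create a zvol with a 16k volblocksize. It is set at creation time
# and cannot be changed afterwards. Names/sizes are placeholders.
zfs create -V 32G -o volblocksize=16k tank/vm-100-disk-0

# Check what an existing Proxmox-created zvol uses.
zfs get volblocksize tank/vm-100-disk-0
```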
Will see if SMB is fixed on the VM when it finishes migrating. So ZFS must be configured fine if...
Fair points! And honestly I have a hard time believing it's related to not having a SLOG. Even if I set sync=disabled, the writes slowly filter out of memory at the same speeds, around 20MB/s, same as doing a copy on a guest or running ATTO or FIO. I have 64GB of DDR4 memory on that node as well...
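Since async writes buffer in RAM before flushing, the relevant knob here is ZFS's dirty data limit. A quick way to inspect it on the node (values will just be whatever your module defaults are):

```
# How much dirty (not-yet-flushed) data ZFS will buffer, in bytes.
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# The percentage-of-RAM cap it is derived from.
cat /sys/module/zfs/parameters/zfs_dirty_data_max_percent
```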
SlimTom, thank you for your response! Yes, I've previously tried varying block sizes, including 16K. atime has been disabled. I'll try enabling relatime as well and let you know how that goes.
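For reference, these are the access-time properties in question. One caveat worth noting: relatime is a modifier of atime, so it only has an effect when atime is on ("tank" is a placeholder):

```
# Current state of the access-time properties.
zfs get atime,relatime tank

# relatime only applies when atime=on; with atime=off it is ignored.
zfs set atime=on tank
zfs set relatime=on tank
```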
Nuke Bloodaxe, thank you as well. Yes, I've considered adding an enterprise SSD as a SLOG, but for now...
OK, looked into this a bit and tried disabling write caching in Windows. When I did that before, it caused the transfer to drop to 0 far more often, and overall SMB transfer speeds dropped. So ATTO is showing 250MB/s at the higher IO sizes:
However, this should be closer to 400-500MB/s. iostat 1 shows...
Hey SlimTom,
Unfortunately, no change. I even added two more drives as a second mirror (so I think it's two mirrors, striped), and I see OK performance, but it falls back to 50ish MB/s and occasionally 0.
Peak is 400-500MB/s, so about what you'd expect for two of these drives if they were RAID0'd...
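For the record, the expanded layout was created like this (device paths are placeholders), and ZFS stripes writes across both mirror vdevs:

```
# Add a second mirror vdev; new writes get striped across both mirrors.
# Device paths are placeholders.
zpool add tank mirror /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4

# The layout should now show two mirror vdevs under the pool.
zpool status tank
```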
Thanks for the suggestions.
I ran FIO in the SMB VM and saw that speeds were at ~120-140MB/s, with frequent (every 2-3s) dips to 30-60MB/s. Writes should be in the neighborhood of 180-200MB/s without these dips, as we'll see below.
I'm going to test a 4k record/blocksize now, which requires...
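The fio run was along these lines (the path and size are placeholders; I used a sequential write to mimic the SMB copy):

```
# Sequential 1M writes with direct I/O to bypass the guest page cache.
# Path and size are placeholders.
fio --name=seqwrite --filename=/data/fio-testfile \
    --rw=write --bs=1M --size=10G \
    --ioengine=libaio --direct=1 --numjobs=1
```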
Hi Dunuin and LnxBill! Thanks for the info. For what it's worth, sync has been set to always for a little while now. I believe when I had it off it was for testing.
Also, I've done some previous testing to confirm that the tests I run within the OS (CrystalDiskMark or a file transfer from the...
Still smacking my head against this, but I found some more useful information:
It seems that when doing any of these write tests, the ZFS array is actually getting read while I'm doing writes. I'm going to review the SMB config, but I'm pretty sure this behavior is happening even when doing...
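An easy way to see this is per-vdev stats, which split read and write activity (1-second interval here; pool name is a placeholder):

```
# Per-vdev read/write ops and bandwidth, refreshed every second.
# The read columns should be near zero during a pure write test.
zpool iostat -v tank 1
```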
Ugh, sorry for the misinfo: sync was set to standard, so I wasn't seeing the issue, as available RAM was really high and the cache never filled for long. I still ended up testing different block sizes with sync disabled, which had no effect; I consistently got about 60MB/s writes. Reads are great...
Alright, I think I figured out the issue. I manually created a new dataset on an existing zpool, and it had a 128k recordsize by default. I mounted that, added it to the Shared PC Windows VM I have, and now I get ~100-120MB/s consistently. Testing the zvol that was created by Proxmox...
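To compare the two: datasets use recordsize (an upper bound, 128k by default), while zvols use a fixed volblocksize. Checking both looks like this (names are placeholders):

```
# Dataset: recordsize is a maximum block size, 128k by default.
zfs get recordsize tank/newdataset

# Proxmox-created zvol: volblocksize is fixed at creation.
zfs get volblocksize tank/vm-100-disk-0
```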
Update: just tried creating a new ZFS pool on the drive where I created the test LVM pool. I also turned off compression as a test. Same performance. I used CrystalDiskMark for both tests with a 16GB test file. Task Manager shows the speeds going up to 120-160MB/s as expected for a second, then...
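The compression change, for reference (pool name is a placeholder; the usual default is lz4, which is rarely the bottleneck on its own):

```
# Check and disable compression for the test pool's root dataset.
zfs get compression testpool
zfs set compression=off testpool
```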
Once again, thank you for the info, CGC!
I just tested one of the drives outside of the mirror using LVM, and I see around 220MB/s peak, probably closer to 180MB/s sustained, which is close to the manufacturer-provided specs and above what I would expect. So the drives are at least working properly...
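The single-drive baseline looked roughly like this (device path, volume names, and mountpoint are all placeholders):

```
# Build a throwaway LVM volume on the spare drive and run a direct
# sequential write against it. Device path and names are placeholders.
pvcreate /dev/sdX
vgcreate testvg /dev/sdX
lvcreate -l 100%FREE -n testlv testvg
mkfs.ext4 /dev/testvg/testlv
mount /dev/testvg/testlv /mnt/test

# 10GiB sequential write, bypassing the page cache.
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=10240 oflag=direct status=progress
```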
OK, tested the suggested changes. Left side is with sync=always, right is sync=standard.
Speeds peak a little higher, but there are more frequent hard stops where no data is transmitted, and they're still below what I used to see for write speeds on the same drives. Interestingly, making...
Hi CGC,
This is great info - thank you! Just to answer your direct questions first:
1) ZFS is only configured on the hosts. The guests use whatever the guest OS suggests as recommended defaults, usually LVM for Ubuntu.
2) I do use ashift=12. I've also tried ashift=9 for kicks but got the...
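ashift is fixed per vdev at creation time, so testing a different value meant rebuilding the pool. Roughly (names and paths are placeholders):

```
# ashift=12 means 4k sectors. It is set at pool/vdev creation and
# cannot be changed later. Names and device paths are placeholders.
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2

# Verify what the pool is actually using.
zpool get ashift tank
```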
Okay, so after looking into this some more, I don't think the cache referenced in those links is related. I have sync=always, so writes should flush immediately to disk, right? So you're definitely right that the pool isn't accepting more IO. As I mentioned above, I tested copying on Linux and...