Weird performance issues with two disks in Mirror mode - beginner question

Mar 4, 2022
The machine is a reused gaming PC:

R5 1600
16GB 2133 RAM
Gigabyte AB350 Gaming 3
Proxmox installed on a 120GB SATA SSD

The disks in question are two WD Red Plus 4TB drives (yes, the CMR variant, WD40EFZX), running in Mirror mode.
https://i.imgur.com/zlCw5HF.png

The drives are assigned to an Ubuntu Server VM whose only job is to be a Samba server
https://i.imgur.com/EpNFrxf.png

Unfortunately, when writing (and only when writing) I get these weird performance spikes, where the transfer runs at gigabit speed for a few seconds and then completely stops for a few seconds. Could it be the drives' cache running out? I don't know how to test those drives under Linux directly, and I didn't think to test them under Windows. There are no transfer dips when reading.
https://i.imgur.com/DkWo0Ua.png

CrystalDiskMark of the share from a Windows client https://i.imgur.com/dY9I6vJ.png

Or is there an issue with my configuration somewhere? Or is this normal?
 
You didn't tell us how you do your mirroring.

ZFS on the host / ZFS inside the Ubuntu VM / HW RAID / pseudo-HW onboard RAID / SW RAID using mdadm on the host / mdadm inside the Ubuntu VM / ...?
Did you pass the disks through to the Ubuntu VM using PCI passthrough, "qm set" disk passthrough, or just by using virtual disks stored on the host?

You also didn't show us what caching modes your Ubuntu VM uses. Outputs of qm config <VMIDofYourUbuntuVM> and cat /etc/pve/storage.cfg would be useful.
 
I'm mirroring the only way I know of: through the "ZFS" menu under node/Disks

The pool was added to the VM via VM/Hardware/Add/Hard Disk, with the storage dropdown set to pool1 and the capacity maxed out

I'm guessing that is not the correct way to do it?

qm config 101 https://pastebin.com/U8VkZ3BJ

cat storage.cfg https://pastebin.com/SDxAwN8u

Though pretty much all of this info is in the screenshots above?
 
First of all, you are using the virtual disks as "sata" but you selected VirtIO SCSI as the controller. Your virtual disks should use the faster SCSI protocol too, not SATA.
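For what it's worth, switching an existing disk from SATA to SCSI can also be done from the CLI; a rough sketch (the volume name vm-101-disk-0 is just a guess, check qm config 101 for the real one):
Code:
# Detach the SATA disk; the volume then shows up on the VM as an "unused disk"
qm set 101 --delete sata0
# Re-attach the same volume on the SCSI bus (the VirtIO SCSI controller is already selected)
qm set 101 --scsi0 pool1:vm-101-disk-0
Afterwards double-check /etc/fstab inside the guest, since the disk's device name may change.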

Then, your virtual disk may be too big. A ZFS pool should always have 20% of free space left or it will become slow. So if you have a mirror of 4TB HDDs, your virtual disk shouldn't be bigger than 2.91 TiB (which equals 3.2 TB). If you enabled thin provisioning you can keep that virtual disk at 4TB, but then you need to make sure you don't fill it more than 80%.
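If you want to enforce that 80% limit instead of just remembering it, one option (just a sketch, assuming the pool is named pool1 like in the screenshots) is to check the fill level and set a quota on the pool's root dataset:
Code:
# Show how full the pool currently is
zpool list -o name,size,allocated,free,capacity pool1
# Cap everything stored under pool1 at roughly 80% of the mirror's ~3.6 TiB
zfs set quota=2.9T pool1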

And ZFS needs a lot of RAM. The more RAM you allow ZFS to use, the faster your pool will be. With just the 2x 4TB HDDs you probably want around 4-12 GB of RAM for the ARC alone, and I guess your ARC isn't that big if the entire server only has 16GB of RAM. And it looks like you have even more disks in there used by ZFS, so you need even more RAM.

When writing to an SMB share, ZFS basically caches all writes in RAM first, so you see great performance. Every 5 seconds ZFS will flush the cached data from RAM to the HDDs. If the HDDs can't keep up and the RAM fills faster than the HDDs can write to disk, the file transfer will stall and wait until there is free RAM again for write caching.
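That 5-second flush interval and the size of the in-RAM write cache are tunables of the ZFS kernel module, so you can at least look at them on the PVE host to see how much data is allowed to pile up between flushes (just inspecting, not a recommendation to change anything):
Code:
# Flush interval for transaction groups, in seconds (default 5)
cat /sys/module/zfs/parameters/zfs_txg_timeout
# Maximum amount of dirty (not yet flushed) write data, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max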
 
Hold on, you are saying the "1GB RAM per 1TB storage" rule is an underestimate??
 
There are several rules of thumb, depending on who you ask.

PVE: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage
As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.

FreeNAS: https://www.ixsystems.com/documentation/freenas/11.3-U5/intro.html#hardware-recommendations
A minimum of 8 GiB of RAM is required. (But this is for TrueNAS OS, not just for ZFS)
An old, somewhat overstated guideline is 1 GiB of RAM per terabyte of disk capacity.
...
For ZFS deduplication, ensure the system has at least 5 GiB of RAM per terabyte of storage to be deduplicated.

By default ZFS will use 1GB as the minimum and up to 50% of your total RAM as the maximum for its ARC size. The more RAM you allow the ARC to use, the faster your pool becomes (at least for SATA/SAS). The more L2ARC you use, the more RAM you need. If you want to use deduplication you need massive amounts of RAM to store all the deduplication tables.
The only way to find out whether your ARC is big enough is to benchmark your pools and monitor the ARC with arc_summary while the benchmark is running. If your hit rates are too low or your caches run out of space, your ARC is too small.
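As a sketch of what such a benchmark could look like on the PVE host (fio may need to be installed first, and /pool1 is assumed to be the pool's mountpoint):
Code:
# Sequential write test directly against the pool, bypassing SMB and the VM
fio --name=seqwrite --directory=/pool1 --rw=write --bs=1M --size=10G --numjobs=1 --ioengine=libaio
# In a second shell, watch the ARC size and hit rates while the test runs
arcstat 5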

And it sounds like you have 3 pools with at least 11 TB (4TB+4TB+3TB) of HDDs and a server with only 16GB of RAM. PVE needs 2GB of RAM, the ARC 9GB or even more (if you follow PVE's "2 GiB Base + 1 GiB/TiB-Storage" rule of thumb), and you want some RAM (for example 10% = 1.6GB?) always free to keep the OOM killer away, so you have a combined 3.4GB of RAM left for VMs/LXCs. But I guess you didn't change the default ARC limits, so your ARC will be somewhere between 1GB and 8GB. So the more VMs/LXCs you start, the more you force the ARC to shrink and the slower your pools will get.
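For reference, the default ARC limits can be changed on the PVE host. Just as an illustration (8 GiB is an example value, not a recommendation for a 16GB machine):
Code:
# Persist across reboots: add this line to /etc/modprobe.d/zfs.conf, then update the initramfs
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
# Or apply immediately for testing, without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max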

Run arc_summary and look for the line:
Code:
ARC size (current):                                   100.1 %    8.0 GiB
I would guess your current ARC size isn't that big.
 
https://pastebin.com/azqLtT8R

2GB. Ouch? So the only thing I can do is just buy more memory? I just want to avoid buying more hardware if there is a good chance it won't fix this specific issue I'm having.

Also I'm guessing ditching Proxmox for FreeNAS wouldn't help, since ZFS is ZFS on both

And do I seriously have to sacrifice capacity like that? I did this thing because my previous "setup" was two HDDs in a Windows Storage Space lol, I wanted to do it the "better way"
 
https://pastebin.com/azqLtT8R

2GB. Ouch? So the only thing I can do is just buy more memory? I just want to avoid buying more hardware if there is a good chance it won't fix this specific issue I'm having.
What is free -h reporting when run from the host, with all the guests running that you usually run?
Also I'm guessing ditching Proxmox for FreeNAS wouldn't help, since ZFS is ZFS on both
Yup, it won't change anything. ZFS will increase your reliability and data integrity, but you need to sacrifice a lot of storage capacity, performance and RAM for that.
And do I seriously have to sacrifice capacity like that? I did this thing because my previous "setup" was two HDDs in a Windows Storage Space lol, I wanted to do it the "better way"
Doing it better doesn't mean it has to be cheap. ;)
Same as with backups. If you think you need 4TB more space you shouldn't buy one 4TB disk, you should buy 3 of them so you have 2 backups of it.
As long as everything is working fine you might think it is expensive to buy everything 3 times. But when you lose your entire ZFS pool because of a power outage or something similar, you will be happy that you paid the price for the additional backup drives. And yes, RAID never replaces a backup. Your mirrored 4TB pool doesn't count as a backup, and you should have two additional copies of that 4TB, so you actually need 16TB of capacity to securely store 3.2 TB of data.
 
While I somewhat agree with what you are saying, I also don't like it. If I had the money to buy 16TB of storage, I would have done it already and they would be in the server.

>Doing it better doesn't mean it has to be cheap. ;)

In a home environment it has to be.

So what is the alternative? Unraid is paid, Proxmox and FreeNAS use ZFS... I don't want to go back to just running Windows with Storage Spaces lol

free -h
https://pastebin.com/HuYnA47k
 
So what is the alternative?

OpenMediaVault with mdraid or MergerFS and SnapRAID could be one.
Docker with Portainer is available, and AFAIK a KVM community plugin too.
How well all of this works, I don't know...

PS: You could also use mdraid instead of ZFS on Proxmox, but that is officially not supported, so you are somewhat on your own.
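For completeness, a classic mdadm mirror is only a few commands; a bare sketch (the device names /dev/sdb and /dev/sdc and the mountpoint are placeholders, and this wipes those disks):
Code:
# Create a RAID1 array from two empty disks and put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
# Save the array definition so it gets assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf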
 
I tried OMV, I just couldn't get over the UI

As for the rest of the message, "I know some of those words" lol
I'm not interested in Docker
 
Hi, I'm also a home user on a low budget.

My setup for a NAS via Proxmox:

A cheap ASMedia AHCI add-on card with 4 SATA ports, put in a slot which has its own IOMMU group.
The data drives that are in the NAS storage pool are connected to that card.
It is given directly to the TrueNAS VM via PCI passthrough (a rough command sketch follows below this list).
The boot disks are 2 small QEMU virtual disks on the SSDs I use for Proxmox; the host storage for those is LVM-thin, as I didn't want to do ZFS on top of ZFS.
The passed-through drives are configured as the data storage pool and perform fine; they can saturate gigabit in both directions without stalls.
8 GB of RAM is assigned to the VM, out of 32 GB on the host machine.
The OS boot disks are set to nocache with discard enabled.
VirtIO SCSI, CPU type host, VirtIO network driver.
I also have a Windows VM running, plus a few BSD/Linux VMs for testing stuff.
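In case it helps, the passthrough step mentioned above looks roughly like this on the Proxmox side (the PCI address 0000:04:00.0 and the VM ID 100 are placeholders, and IOMMU/virtualization support must already be enabled in the BIOS and on the kernel command line):
Code:
# Find the SATA controller's PCI address and check its IOMMU group
lspci | grep -i sata
find /sys/kernel/iommu_groups/ -type l | grep 04:00
# Hand the whole controller over to the VM
qm set 100 --hostpci0 0000:04:00.0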
 
So, would upgrading to 32GB of RAM fix these issues (which I'm guessing happen because ZFS is running out of cache) with my current 11TB of raw storage?
 
Well, just out of curiosity I tried copying over SSH instead of SMB: same behavior, so it isn't Samba's fault. I also tried copying to the VM itself (the VM is on an SSD; the two mirrored disks are mounted in /media) and saw no transfer speed issues. So there is definitely something wrong with either my pool or my drives. I still think they are running out of cache. I wouldn't be surprised if this happened on an SMR disk, but why CMR?

Btw, it is still happening even when the ARC is at its full 8GB.
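Next I'll probably watch the disks directly from the host while a transfer runs, something like:
Code:
# Per-disk throughput for the mirror, refreshed every 5 seconds
zpool iostat -v pool1 5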
 
