ZFS ARC cache size

VGusev2007

Dear all!

I plan to use a local ZFS pool as my production local storage.

I have configured log + cache on an SSD drive. It works very fast and is stable.

What about configuring the ARC size?

I have 48 GB of RAM.

With the default settings the ARC ranges from 32 MB to 24 GB. I want to use about 35 GB for my VMs + KSM. So do I need to reduce the ARC cache, or will the memory be freed automatically when KVM needs it (e.g. to create a new VM)?
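For reference, a quick way to check the configured limits and the live ARC size on ZFS on Linux (a minimal sketch; paths as used by ZoL):

  # configured ceiling; 0 means the built-in default (about half of RAM, hence the 24 GB here)
  cat /sys/module/zfs/parameters/zfs_arc_max
  # live statistics: current size plus the effective min/max
  grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats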

Best regards, Viktor.
 
Replying to: "I suggest you limit it; don't trust the ZFS ARC RAM-release mechanism."

I have doubts about that. I would like to hear from a Proxmox maintainer about this: the Proxmox distribution does not tune the ZFS ARC size at all. Why?

I see two possible explanations:

The first: this is normal, and the Proxmox maintainers know the defaults are fine.
The second: the Proxmox maintainers simply don't care about it.

I hope it is not the second one.
 
@Nemesiz I plan on picking up servers from OVH with 128 GB of RAM, two 2 TB magnetic disks, two 300 GB SSDs, and dual Intel E5-2630v3 CPUs. The use case is a software development and test environment. Non-production.

I was going to run RAID1 on the magnetic disks, with L2ARC on one SSD and the ZIL on the other. Given this setup, what would you recommend I set my ARC size to? I'd like to maximize the amount of RAM available for VMs while still maintaining enough IOPS for the VMs to perform well (near SSD speeds).

Would you suggest an alternative layout for the SSDs and L2ARC/ZIL? Or for the magnetic disks? I hear that the ZIL only needs a maximum of 32 GB, but that the L2ARC is better the bigger it is. If I partition the SSDs for both ZIL and L2ARC, doesn't that reduce performance? And what about TRIM support?

Thanks for your guidance!
 
Dear jdrews, please note that the ARC/L2ARC are ADAPTIVE caches, so I don't think you can get SSD-level speed out of your magnetic drives! For the ZIL, a 3 GB partition is generally enough; the ZIL is only read back after a power-cut situation.

ZFS moves data from the ARC to the L2ARC when the ARC is FULL, so I think it would be good to reduce the RAM given to the VMs and give that RAM to the ARC instead.

In a generic environment something like this works well: 2-3 GB for the ZIL, 20-70 GB for the L2ARC, and as much RAM as possible for the ARC. If you use SSDs with power-loss data protection (e.g. the SSDSC2BX100G401), you can mirror the ZIL partition and use the remaining space for the OS (mdadm + LVM) and the L2ARC.
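A sketch of what the mirrored ZIL (SLOG) part looks like, assuming an existing pool named tank and two small SSD partitions (all device names are placeholders):

  # mirrored SLOG across partitions of the two SSDs
  zpool add tank log mirror /dev/sda3 /dev/sdb3
  # confirm the layout
  zpool status tank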

This was ONLY MY OPINION. Please correct me if I got anything wrong.
 

As for why Proxmox does not ship a tuned ARC size: I would say the Proxmox team is focused on certain things and doesn't spend time on other things.
As for ZFS, I started using it with Proxmox a long time ago, before Proxmox added ZFS support.
Thanks to them, using VMs on ZFS has become much easier, but they still leave some of the work to the users.
So if you decide to use ZFS, you will have to learn by experimenting and by reading other people's stories.


@jdrews: If you know your system is protected from power loss, then I suggest you turn off the ZIL and sync writes. Otherwise you need a ZIL of roughly 5 seconds * the maximum device write speed. Keep in mind that the ZIL is not written sequentially: it writes from the beginning, then stops and starts again from the beginning, which is brutal on SSD write cycles. And your SSD must be protected against data corruption on power loss.
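Turning sync writes off is a per-dataset property; a minimal sketch, assuming a dataset named tank/vmdata (only do this if you accept losing the last few seconds of writes after a crash or power loss):

  # disable synchronous writes on one dataset (placeholder name)
  zfs set sync=disabled tank/vmdata
  # verify the setting
  zfs get sync tank/vmdata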

As for the RAM ARC cache: the more you have, the higher the read speed and the lower the latency you get.
The max ARC size setting is dynamic, so you can grow or shrink it at any time. I suggest you start with a 10 GB limit; if reads turn out to be too slow, make it bigger.
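For example, on ZFS on Linux the ceiling can be changed at runtime through the module parameter (value in bytes; a sketch, and note that lowering it may not shrink the ARC instantly, as discussed further down in this thread):

  # set the ARC ceiling to 10 GiB without a reboot
  echo $((10 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
  # confirm the new effective maximum
  grep '^c_max' /proc/spl/kstat/zfs/arcstats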

ZFS uses the L2ARC dynamically and may not use all of its space (by the way, data in the L2ARC is stored compressed). You can add one partition and, if it fills up, add another partition to extend the L2ARC.
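A sketch of how that extension looks in practice (pool and partition names are placeholders):

  # add another cache partition to grow the L2ARC
  zpool add tank cache /dev/sdb5
  # see how much of each cache device is actually in use
  zpool iostat -v tank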
 
So, DEAR ALL, THANKS a lot for your answers!

I have googled a lot about this, and here is my final opinion on the question:

ZFS in Proxmox does not come with any pre-tuned settings yet. You need to know what you are doing!

The first point is the ZIL:

  1. The ZIL should live on a mirrored partition.
  2. The drive/partition used for the ZIL should have power-loss data protection (e.g. the Intel DC S3610 series).
  3. The ZIL size should be at least: your maximum drive write speed * 5 seconds (because by default ZFS flushes the ZIL to the pool every 5 seconds), but you don't need much more than this formula gives, since the extra space would never be used (this comes from this topic)!

The second point is the L2ARC:

  1. You don't need a big L2ARC if you don't have free RAM! There is no magic: ZFS has to keep a map of the L2ARC in your RAM, roughly 1 GB of RAM per 5 GB of L2ARC. (This comes from: http://serverfault.com/questions/652311/zfs-on-linux-kvm-steals-memory)

The last point is the ARC size:

  1. ZoL can free it up! Yep!
  2. But there is no magic in that process either... :( The ZFS ARC does not shrink immediately; on ZFS on Linux it is reclaimed gradually while applications allocate that memory (this comes from: http://serverfault.com/questions/581669/why-isnt-the-arc-max-setting-honoured-on-zfs-on-linux). So if your ARC size is badly configured, or you stop a VM and start it again some time later, you may get an error like "unable to allocate memory". I don't know whether echo xxxx >> /sys/module/zfs/parameters/zfs_arc_max frees the memory immediately or not, and I don't think you want to run that experiment in production. Just configure the ARC limit you need up front (a persistent config sketch follows right after this list)!
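For the record, a sketch of the persistent way to cap the ARC on Proxmox/Debian, using the 4 GiB limit from my own plan below (value in bytes; file path as used by ZFS on Linux):

  # /etc/modprobe.d/zfs.conf -- read when the zfs kernel module is loaded
  options zfs zfs_arc_max=4294967296
  # if the zfs module is loaded from the initramfs, refresh it, then reboot
  update-initramfs -u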

If you want to see my case, here it is:

I have:

2 SSD: SSDSC2BX100G401
6 HDD: SAS 10k rpm (raid 10)
64GB RAM

I will use the following scheme:


OS:
  1. Software RAID1 for the Proxmox OS (mdadm): md0 - 30 GB (on the SSD drives)
  2. Software RAID1 for swap (mdadm): md1 - 10 GB (on the SSD drives) - just mkswap -L swap /dev/md1
  3. LVM: VG for the OS: 30 GB
  4. LVM: LV for the OS: 15 GB
  5. The remaining 15 GB of free space in the VG is for snapshots (e.g. I will use it for backups or Proxmox upgrades); a command sketch of this layout follows below
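A rough command sketch of that OS layout, assuming the SSD partitions for md0 and md1 are sda1/sdb1 and sda2/sdb2 (all device names are placeholders):

  # 30 GB mirror for the OS, 10 GB mirror for swap
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap -L swap /dev/md1 && swapon /dev/md1
  # LVM on top of md0: a 15 GB root LV, leaving ~15 GB free for snapshots
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 15G -n root vg0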
ZFS ARC:

I will limit the ARC to a minimum of 32 MB and a maximum of 4 GB.

ZLog (SLOG):

  1. My disks do about 250 MB/s each, plus a 32 or 64 MB drive cache. I think it will be enough to have: 3 (mirror pairs in my RAID10) * 250 MB/s (speed) * 7 (seconds) ≈ 5.1 GiB, so I will round down to 5 GB. I don't need more, because my sync writes are really small (nowhere near 1 GB/s!).
  2. So it will be a 5 GB mirrored partition across my SSD drives.

L2ARC:

Since data in the L2ARC is stored compressed (as mentioned earlier in this thread), I think it is fine to spend about 3 GB of my RAM on its in-memory map.

So it will be about 15 GB of L2ARC (roughly 7-8 GB from each of the two SSD drives).

I expect that to hold about 30 GB of data (assuming a compression ratio of around 2.00 in general) at the cost of roughly 3 GB of my RAM. A sketch of the whole pool layout follows below.
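A rough zpool sketch of this layout, assuming the six SAS disks are sdc-sdh and the SSD partitions for log and cache are sda3/sdb3 and sda4/sdb4 (all device names are placeholders):

  # RAID10 from the six SAS disks: three mirrored pairs
  zpool create -o ashift=12 myzpool \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    mirror /dev/sdg /dev/sdh \
    log mirror /dev/sda3 /dev/sdb3 \
    cache /dev/sda4 /dev/sdb4
  # ashift=12 assumes 4K-sector drives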

ZFS:

  1. I will enable lz4 compression on my zpool (commands for both settings are sketched below).
  2. zfs set atime=off myzpool
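The corresponding commands, assuming the pool is named myzpool as above:

  zfs set compression=lz4 myzpool
  zfs set atime=off myzpool
  # later on, check how well the data actually compresses
  zfs get compressratio myzpool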


Feel free to post if you want to correct me!
 
How do you use LVM and ZFS? ZFS on top of LVM?
I don't have many slots in my server, so I split the SSDs: mdadm + LVM is only for the OS, and the remaining SSD space is for the ZIL + L2ARC - so no ZFS on top of LVM. Do you know the best block size for a zvol? And what is the default zvol block size when I create a VM through the Proxmox GUI?
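For anyone checking this on their own system: the volblocksize of a zvol is fixed at creation time, and as far as I know the Proxmox ZFS storage plugin uses 8k unless a "blocksize" option is set in storage.cfg. A minimal sketch (pool and volume names are placeholders):

  # show the block size of an existing VM disk zvol
  zfs get volblocksize myzpool/vm-100-disk-1
  # create a zvol with an explicit block size
  zfs create -V 32G -o volblocksize=8k myzpool/testvol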
 
