Some advice for LVM cache

WhiteTiger

Member
May 16, 2020
Italy
I am still running my tests and need some advice.
My configuration is as follows:
  • 1 x 128GB SSD (for boot)
  • 4 x 1TB HDD (for RAIDZ2)
  • 1 x 500GB HDD (for internal backup)
  • 1 free SATA port for a second 128GB SSD or a second 500GB HDD
The machine has 16GB of RAM and will serve three users.

I would like to activate the cache for LVM on the drives in RAIDZ, but I have not understood how to activate it.
Should I do this during installation or afterwards?
Can I use a partition on the boot SSD?
If I can avoid installing a second 128GB SSD, I would use the free SATA port for a second 500GB HDD.

Thanks in advance.
 
I would like to activate the cache for LVM on the drives in RAIDZ, but I have not understood how to activate it.
I think you mean ZFS caching; you don't use LVM if you are using ZFS. Most of the time, caching is not that useful:

Read cache (L2ARC) is only useful if you have already maxed out your RAM. If it is possible to add more RAM, buy more RAM instead. RAM is faster than SSDs, so using an SSD as a read cache instead of buying more RAM would actually slow down your reads.
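
Before buying a cache device it is worth checking whether the ARC is actually exhausted; arc_summary usually ships with the ZFS userland tools on Linux:

Code:
# Overview of ARC size, target size and hit rates
arc_summary | less
# Or the raw kernel counters
cat /proc/spl/kstat/zfs/arcstats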

Write cache (SLOG) is ONLY used for sync writes. If you are running DBs that do sync writes, it might be useful, but in 99% of cases programs use async writes, and those wouldn't be cached. Also, using an SSD as a write cache will wear it out really fast, and it might die within months. And if it dies while you are not using two mirrored SSDs as cache and you encounter a power outage, you will lose all data on the HDDs you wanted to speed up.

The third option is an SSD as a "special device", so your metadata and small files don't need to be stored on the slow HDDs. But again: if the SSD dies, you lose all data on those HDDs, so special devices should be mirrored too.
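
For reference, this is roughly what attaching the mirrored variants looks like; the pool name tank and the device names are placeholders for your setup:

Code:
# Mirrored SLOG for sync writes
zpool add tank log mirror /dev/sdx /dev/sdy
# Mirrored special vdev for metadata (and optionally small files)
zpool add tank special mirror /dev/sdx /dev/sdy
# Store files up to 16K on the special vdev as well
zfs set special_small_blocks=16K tank
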
Should I do this during installation or afterwards?
Afterwards. It can be added or removed at any time using the zpool command.
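
For example (again with tank and /dev/sdx as placeholders):

Code:
# Attach an L2ARC device to a running pool
zpool add tank cache /dev/sdx
# Detach it again later; cache and log vdevs can be removed at any time
zpool remove tank /dev/sdx
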
Can I use a partition on the boot SSD?
Yes, but your boot drive will wear out much faster.
 
Read cache (L2ARC) is only useful if you have already maxed out your RAM.
Just read yesterday that L2ARC actually seems to be nonsense most of the time.
It is much larger than the ARC and therefore needs a good portion of ARC (RAM) to index it. So in fact one would give up a big amount (that's what I understood) of RAM for a comparatively slow read cache.
Wasn't aware of that but it makes sense so I thought I'll share...
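
A rough back-of-the-envelope calculation shows the scale; the ~70 bytes of ARC per cached record is a commonly cited ballpark and varies by OpenZFS version:

Code:
500GB L2ARC with 8K average record size:
  500GB / 8K    = ~61 million records
  61M x ~70B    = ~4.3GB of ARC (RAM) just to index the cache

Same L2ARC with 128K records:
  500GB / 128K  = ~3.8 million records = ~0.27GB of ARC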


And if it dies while you are not using two mirrored SSDs as cache and you encounter a power outage, you will lose all data on the HDDs you wanted to speed up.
This seems to have changed.
According to some statements I found here:
https://www.reddit.com/r/freenas/comments/cd3t4m/is_a_slog_zil_cache_nvme_ssd_worth_it/
Code:
The recommendation to mirror your SLOG came from a time when losing your SLOG meant your whole pool failed and lost all of its data. That issue has been patched out of ZFS,...

I found various other statements of the same kind in other places. Today it seems to be more a question of whether you can sustain the possible performance drop if the SLOG dies.
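
In line with that, current OpenZFS can import a pool whose only log device has died; a minimal sketch, assuming a pool named tank:

Code:
# Import the pool even though its log device is missing
zpool import -m tank
# The dead log vdev shows up in zpool status and can then be removed
zpool status tank
zpool remove tank <failed-log-device>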

I have just reconfigured my pools to use a (single) NVRAM SLOG which I got from eBay, and hence was asking myself the same question that is discussed here:
https://www.truenas.com/community/t...ror-your-slog-zil-drive-recommendation.23445/
 
I'm confused.
I was reading forum posts about FreeNAS using ZFS, where pool caching was recommended.
In Debian I already use an LVM volume group created on 4 disks in RAID5, with the cache on an SSD.
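
For context, that kind of LVM cache setup on Debian usually looks something like this; the volume group vg0, the logical volume data and the SSD /dev/sdx are placeholder names:

Code:
# Add the SSD to the existing volume group
vgextend vg0 /dev/sdx
# Create a cache pool on the SSD
lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/sdx
# Attach it to the logical volume holding the data
lvconvert --type cache --cachepool vg0/cpool vg0/data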

Furthermore, SSD mirroring was not recommended to me because with a motherboard that is not from a professional server, if the first SSD fails I still have to go into the BIOS to select the second SSD as the boot device.

My mobo can potentially support 32GB, but there are only 8GB DDR3 modules on the market, so at most I get to 16GB.
 
Reading through your post/thread I am confused as well.
Maybe I have misunderstood what you are trying to achieve.

You have / use a ZFS RAIDZ. Is that correct?
What do you want to do with the LVM? Inside the VM? Cache there again?

My mobo can potentially support 32GB, but there are only 8GB DDR3 modules on the market, so at most I get to 16GB.
I don't get the issue here. There are tons of 16GB modules according to this:
https://geizhals.de/?cat=ramddr3&xf=1454_16384~15903_DDR3
Even SO-DIMMs are available?!
 
I am now starting to configure the Proxmox server.
(and in the meantime I already have a first problem with RAIDZ2; I have now opened a new post)

The server is configured as I said initially:
  • 4 x 1TB WD Blue HDDs to create the RAIDZ2
  • 1 x 120GB Crucial BX500 SSD for the boot disk. I set it up using only 45GB (15 for root, 16 for minfree and 4 for swap); the rest of the space is unallocated and I wonder if that is a mistake.
I am undecided on how to use the other disks.
I have 2 x 500GB HDDs (WD5000BEVT and WD5000BPVT) and another 120GB Crucial BX500 SSD.
I definitely want to use one of them for backups, but I don't yet understand how to do it.

There are three people using it, so I don't need a high-performance machine.
The CPU has 4 cores. It currently has 16GB of RAM, and I will check whether it can be extended.

My goal is to replace it later with a better performing server.
Now that I'm just starting out, I prefer to work with this setup and understand how best to configure the disks right now.

Since I'm just getting started, I can still delete everything and reinstall it if necessary.

==== Update

In the end I changed the configuration:
  • 2 x 128GB SSDs in RAID1 for boot
  • 4 x 1TB HDDs in RAID10
  • 1 x 750GB HDD for other uses.
 
