SSD Cache

I know you have much more experience with ZFS than I do, but let me be a bit picky (or possibly wrong):

In my understanding there may be up to three TXGs active - and "active" to me means they occupy storage and/or RAM for the data they are handling at that moment.

Cited from delphix.com/blog/zfs-fundamentals-transaction-groups :

"... There are three active transaction group states: open, quiescing, or syncing. At any given time, there may be an active txg associated with each state; each active txg may either be processing, or blocked waiting to enter the next state. There may be up to three active txgs, ..."

This looks like it could need space for up to 3 × 5 seconds' worth of writes...
Yes, my mistake, the estimate has to be tripled.
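
To put a rough number on that tripled estimate, here is a back-of-the-envelope sketch (assuming the default 5-second TXG timeout and, purely as an illustrative assumption, a 10 GbE ingest limit of roughly 1250 MB/s):

cat /sys/module/zfs/parameters/zfs_txg_timeout   # TXG timeout in seconds, default 5
echo $((3 * 5 * 1250)) MB                        # 3 active TXGs * 5 s * ~1250 MB/s = 18750 MB, i.e. roughly 19 GB of SLOG in use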
 
Hang on - the random writes into the pool are what the SLOG will be useful for, but the SLOG itself is written as a sequential log ... so I do not think I am wrong in looking at sequential-write numbers for any ZIL candidate device.
The important factor is how fast you can write a single block - that's the IOPS value. You will not have a lot of bulk writes, so sequential write performance does not matter at all, not for the ZIL. Most sequential loads are asynchronous (e.g. copying a file) and therefore bypass the ZIL.
 
Most sequential loads are asynchronous (e.g. copying a file) and therefore bypass the ZIL.

I agree with this; I was not referring to those writes.

The important factor is how fast you can write a single block - that's the IOPS value. You will not have a lot of bulk writes, so sequential write performance does not matter at all, not for the ZIL.

But the ZIL itself is sequential...
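
One way to measure what both posts are arguing about - how quickly a candidate device can commit small sync writes - is an fio run along these lines (the /dev/<ssd-candidate> path is a placeholder, and the test overwrites data on that device):

fio --name=slog-test --filename=/dev/<ssd-candidate> --rw=write --bs=4k --iodepth=1 --numjobs=1 --direct=1 --sync=1 --runtime=30 --time_based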
 
I've heard a bit on YouTube about SSD caching, and I'm wondering if it's possible in Proxmox. In my case, I have seven 600 GB HDDs set up in a RAID 6 array, and would like to set up a 1 TB SSD for cache.
Mmh, funny discussion about adding various ZFS SSD devices to a zpool, when the user actually asked about adding a cache SSD to a RAID 6 ...
I think he was searching for something like this - in this case for using just an LVM(/-thin) volume in PVE, but it also works with a filesystem on top.
This config is read-cache only, so a broken SSD doesn't matter (just like an L2ARC cache). Exchange your device names in the "<...>" sections:
Wipe the SSD first if it's not new.
parted --script /dev/<ssd> mklabel gpt mkpart primary 1MiB 20GiB mkpart primary 20GiB 100%
blockdev --getsz /dev/<raid6>
dmsetup create <wish-name> --table '0 <output_from_blockdev> cache /dev/<ssd>p1 /dev/<ssd>p2 /dev/<raid6> 512 1 writethrough default 0'
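
For reference, the fields in that dmsetup table are: start, length, target (cache), metadata device, cache-data device, origin device, block size in 512-byte sectors (512 = 256 KiB), number of feature arguments, the feature (writethrough), the policy (default) and the number of policy arguments. The resulting device shows up as /dev/mapper/<wish-name>. A sketch for checking and later removing the mapping (same placeholder name as above; since the cache runs in writethrough mode the origin stays consistent, so removal is safe):

dmsetup status <wish-name>
dmsetup remove <wish-name>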
If you are searching for a metadata special device: something similar is available when using XFS on top and VMs as qcow2 files instead of block storage.
PS: The XFS equivalent (its name is the inode allocator) has been available since the beginning in '94, long before SSDs were on the market or ZFS was developed ... :)
 
dmsetup create <wish-name> --table '0 <output_from_blockdev> cache /dev/<ssd>p1 /dev/<ssd>p2 /dev/<raid6> 512 1 writethrough default 0'

Last time I was experimenting with this (long ago), I remember getting weird messages about duplicate devices from btrfs.

If you are searching for a metadata special device: something similar is available when using XFS on top and VMs as qcow2 files instead of block storage.

Can you expand on this (the qcow2)?

PS: The XFS equivalent has been available since the beginning in '94, long before SSDs were on the market or ZFS was developed ... :)

SGI's OCTANE with IRIX was the last thing I would have called "a" workstation. ;)
 
I hope he didn't do btrfs on top of RAID 6 ... and certainly not btrfs's own raid6, which is like a Las Vegas one-armed slot machine ...
Can you expand on this (the qcow2)?
The metadata special is for the XFS filesystem where the qcow2 files reside and has nothing to do with qcow2 functionality as such, so I don't understand this question. 1 qcow2 file is 1 inode; when you make a reflink snapshot you have 2 inodes of it.
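
A minimal sketch of that on XFS (file names are hypothetical, and it assumes the filesystem was created with reflink support, which current mkfs.xfs enables by default):

cp --reflink=always vm-100-disk-0.qcow2 vm-100-disk-0.snap.qcow2
stat -c '%i %n' vm-100-disk-0.qcow2 vm-100-disk-0.snap.qcow2   # two different inode numbers sharing the same data blocks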

What about a 3-full-rack Onyx 3000 with 2 InfiniteReality graphics, also good for working in a winter garden in Siberia ... :)
 
The metadata special is for the XFS filesystem where the qcow2 files reside and has nothing to do with qcow2 functionality as such, so I don't understand this question.

I just did not understand originally what you were getting at with:

If you are searching for a metadata special device: something similar is available when using XFS on top and VMs as qcow2 files instead of block storage.
PS: The XFS equivalent (its name is the inode allocator) has been available since the beginning in '94, long before SSDs were on the market or ZFS was developed ... :)

So all clear now.
 
