Proxmox supports ZFS, and ZFS supports adding "cache" and "log" drives to a pool:
Code:
zpool add $ZC_NAME cache nvme-...
zpool add $ZC_NAME log nvme-...
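To double-check that the new vdevs were actually picked up (just a quick sketch; $ZC_NAME is the pool name from the commands above):
Code:
zpool status $ZC_NAME       # cache and log devices show up as their own sections
zpool iostat -v $ZC_NAME 5  # per-vdev I/O statistics, refreshed every 5 seconds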
It is completely not supported, so you're on your own.
He will most certainly not get support for bcachefs even if he pays for it.

What does that even mean for a non-subscription user, the "not supported"? Of course he cannot get support that he does not pay for.
I wasn't aware of that. I meant 'support' as in 'help'. What word would describe it better? I just used the word like it is used on the English Proxmox VE homepage.

Just saying, because to a lot of people something not supported means "it would not work", which is not the case.
If your RAID is on a ZFS zpool, you can add a disk cache to your zpool.

I've heard a bit on YouTube about SSD caching, and I'm wondering if it's possible in Proxmox. In my case, I have 7 600 GB HDDs set up in a RAID 6 array, and would like to set up a 1 TB SSD for cache.
zpool add <your-pool> cache /dev/sdX
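A small aside (my sketch, with a made-up pool name "tank" and a placeholder device id): using the stable /dev/disk/by-id path instead of /dev/sdX avoids trouble when device letters change between reboots, and an L2ARC device can be removed again later without risk to the pool.
Code:
# placeholders only - substitute your pool name and the SSD's by-id link
zpool add tank cache /dev/disk/by-id/ata-SOME_SSD_SERIAL
# a cache (L2ARC) vdev can be detached again at any time:
zpool remove tank /dev/disk/by-id/ata-SOME_SSD_SERIAL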
He will most certainly not get support for bcachefs even if he pays for it.
I wasn't aware of that. I meant 'support' as in 'help'. What word would describe it better?
I just used the word like it is used on the English Proxmox VE homepage.
Again, I meant the official support. I always look through the enterprise-class glasses.

Well, strictly speaking that is also not true (in the "help" sense), as e.g. I would be happy to help here with bcache (not bcachefs).
Yes, nomenclature again. I can see your point, yet I would argue that the no-subscription repo is more stable than testing in the Debian sense of the difference. I think they internally have another repository that is the actual "real" unstable.

Then again, maybe it's just me, right? I don't like other words used there either, e.g. the "no-subscription" repo, which does not communicate well that it's actually "testing", while the so-called testing repo should have been called unstable instead.
ZFS L2ARC is not going to be a huge help. That's my experience, and others reported the same.

If your RAID is on a ZFS zpool, you can add a disk cache to your zpool.
ZFS L2ARC is not going to be a huge help. That's my experience, and others reported the same.
The best performance gain with a mix of HDDs and SSDs is to use the SSDs as a special device and put the metadata on there, and to control with the dataset property special_small_blocks which small blocks you would also like to end up on the SSDs.
Then add another device (very fast IOPS) as a SLOG device, e.g. a 16 GB Intel Optane.
Use the same redundancy for the special devices as for the data devices, because technically this is a RAID0-like setup: if you lose the special device, everything will be gone.
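To make that concrete, here is a rough sketch of such a layout (pool name, dataset and device ids are placeholders I made up, not a recommendation for the OP's exact disks):
Code:
# mirrored special vdev for metadata (zpool may warn if the redundancy
# does not match the data vdevs)
zpool add tank special mirror /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B
# additionally store blocks up to 64K of this dataset on the special vdev
zfs set special_small_blocks=64K tank/vmdata
# optional mirrored SLOG on small, low-latency devices (e.g. Optane partitions)
zpool add tank log mirror /dev/disk/by-id/nvme-OPTANE_A-part1 /dev/disk/by-id/nvme-OPTANE_B-part1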
If I had the option of putting in just one SSD, I would do it as a cache and not think too much about it. I don't see any particular disadvantages of this solution.

ZFS L2ARC is not going to be a huge help. That's my experience, and others reported the same.
If I had the option of putting in just one SSD, I would do it as a cache and not think too much about it. I don't see any particular disadvantages of this solution.
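One way to sanity-check that before dedicating the SSD (my suggestion, not something anyone measured above): look at how the ARC is doing on RAM alone; if the hit rate is already high, an L2ARC has little left to cache.
Code:
# summary of ARC statistics, including hit/miss counters (if arc_summary is installed)
arc_summary
# raw counters straight from the kernel module
grep -E '^(hits|misses|l2_hits|l2_misses)' /proc/spl/kstat/zfs/arcstats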
My NVMe 16 GB Intel Optane costs only 30 euros and is perfectly fast:
min/avg/max/mdev = 59.5 us / 117.0 us / 226.7 us / 45.6 us

Optanes are EOL, and the cost was always such that I wondered if I might as well have made the pool SSD-only instead. Again, I would consider this only in a mirror.
It wasn't worth it. Compared to bcachefs it was almost not noticeable.

I am a bit surprised by this one. I can only imagine this would be because you have lots of random writes going on at all times.
My NVMe 16 GB Intel Optane costs only 30 euros and is perfectly fast:
Code:
min/avg/max/mdev = 59.5 us / 117.0 us / 226.7 us / 45.6 us
SLOG will improve the (sync write) performance of a disk pool significantly, and a small 30 Euro Optane is a no-brainer.

Optane was of course very fast, but it's not made anymore ...
EDIT: Wait a minute, how is that 16G helpful for the OP?
Ok, I will just break it down - and see where we disagree:

SLOG will improve the (sync write) performance of a disk pool significantly, and a small 30 Euro Optane is a no-brainer.
That depends on your workload, yet I can say that working with a harddisk pool feels much, much faster with an SLOG and special metadata device. L2ARC was not noticeable in our 100 TB pool and we decommissioned it in favor of our SLOG/special device, which wasn't available at the time we built this array.

But are there really that many random writes?
Of course it's nothing special nowadays compared to enterprise U.2 drives, but not at that price point. If you're running a harddisk pool, you may not have the bucks for fast enterprise NVMe. If you have ... go with it and use two of them for SLOG and special device (partitioned). Sizing the SLOG depends on the sequential write performance * 5 seconds (default flush time); more is never used unless you change settings.

PCIe 3.0 x2, but the max seq r/w is 900 / 145 MB/s; the IOPS look nice, but nothing special nowadays.
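To put that sizing rule into rough numbers (the throughput figure is an assumption for illustration, not a measurement of the OP's pool):
Code:
# assumed sync-write ingest of the pool: ~1 GB/s
# 1 GB/s * 5 s default flush interval = ~5 GB of SLOG ever in use
# => even a 16 GB device is more than enough for a harddisk pool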
It's the IOPS you're after ... and in ZFS, sequential is out of the picture in a fragmented disk pool.

But the product has a sequential write rate lower than a modern 7K HDD, so it would be just for the IOPS. But are there really that many random writes?
In my understanding there might be up to three TXGs active - and "active" for me means they occupy the storage and/or(?) RAM for the data they are handling in that moment.

Sizing the SLOG depends on the sequential write performance * 5 seconds (default flush time); more is never used.
It's the IOPS you're after ... and in ZFS, sequential is out of the picture in a fragmented disk pool.
I know you have much more experience with ZFS than I do, but let me be a bit picky (or wrong?):
In my understanding there might be up to three TXGs active - and "active" for me means they occupy the storage and/or(?) RAM for the data they are handling in that moment.
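For what it's worth, the knobs behind that flush behaviour can be inspected directly; a quick sketch (these are the standard OpenZFS module parameters on Linux):
Code:
# TXG flush interval in seconds (default 5)
cat /sys/module/zfs/parameters/zfs_txg_timeout
# upper limit on dirty (not yet flushed) data held in RAM
cat /sys/module/zfs/parameters/zfs_dirty_data_max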