[SOLVED] Question about SLOG?

killmasta93

Aug 13, 2017
Hi,
I was wondering if someone could shed some light on a few questions I have.
I currently have a test lab with 32 GB of RAM and 4 disks of 4 TB each in a ZFS RAID 10.
I wanted to start using a SLOG, so I grabbed 2 spare 240 GB SSDs (I know, these are just for testing; I'm going to get the Intel SSD).


1) I was reading that L2ARC should not be implemented on Proxmox and that the SSDs should only be used for the ZIL; not sure if that's true.
2) I was reading that Intel Optane would be the best option (Intel M.2 Optane 4801X), maybe adding this https://www.supermicro.com/products/accessories/addon/AOC-SLG3-2M2.cfm to run dual SSDs,
and I'm not sure if 32 GB should be enough for the ZIL.
https://www.amazon.com/Intel-Optane...e+4801x&qid=1566693334&s=gateway&sr=8-1-fkmr0

For testing, with the two 240 GB disks I found around the house (not sure if what I did was correct, but I did see better fsync numbers), I created two partitions:

Code:
cfdisk /dev/sde
# converted to GPT, created a 190G partition
# set the partition type to Solaris /usr & Apple ZFS
# then created a second 5GB partition with the same procedure
# repeated the same layout on /dev/sdf
# then added the cache devices to the rpool
zpool add rpool cache /dev/sde1 /dev/sdf1
# then added the log mirror
zpool add rpool log mirror /dev/sde2 /dev/sdf2

and this is the zpool status:

Code:
root@prometheus4:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda3    ONLINE       0     0     0
        sdb3    ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
    logs
      mirror-2  ONLINE       0     0     0
        sde2    ONLINE       0     0     0
        sdf2    ONLINE       0     0     0
    cache
      sde1      ONLINE       0     0     0
      sdf1      ONLINE       0     0     0
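
For comparing the fsync numbers before and after adding the SLOG, the pveperf tool that ships with Proxmox VE reports an FSYNCS/SECOND figure (assuming the pool is mounted at /rpool):

Code:
# quick CPU/disk benchmark, including an fsync-rate test on the given path
pveperf /rpool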

Thank you
 
Regarding 1) If your system is already low on RAM, adding an L2ARC will reduce performance even further, because the L2ARC's metadata has to be held in RAM, which shrinks the amount of RAM available for the ARC itself.
2) 32GiB should be plenty for the ZIL.
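
To see how much RAM the ARC is actually using before deciding, the arc_summary tool that ships with ZFS on Linux is a quick check (on older versions the command may be named arc_summary.py):

Code:
# prints current ARC size, target size, and hit rates
arc_summary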
 
Thanks for the reply, so duly noted: do not add L2ARC, only the ZIL (the log pool). What is the rule for sizing it? Do I leave free space on the ZIL device or use the whole amount? And for the links above, would that SSD be good enough, with the PCIe card, to add two SSDs?

Thank you
 
Hi,

You must add only a SLOG.
The size would be 5 * your HDD pool's write speed + 1-2 GB; ZFS commits in-flight sync data to the pool roughly every 5 seconds, so the SLOG only ever needs to hold about 5 seconds' worth of writes. Or, as an approximate rule, I use 60% of RAM.
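
As a hypothetical worked example (the 200 MB/s figure is an assumption, not a measurement of this pool):

Code:
# assumed: 4 HDDs in RAID 10 writing ~200 MB/s aggregate
# 5 s * 200 MB/s = 1 GB of in-flight sync writes
# plus 1-2 GB of headroom => a 2-3 GB SLOG partition is plenty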

And once every 3-4 months (for ZFS 0.7.x) I remove the SLOG partitions from the pool, format them with ext4, and run:

fstrim -v -a ....
Then I add the same partitions back as SLOG!
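
A rough sketch of that cycle, using the device and vdev names from the pool above (adjust to your layout; fstrim needs the filesystem mounted, so each partition is mounted temporarily):

Code:
zpool remove rpool mirror-2    # detach the SLOG mirror from the pool
mkfs.ext4 /dev/sde2            # temporary filesystem so fstrim has something to trim
mount /dev/sde2 /mnt && fstrim -v /mnt && umount /mnt
mkfs.ext4 /dev/sdf2
mount /dev/sdf2 /mnt && fstrim -v /mnt && umount /mnt
zpool add rpool log mirror /dev/sde2 /dev/sdf2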

Good luck
 
And once every 3-4 months (for ZFS 0.7.x) I remove the SLOG partitions from the pool, format them with ext4, and run:

fstrim -v -a ....
Then I add the same partitions back as SLOG!

If you run Proxmox VE 6, which uses ZFS 0.8.x, there is no need for such laborious tasks. zpools now have the autotrim property, which causes regular trims, or you can manually trigger a trim with
Code:
zpool trim <pool>

To enable autotrim run
Code:
zpool set autotrim=on <pool>

The zpool man page has detailed information about both options.
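
To watch a manual trim's progress, zpool status in 0.8 also accepts a -t flag that shows the per-device trim state:

Code:
zpool status -t rpool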
 
If you run Proxmox VE 6, which uses ZFS 0.8.x, there is no need for such laborious tasks. zpools now have the autotrim property, which causes regular trims, or you can manually trigger a trim with


IF and only IF your SSDs/HBA are supported; otherwise, in the output of

zpool status -v

you will see "(trim unsupported)" next to the device.

The drives must support both “Data Set Management TRIM supported
(limit 8 blocks)” and “Deterministic read ZEROs after TRIM” in their
ATA options.
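
One way to check those two ATA capabilities is hdparm (the device name is just an example):

Code:
# look for "Data Set Management TRIM supported" and
# "Deterministic read ZEROs after TRIM" in the identification data
hdparm -I /dev/sde | grep -i trim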




Good luck!
 
Thanks for the reply, so if I had 7 TB of ZFS storage, the SLOG should be 7 GB plus 1 or 2 GB more, about 10 GB total approx.?

And once every 3-4 months (for ZFS 0.7.x) I remove the SLOG partitions from the pool, format them with ext4, and run:
Well, I'm currently running Proxmox 5.4, but does this only apply to the SLOG? And after I format it with ext4, would I partition it back as Solaris /usr & Apple ZFS on GPT, or just format it as ext4, or with what format?


Thank you
 
Thanks for the reply. So after I removed the SLOG, this is what I did:

Code:
fdisk /dev/sdf
# typed D to delete all partitions
# typed N to create a new partition
fdisk /dev/sde
# typed D to delete all partitions
# typed N to create a new partition
mkfs.ext4 /dev/sdf1
mkfs.ext4 /dev/sde1
# then tried to add them back to the pool
zpool add rpool log mirror /dev/sde1 /dev/sdf1
# but got this error:
#   /dev/sde1 contains a filesystem of type 'ext4'
#   /dev/sdf1 contains a filesystem of type 'ext4'
# so I ran
zpool add rpool -f log mirror /dev/sde1 /dev/sdf1

and I'm not sure if this is the correct way.
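
A way to avoid needing -f here (a sketch, assuming the goal is just to clear the leftover ext4 signatures before re-adding):

Code:
# wipe the old filesystem signatures so zpool add no longer complains
wipefs -a /dev/sde1 /dev/sdf1
zpool add rpool log mirror /dev/sde1 /dev/sdf1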
 
Hi,

... and one suggestion: the SLOG is only read if the server crashes; once it is back online, the data from the SLOG is written back to the pool.
So in my opinion it is an acceptable risk, instead of a mirrored SLOG, to use only one partition (let's say sde1), and from time to time swap sde1 with sdf1.
In this case your SSDs will have a long life!
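
The swap itself could look something like this (a sketch using the device names above):

Code:
zpool remove rpool sde1          # drop the current SLOG device
zpool add rpool log /dev/sdf1    # add the other partition as the new SLOG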

Good luck
 
Hi @killmasta93,

It is OK! The best thing is to check your output:

zpool status -v

... and if you do not see any SLOG errors, then you are OK!

Good luck!

Thanks for the reply. So I removed the SLOG, then deleted all the partitions on the disks, and just ran this command:


Code:
zpool add rpool log mirror /dev/sde /dev/sdf

then my pool ended up like this

Code:
root@prometheus4:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
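
To confirm the new log mirror actually absorbs sync writes, zpool iostat can show per-vdev activity (1-second interval as an example):

Code:
zpool iostat -v rpool 1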

so my question is: does the format of the disks not matter, and does it also not matter whether I use a partition or the whole disk?

Thank you
 
Just a quick question: I've been trying to benchmark the before and after with fio, but I swear I could not get the hang of it. By any chance, do you have something that can benchmark ZFS?
Thank you again
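
For what it's worth, a minimal fio sync-write job along these lines is a common way to exercise the SLOG, since only sync writes go through the ZIL (the directory and sizes are assumptions; point it at a dataset on the pool being tested):

Code:
# 4k writes with an fsync after every write, for 60 seconds
# assumes /rpool/fio-test exists on the pool being tested
fio --name=slog-test --directory=/rpool/fio-test --rw=write --bs=4k \
    --size=1G --numjobs=1 --fsync=1 --runtime=60 --time_based --group_reporting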
 
