SSD: how big?

chalan

Hello, I have 2x 4 TB WD Gold in a ZFS RAID1 mirror and 64 GB RAM, and I want to buy an SSD for log and cache. How big should it be? Must it be the same capacity as the rpool (4 TB)?
 
No. For the log (SLOG), 8 GB is usually enough; the SSD only has to hold the sync data written within one sync interval. The read cache (L2ARC) can be any size you like; its effectiveness depends on your working set, but it should be at least the same size as the ARC.
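
As a rough back-of-the-envelope check (the 5-second transaction group interval is the ZFS on Linux default; the 200 MB/s sustained write rate is only an assumed ballpark for a two-disk mirror):

Code:
# data accumulated per sync interval:  ~200 MB/s * 5 s  = ~1 GB
# a few transaction groups in flight:                     ~2-3 GB
# => an 8 GB log partition already leaves plenty of headroom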

Some more hints:

If the ZIL device (log) is not mirrored, you can run into trouble when replacing a defective ZIL (some ZFS implementations crash during removal). The ZIL is also only needed for sync writes, and it sees only write traffic as long as you have no power outage (it is only read back after a crash).

L2ARC (cache) cannot be mirrored, and there is of course no need to, since it only holds copies of data that are already on the pool.

With Solaris it was not possible to use the same SSD for ZIL and L2ARC; with Linux it should be possible to use separate partitions on one SSD.

If you plan an SSD the same size as the rpool, why not use the SSD alone and drop the HDDs?
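
If you do end up adding one or two SSDs, attaching a mirrored log and an (unmirrored) cache from partitions could look roughly like this; the by-id device names and the partition layout are placeholders, adjust them to your SSDs:

Code:
# ~8 GB partition per SSD for the log, the rest for cache (names are examples)
zpool add rpool log mirror \
    /dev/disk/by-id/ata-SSD_A-part1 \
    /dev/disk/by-id/ata-SSD_B-part1
zpool add rpool cache \
    /dev/disk/by-id/ata-SSD_A-part2 \
    /dev/disk/by-id/ata-SSD_B-part2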
 
I don't plan to buy such a big SSD, I was just asking... So do I have to buy 2x SSDs and put them in a ZFS RAID1 for the log, for safety? But you said that L2ARC cannot be mirrored, so I'm confused :( Or should I use the SSD only for L2ARC?

And how can I find out my ARC size?

My goal is to improve the performance of my disks... fsync is poor... 2x 4 TB WD Gold.

Code:
root@pve-klenova:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME                              STATE     READ WRITE CKSUM
    rpool                             ONLINE       0     0     0
      mirror-0                        ONLINE       0     0     0
        wwn-0x5000cca25cc933fe-part2  ONLINE       0     0     0
        wwn-0x5000cca269c4bd82-part2  ONLINE       0     0     0

errors: No known data errors
root@pve-klenova:~# pveperf
CPU BOGOMIPS:      38401.52
REGEX/SECOND:      449511
HD SIZE:           2612.27 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     111.09
DNS EXT:           56.77 ms
DNS INT:           20.47 ms (elsonet.sk)

If I run it directly on the PVE host:

root@pve-klenova:~# dd if=/dev/zero of=/temp.raw bs=1024k count=8192 conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8,6 GB, 8,0 GiB) copied, 11,7641 s, 730 MB/s

But in the VM:

root@merkur:~# dd if=/dev/zero of=/temp.raw bs=1024k count=4096 conv=fdatasync && rm /temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 45,7828 s, 93,8 MB/s

root@merkur:~# dd if=/dev/zero of=/home/temp.raw bs=1024k count=4096 conv=fdatasync && rm /home/temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 150,009 s, 28,6 MB/s

/home is a 2 TB zvol
/ is a 10 GB qcow2 image

Both are on the same zpool... I'm confused :(
 
You can run with just one ZIL device; it is just a warning that if the ZIL device fails, you probably cannot replace it online.

ARC size: run "arcstat".
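
On ZFS on Linux you can also read the current ARC size and its configured maximum directly from the kernel stats; a quick sketch (paths are the standard ZoL ones):

Code:
# current size ("size"), target ("c") and maximum ("c_max"), in bytes
awk '/^size|^c |^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# 0 here means "use the built-in default" (half of the host RAM)
cat /sys/module/zfs/parameters/zfs_arc_max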

For VM disk performance: which driver did you use? For best speed, use virtio-scsi or virtio-blk (prefer virtio-scsi, as it supports SCSI UNMAP).
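
Switching an existing disk to the virtio-scsi controller can also be done from the CLI; roughly like this, where VMID 100 and the volume name are placeholders for your setup:

Code:
# use the virtio-scsi controller for VM 100
qm set 100 --scsihw virtio-scsi-pci
# attach the existing volume as a scsi disk (after removing its old ide/virtio entry);
# discard=on passes unmap/trim through to the ZFS volume
qm set 100 --scsi0 local-zfs:vm-100-disk-1,discard=on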
 
Code:
root@pve-klenova:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
21:42:44     0     0      0     0    0     0    0     0    0   2.0G   2.0G

HDD driver test inside this VM:

root@merkur:~# uname -a
Linux merkur 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@merkur:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial

scsi2, local, qcow2, cache=writethrough, size=10G

write:
root@merkur:~# dd if=/dev/zero of=/mnt/sdc/temp.raw bs=1024k count=4096 && rm /mnt/sdc/temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 39.9517 s, 108 MB/s

read:
/dev/sdc:
Timing cached reads: 3232 MB in 2.00 seconds = 1616.37 MB/sec
Timing buffered disk reads: 2228 MB in 3.00 seconds = 742.12 MB/sec

scsi3, local-zfs, zvol, cache=writethrough, size=10G

write:
root@merkur:~# dd if=/dev/zero of=/mnt/sdd/temp.raw bs=1024k count=4096 && rm /mnt/sdd/temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 95.2488 s, 45.1 MB/s

read:
/dev/sdd:
Timing cached reads: 3160 MB in 2.00 seconds = 1579.77 MB/sec
Timing buffered disk reads: 1724 MB in 3.00 seconds = 574.15 MB/sec

virtio3, qcow2, cache=writethrough, size=10G

write:
root@merkur:~# dd if=/dev/zero of=/mnt/vdb/temp.raw bs=1024k count=4096 && rm /mnt/vdb/temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 11.2733 s, 381 MB/s

read:
/dev/vdb:
Timing cached reads: 3176 MB in 2.00 seconds = 1588.04 MB/sec
Timing buffered disk reads: 2364 MB in 3.00 seconds = 787.76 MB/sec

virtio4, local-zfs, zvol, cache=writethrough, size=10G

write:
root@merkur:~# dd if=/dev/zero of=/mnt/vdc/temp.raw bs=1024k count=4096 && rm /mnt/vdc/temp.raw
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 54.6697 s, 78.6 MB/s

read:
/dev/vdc:
Timing cached reads: 3136 MB in 2.00 seconds = 1567.85 MB/sec
Timing buffered disk reads: 1620 MB in 3.00 seconds = 539.48 MB/sec

So which driver should I use, and local qcow2 or local-zfs zvol? And is the 381 MB/s write speed with qcow2 and the virtio driver really HDD speed, or some kind of cache?
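
Without a sync or direct flag, dd mostly measures how quickly writes land in the guest page cache and any host-side caches in the path, not sustained disk speed. One way to re-run the test with caching largely taken out of the picture (same paths as above):

Code:
# include a final fdatasync in the timing, so the reported speed reflects data actually flushed out
dd if=/dev/zero of=/mnt/vdb/temp.raw bs=1M count=4096 conv=fdatasync && rm /mnt/vdb/temp.raw
# or bypass the guest page cache entirely with O_DIRECT
dd if=/dev/zero of=/mnt/vdb/temp.raw bs=1M count=4096 oflag=direct && rm /mnt/vdb/temp.raw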
 