Hi all,
I created the two non-root ZFS pools with the simplest command, zpool create <pool> <device>, at the beginning. They seem to have no log (aka ZIL/SLOG) or cache device. I believe I did turn on lz4 compression for all three pools.
My question is: in my case, do I need a log or cache device for each pool?
zpool status returns:
Code:
root@mars:~# zpool status
  pool: hddpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:30:04 with 0 errors on Sun Nov  8 00:54:05 2020
config:

	NAME                                   STATE     READ WRITE CKSUM
	hddpool                                ONLINE       0     0     0
	  mirror-0                             ONLINE       0     0     0
	    ata-HGST_HUH721212ALE604_XXXXXXXX  ONLINE       0     0     0
	    ata-HGST_HUH721212ALE604_XXXXXXXX  ONLINE       0     0     0

errors: No known data errors

  pool: nvmepool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:10:42 with 0 errors on Sun Nov  8 00:34:45 2020
config:

	NAME           STATE     READ WRITE CKSUM
	nvmepool       ONLINE       0     0     0
	  raidz2-0     ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0
	    nvme-eui.  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:07 with 0 errors on Sun Nov  8 00:24:11 2020
config:

	NAME                                   STATE     READ WRITE CKSUM
	rpool                                  ONLINE       0     0     0
	  mirror-0                             ONLINE       0     0     0
	    ata-Samsung_SSD_860_EVO_1TB-part3  ONLINE       0     0     0
	    ata-Samsung_SSD_860_EVO_1TB-part3  ONLINE       0     0     0

errors: No known data errors
rpool was created by the proxmox install
- two mirrored 1TB Samsung 860EVO
nvmepool was created by simple zpool create
- raid-z2 of six Western Digital SN750 2TB
hddpool was created by simple zpool create
- mirrored WD DC530 12TB x 2
Other specs are:
Threadripper 3970x
256GB DDR4 3200MHz Corsair LPX
This server is for scientific research computation (not super mission critical), built as an improvement over a single Dell workstation; it lets multiple lab members run their own VMs with GPU passthrough for machine learning and other data processing.
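In case it helps anyone answering: if a SLOG or L2ARC did turn out to be worth adding, I assume the commands would look roughly like the sketch below. The device paths are hypothetical placeholders, not my actual disks.

```shell
# Sketch only -- the by-id paths below are hypothetical placeholders.
# Add a mirrored SLOG (dedicated log device) to hddpool:
zpool add hddpool log mirror /dev/disk/by-id/nvme-LOGDEV1 /dev/disk/by-id/nvme-LOGDEV2

# Add an L2ARC (read cache) device to hddpool:
zpool add hddpool cache /dev/disk/by-id/nvme-CACHEDEV1

# Log and cache vdevs can later be removed without destroying the pool,
# using the vdev/device name shown by zpool status, e.g.:
zpool remove hddpool mirror-1
```

My understanding is that a SLOG only helps synchronous writes (the ZIL otherwise lives in the pool itself), which is part of why I'm unsure whether my workload needs one.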