Playing with ZFS

cglmicro

Hi guys.

I'm trying to play a bit with ZFS following a recommendation (thank you Dunuin). I have an OVH server with no HW RAID card and two 1TB NVMe disks (nvme0n1 and nvme1n1).

I've uploaded the ISO, started a fresh install from the latest PVE ISO, selected ZFS MIRROR for my disks, and was able to log in to the GUI.

At that point I saw two disks under my server:
[screenshot: two entries shown under the node]
Both disks were 1TB, but under Disks > ZFS I saw my rpool containing both disks. Is it possible that both entries represent the same rpool?

After that I joined my cluster, and since then I only see one LOCAL disk:
[screenshots of the storage view after joining the cluster]

My questions are:
- Is the only LOCAL storage I see in fact my ZFS RAID1?
- Am I safe if either drive crashes? Can my PVE boot from either drive in case of a failure?
- Do you need any other information to tell me whether my setup is optimal?

Thank you.
 
cglmicro said:
Is it possible that both disks represent the same rpool?
It's not two disks, it's two storages. "local" is a directory storage on your root filesystem and the only place where you can store files/folders like ISOs, templates, backups and so on.
Then there is "local-zfs", which is the storage for your VMs'/LXCs' virtual disks.
Both storages share the same 1TB of your pool's capacity.
Also keep in mind that a ZFS pool should only be filled up to 80%, so both storages together shouldn't exceed 800GB of data.
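
To verify this on the host, you can inspect the pool from a shell. A minimal sketch, assuming the installer's default pool name rpool and dataset layout (the commands are standard zfs/zpool tools, but any output or sizes you see will of course be your own):

# Confirm the RAID1 layout: this should show a single mirror-0 vdev
# containing both NVMe devices, both ONLINE.
zpool status rpool

# Show the total/used capacity that "local" and "local-zfs" share.
zpool list rpool
zfs list -o name,used,avail rpool

# On recent PVE versions installed with ZFS, this lists the boot
# partitions the installer set up; with a mirror there should be one
# on each disk.
proxmox-boot-tool status

The last check also speaks to the second question in the original post: the installer writes a bootloader to both mirror members, so either disk alone should be able to boot the system if the other one fails.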
 
ZFS is a copy-on-write filesystem, so it always needs some free space to operate optimally, similar to an SSD that slows down when it gets full. Past roughly 80% usage it starts getting slower and fragments faster, which is bad because a ZFS pool can't be defragmented. And if you completely fill the pool, it can become unusable, with no way to delete anything to free up space again. So you want to make sure you never completely fill it.

What I like to do is set a quota on the whole pool at 90% of its capacity and set up some monitoring, for example with Zabbix. That way you can never completely fill the pool. I then keep an eye on it with Zabbix, and as soon as usage exceeds 80% of the pool's capacity I delete stuff to bring it back below 80% again.
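
A minimal sketch of that quota approach, again assuming the pool is named rpool; the 850G figure is a placeholder, so read your real usable size first and take roughly 90% of it:

# Check the real usable capacity first (USED + AVAIL of the root dataset).
zfs list rpool

# Set a pool-wide quota at roughly 90% of usable capacity (example value).
zfs set quota=850G rpool

# Verify it took effect.
zfs get quota rpool

# A one-liner that monitoring (Zabbix, cron + mail, etc.) can poll;
# prints the pool's fill percentage, e.g. "42%".
zpool list -H -o capacity rpool

Since the quota sits on the pool's root dataset, it applies to everything beneath it, which on a default PVE install includes both rpool/ROOT (behind "local") and rpool/data (behind "local-zfs").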
 
