How to manage storage?

mikk00
Mar 20, 2023
Hi, I'm new here and wanted some ideas and advice on how to configure Proxmox as efficiently as possible.
I have three 8 TB HDDs for SMB and Nextcloud (I plan to use RAID-Z1, since I also have a backup of the data elsewhere), plus one M.2 SSD that will act as a cache.
As for installing Proxmox + VM storage + LXC: should I use two SSDs in ZFS RAID1?
What are your thoughts on this?

The SSDs and HDDs are all NAS-rated drives, but the RAM is not ECC.

Any ideas?
 
Everything sounds pretty reasonable, to be honest.

I have three 8 TB HDDs for SMB and Nextcloud (I plan to use RAID-Z1, since I also have a backup of the data elsewhere), plus one M.2 SSD that will act as a cache.
Are you really sure you need that NVMe for caching, though? I assume you mean an L2ARC and/or a SLOG? In most cases you don't really need an L2ARC. However, a SLOG of, let's say, just 8 GB might speed things up on your SMB / Nextcloud storage.
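Keep in mind that a SLOG only ever accelerates synchronous writes - asynchronous writes (which is most of what SMB does) never touch it. A quick sketch of how to check and control that per dataset, with "tank/share" standing in for your actual pool/dataset name:

Code:
# A SLOG only benefits sync writes (NFS, databases, some VM workloads).
# "tank/share" is just a placeholder for your dataset.
zfs get sync tank/share            # standard / always / disabled
zfs set sync=standard tank/share   # default: honour the application's sync requests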

As for installing Proxmox + VM storage + LXC: should I use two SSDs in ZFS RAID1?
I personally would do that, yes. If your two SSDs happen to be NVMe drives, you could mirror them (put them into RAID1) with ZFS; then you could put a zvol of a couple of gigabytes on that mirror and use it as a SLOG for your other pool. That way your SLOG is redundant as well (but still not protected from power outages). If you really need an L2ARC, you can also put that on a zvol on your SSD mirror.

EDIT: See my post below - vdev on zvol is probably a bad idea. My mistake!

The SSDs and HDDs are all NAS-rated drives, but the RAM is not ECC.
I would highly suggest using ECC RAM, but if that's not possible, do a proper prolonged memtest (it's on the PVE ISO already for your convenience ;)) to really make sure your RAM is fine. ZFS can protect you against bit rot, drive failures and all that jazz, but not against faulty RAM.

I already posted elsewhere about my experience with faulty RAM; to sum it up: I've got a server at home with 4x 8 TB drives in RAID-Z1. I felt like adding some more RAM, and so I did; after a day or two, I started getting random kernel panics related to ZFS and its ARC. I thought it was related to a recent ZFS version upgrade I had done and even opened an issue - a ZFS maintainer then asked whether my hardware was fine. I ran a memtest and found 100+ errors within the first couple of minutes. I removed the faulty RAM and started a zpool scrub - ZFS then managed to repair more than 5000 errors on my zpool, and all of my data survived.
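For reference, that repair run is nothing more than the standard scrub workflow (the pool name "tank" is just an example):

Code:
zpool scrub tank       # re-read every block and repair from parity/mirror copies
zpool status -v tank   # shows scrub progress plus any repaired or unrecoverable errors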


Little anecdote aside, always test your RAM! :D I think you'll be fine otherwise.
 
One last note regarding adding a SLOG / L2ARC to zpools in general: you can always add them later if needed; they don't have to be included when the pool is created. So you can test your 3x 8 TB RAID-Z1 setup without a SLOG first and add one later if it turns out to be really necessary. Just make sure those devices are redundant as well - that's why I think it would be best to use zvols on your mirror zpool.

EDIT: My colleague just told me that putting a vdev on a zvol is probably a bad idea - see this thread on Reddit. So your original idea was better after all ;) But as mentioned before, check whether you really need an L2ARC / a SLOG first.
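Either way, attaching and detaching these later is trivial (the pool name "tank" and the device paths below are placeholders):

Code:
zpool add tank log /dev/nvme0n1p1     # attach a SLOG to an existing pool
zpool add tank cache /dev/nvme0n1p2   # attach an L2ARC
zpool remove tank nvme0n1p2           # log and cache vdevs can be removed again at any time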
 
Everything sounds pretty reasonable, to be honest.

Are you really sure you need that NVMe for caching, though? I assume you mean an L2ARC and/or a SLOG? In most cases you don't really need an L2ARC. However, a SLOG of, let's say, just 8 GB might speed things up on your SMB / Nextcloud storage.
I am still studying the whole thing a bit, but from what I have seen, a SLOG is the most suitable for my case.
It is a 500 GB M.2 NVMe drive. I plan to connect the machine to a 10 Gbit network and didn't want the HDDs to be the bottleneck in data transfers.
I personally would do that, yes. If your two SSDs happen to be NVMe drives, you could mirror them (put them into RAID1) with ZFS; then you could put a zvol of a couple of gigabytes on that mirror and use it as a SLOG for your other pool. That way your SLOG is redundant as well (but still not protected from power outages). If you really need an L2ARC, you can also put that on a zvol on your SSD mirror.

EDIT: See my post below - vdev on zvol is probably a bad idea. My mistake!
At this point I am thinking of doing a "clean" installation of Proxmox on the two SSDs with ZFS.
I would highly suggest using ECC RAM, but if that's not possible, do a proper prolonged memtest (it's on the PVE ISO already for your convenience ;)) to really make sure your RAM is fine. ZFS can protect you against bit rot, drive failures and all that jazz, but not against faulty RAM.

I already posted elsewhere about my experience with faulty RAM; to sum it up: I've got a server at home with 4x 8 TB drives in RAID-Z1. I felt like adding some more RAM, and so I did; after a day or two, I started getting random kernel panics related to ZFS and its ARC. I thought it was related to a recent ZFS version upgrade I had done and even opened an issue - a ZFS maintainer then asked whether my hardware was fine. I ran a memtest and found 100+ errors within the first couple of minutes. I removed the faulty RAM and started a zpool scrub - ZFS then managed to repair more than 5000 errors on my zpool, and all of my data survived.


Little anecdote aside, always test your RAM! :D I think you'll be fine otherwise.
This really scares me, since the 48 GB of RAM are "Corsair Vengeance" modules, so definitely not high-end.


Thank you for the many tips.
 
I still have the RAM doubt.
Is there any way to make ZFS use as little RAM as possible, at the expense of the 500 GB NVMe SSD?

Also, what alerts can I set up to catch problems early?

In short, these are my doubts before I start configuring my machine:
- the RAM issue
- cache management to speed up data writes [exclusively via SMB etc. (on the VM and container side I should have no problems)]
 
I still have the RAM doubt.
Is there any way to make ZFS use as little RAM as possible, at the expense of the 500 GB NVMe SSD?

Also, what alerts can I set up to catch problems early?

In short, these are my doubts before I start configuring my machine:
- the RAM issue
- cache management to speed up data writes [exclusively via SMB etc. (on the VM and container side I should have no problems)]

By default, ZFS will use at most 50% of your total RAM, and that's a good thing - it's what makes it lightning fast. If you memtested your RAM, it should (in my opinion) be no problem. If an application needs RAM that ZFS is currently using, ZFS will hand it over (and take it back later). You really don't need to worry about that! :D
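That said, if you really want to cap the ARC, there is a module parameter for it. A minimal sketch, assuming you'd cap it at 8 GiB:

Code:
# /etc/modprobe.d/zfs.conf - limit the ARC to 8 GiB (value is in bytes)
options zfs zfs_arc_max=8589934592

Run update-initramfs -u -k all afterwards and reboot so the setting is applied at boot. Just be aware that a smaller ARC means a lower cache hit rate - the NVMe can't fully make up for missing RAM.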

An L2ARC can even slow your pool down in certain cases - every block cached on the L2ARC needs a header in RAM, so it eats into the ARC itself - and oftentimes it's a wasted effort, unless you have a specific workload that benefits from that much extra cache (which is unlikely).
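Before you decide, check how well the in-RAM ARC is already doing; both tools below ship with ZFS on PVE:

Code:
arc_summary   # detailed ARC report, including current size and hit ratios
arcstat 5     # live ARC hit/miss statistics, printed every 5 seconds

If the hit ratio is already in the high nineties, an L2ARC has almost nothing left to cache.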
 
