Help with new setup (ZFS), CPU/Memory Allocation

dva411

Feb 13, 2024
I'm brand new to Proxmox. I just got a NUC (today) with an i5-13500H CPU, 32GB DDR5 RAM, 2x 2TB NVMe drives, and 2x 2.5Gb NICs. I will be running 3-4 VMs:
  1. OPNsense (or Sophos) and AdGuard Home
  2. Docker containers with the ARR stack, maybe Plex in Docker if it will work with hardware acceleration, and various other server containers
  3. Home Assistant
  4. If I haven't bogged down my server, I'd like to play around with a Windows VM (see what I can do with Blue Iris?). Experimental at this point.
I'd like to use ZFS (for mirroring and snapshots). I currently use a Pi 4B for number 2 above. I'm writing downloads to an NVMe and then moving them to a spinning 18TB drive. I'd like to do something similar on the NUC.
  1. Do you foresee any performance issues with using ZFS given the RAM-to-NVMe ratio and VM usage pattern?
  2. Can I/should I use the same NVMe drives for downloads as I am for my root and VMs? (I'm trying to save some money, and I only have the 2 NVMe slots; I could only use a USB adapter for more space, if that would be efficient and help.) If I can use the 2x 2TB NVMe drives, I want to set them up so the downloads aren't mirrored (I don't want to add wear to the drives for something that is just temporary). If possible, how can this be done? My thought was, during the initial setup of Proxmox, to not allocate 500GB to Proxmox, leaving approximately 1.3TB for the Proxmox root and VMs. Is that how it works? Does it partition my hard drive? Is that going to add complexity/undue maintenance headaches? Is there a different way to accomplish not mirroring a portion of those drives?
  3. Any thoughts on CPU and initial RAM allocation for the VMs listed above? I'm most concerned about optimizing items 1-3 in my VM list. If I can't make 4 work, not an issue.
That's enough for this thread; I'll post another regarding networking.

Thanks!
 
Do you foresee any performance issues with using ZFS given the RAM-to-NVMe ratio and VM usage pattern?
You should use proper SSDs (enterprise grade, no QLC, with power-loss protection) with ZFS, otherwise the performance and life expectancy might not be great. RAM size shouldn't be a problem; it will work fine with 2-4GB for the ARC.
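If you do want to cap the ARC, a minimal sketch using the standard OpenZFS tunable zfs_arc_max (the 4GiB value is only an example, adjust to taste):

# limit the ARC to 4 GiB (4 * 1024^3 bytes)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all    # so the limit also applies at boot
# apply immediately without a reboot:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max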

Can I/should I use the same NVMe drives for downloads as I am for my root and VMs? (I'm trying to save some money, and I only have the 2 NVMe slots; I could only use a USB adapter for more space, if that would be efficient and help.) If I can use the 2x 2TB NVMe drives, I want to set them up so the downloads aren't mirrored (I don't want to add wear to the drives for something that is just temporary). If possible, how can this be done? My thought was, during the initial setup of Proxmox, to not allocate 500GB to Proxmox, leaving approximately 1.3TB for the Proxmox root and VMs. Is that how it works?
Yes, that's possible.
Does it partition my hard drive?
You would need to manually partition, format and mount it later via CLI if you want some other non-mirrored filesystem for your downloads.
Is that going to add complexity/undo maintenance headaches?
I did something similar. It's working OK.
Is there a different way to accomplish not mirroring a portion of those drives?
No.
Any thoughts on CPU and initial RAM allocation for the VMs listed above? I'm most concerned about optimizing items 1-3 in my VM list. If I can't make 4 work, not an issue.
RAM is what usually runs out first, as you can't overprovision it. With 2GB RAM for PVE and 2-4GB for ZFS's ARC, you probably don't want to allocate more than 24GB to guests. OPNsense should work fine with 2GB RAM unless you want to use IPS. Home Assistant 2GB too. For Windows I wouldn't go below 8GB. That leaves about 12GB for your AdGuard + Plex + ARR stack. So 32GB might be enough.
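As a rough sketch of how those numbers could be assigned from the CLI (the VM IDs are made up; the same settings live under each VM's Hardware tab in the GUI):

qm set 101 --memory 2048  --cores 2   # OPNsense
qm set 102 --memory 12288 --cores 6   # Docker VM: AdGuard + Plex + ARR stack
qm set 103 --memory 2048  --cores 2   # Home Assistant
qm set 104 --memory 8192  --cores 4   # Windows (optional)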
 
Thanks Dunuin.

I'm thinking about it more; I probably should reserve half the disk for backups and downloads. To make this happen, do I partition and format the drives in advance of the installation? Since both ZFS disk volumes have to be the same size, do I create two partitions on each drive (one for ZFS at 1.3TB and one for backups and downloads at 500GB)? That means the backups and downloads could be on different disks. Would you put the downloads on the primary and any snapshots on the mirror (and will that be easy to tell which is which)? When I launch setup, does it ask me which partitions I want to use for the install, or does it only show me the disks? What am I going to need to do to have my Docker containers access the non-ZFS partitions (e.g. the ARR stack saving temp downloads to a downloads folder)?

FYI, I'm going to post another thread with some general network questions regarding how to optimize based on my hardware and run OPNsense as an NGFW.
 
I'm thinking about it more; I probably should reserve half the disk for backups and downloads.
You shouldn't store your backups on the same disk as the data you want to back up. Get a dedicated disk for that, or even better, a dedicated PBS host.

To make this happen, do I partition and format the drives in advance of the installation?
Yes, see "hdsize" in paragraph "Advanced ZFS configuration options": https://pve.proxmox.com/wiki/Installation


Since both ZFS disk volumes have to be the same size, do I create two partitions on each drive (one for ZFS at 1.3TB and one for backups and downloads at 500GB)? That means the backups and downloads could be on different disks. Would you put the downloads on the primary and any snapshots on the mirror (and will that be easy to tell which is which)?
You can't store snapshots on another partition. They are always part of the storage that is doing the snapshot.


When I launch setup, does it ask me which partitions I want to use for the install, or does it only show me the disks? What am I going to need to do to have my Docker containers access the non-ZFS partitions (e.g. the ARR stack saving temp downloads to a downloads folder)?
It only lets you choose whole disks, which will then be wiped. Use hdsize to tell it to leave some space unallocated, which you can then manually partition via CLI after the install.
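A minimal sketch of that post-install CLI work, assuming the free space becomes partition 4 on /dev/nvme0n1 (the device name and mount point are examples, adjust to your system):

sgdisk -n4:0:0 -t4:8300 /dev/nvme0n1        # create partition 4 from the unallocated space
mkfs.ext4 /dev/nvme0n1p4                    # format it
mkdir -p /mnt/downloads
echo '/dev/nvme0n1p4 /mnt/downloads ext4 defaults 0 2' >> /etc/fstab
mount /mnt/downloads
# optional: make PVE aware of it as a directory storage
pvesm add dir downloads --path /mnt/downloads --content backup,iso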
 
Thanks. Will it be easy to mount those partitions from inside containers/VMs? I assume ext4 would be the way to go?
 
Thanks. Will it be easy to mount those partitions from inside containers/VMs? I assume ext4 would be the way to go?
You usually don't want to mount partitions inside VMs. It would be possible with disk passthrough in case only a single VM needs to access that partition (as mounting a partition in multiple VMs would corrupt it), but then it would be better to have an LVM-Thin pool and store virtual disks on it.
In case multiple VMs should access the same files (and you probably want that with your ARR stack + Plex + torrent client), you would need to work with SMB/NFS shares. And PVE is not a NAS and won't offer file sharing via SMB/NFS out of the box. So you probably want to use that partition with some kind of NAS VM/LXC and then let that NAS OS share your downloads with the other VMs.
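For illustration, the two options above could look roughly like this (the VM ID, device paths and storage names are assumptions):

# option A: pass the raw partition through to a single VM
qm set 100 -scsi1 /dev/disk/by-id/nvme-YOURDISK-part4
# option B (usually better): turn the partition into an LVM-Thin pool and store virtual disks on it
pvcreate /dev/nvme0n1p4
vgcreate vgdownloads /dev/nvme0n1p4
lvcreate -L 400G -T vgdownloads/data     # size is an example; leave some headroom for pool metadata
pvesm add lvmthin downloads-thin --vgname vgdownloads --thinpool data --content images,rootdir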
 
OK... I was using an SMB mount on my Raspberry Pi to store my library (I moved the files over to the share after the download completed). In that case, I mounted the SMB share in Linux and created Docker volumes pointing to that mount. Is the concept similar with Proxmox? I mount at an OS level, and then I can have volumes reference it? Do you think there is a lot of overhead on an SMB/NFS share (for downloads)? I suppose there aren't really other options though. Even if I used a USB NVMe, I'd be in the same spot; I can only pass it through to one container. Correct? Thanks again... I'm learning.
 
On that point re: a USB NVMe. If it were you, would you use that and keep my internal NVMe just for Proxmox, maxing it out to 1.8TB mirrored? I was only going to partition it because I thought having fast PCIe storage would speed up the downloads. I have a USB NVMe Gen 3 that I could use. I could spin up a NAS container and have it use the USB storage, if in the end the overhead will make the PCIe advantage moot.
 
Even if I used a USB NVMe, I'd be in the same spot; I can only pass it through to one container. Correct? Thanks again
VMs: can only use SMB/NFS shares
Unprivileged LXC: can only use bind-mounts (see the sketch below)
Privileged LXC: can use SMB/NFS shares + bind-mounts
Docker containers: PVE doesn't support Docker containers out of the box; you usually run them virtualized inside a VM or LXC (inside a VM is the recommended way)
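A bind-mount, for reference, is a one-liner on the host (the container ID and paths are made-up examples):

# expose the host folder /mnt/downloads inside LXC 105 as /downloads
pct set 105 -mp0 /mnt/downloads,mp=/downloads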

Do you think there is a lot of overhead on an SMB/NFS share (for downloads)? I suppose there aren't really other options though.
Yes, that adds a lot of overhead, but mostly for IOPS performance, not when sequentially reading/writing big files like movies.

On that point re: a USB NVMe. If it were you, would you use that and keep my internal NVMe just for Proxmox, maxing it out to 1.8TB mirrored? I was only going to partition it because I thought having fast PCIe storage would speed up the downloads. I have a USB NVMe Gen 3 that I could use. I could spin up a NAS container and have it use the USB storage, if in the end the overhead will make the PCIe advantage moot.
I don't know what internet connection you've got, but usually an NVMe SSD won't give you any faster downloads. Even a SATA HDD can write at something like 160MB/s, which would be fast enough to fully saturate a 1Gbit internet connection. The biggest problem with USB storage is that it's usually not as reliable as internal connections like SATA/SAS/NVMe.
 
Got it. Thanks. I think I'll just use the external drive as an SMB share for storage/downloads and create a ZFS RAID 1 with my entire internal drives. That will keep it simple. The "external" is actually an internal NVMe in a USB enclosure; all I'm doing is writing to it and moving/erasing, so I'll keep that off my internal drives. Yes, I was planning on running Docker inside a Debian VM. I've read some conflicting things regarding VMs and LXCs. My basic understanding is that LXCs share the same kernel and use fewer resources. BUT... I have an Intel 13th-gen i5. It has something like 6 performance cores and 6 efficiency cores. In a video I watched, he indicated that VMs were working well, but LXCs, at times, were getting some strange allocations of CPUs (e.g. heavy processes getting all efficiency cores). Since I'm resource constrained, I'd like to give LXCs a chance, but I'm a little spooked about the CPU allocation. Perhaps that's been addressed (the video is more than a year old). I've also read mixed reviews about Docker in LXCs: some say it works fine, others say they've lost their containers. I also read something yesterday that led me to believe you don't need to pass the GPU through to an LXC to get hardware transcoding (it can be virtualized and shared?), which has me thinking about using an LXC for Plex. I'd like to combine it with Docker if possible, as it keeps maintenance down with Watchtower keeping everything up to date. It makes things very easy. Do you have any thoughts? If it were you, would you give LXC/Docker a try for my ARR stack and Plex, or would you take that straight to a VM because it's not worth the hassle?
 
Yes, LXCs aren't virtualized; they share the hardware and kernel with the PVE host, and only the Linux userland is different. As LXCs and the host are basically the same and not fully isolated, you get less overhead and can share hardware like the GPU, and folders without SMB/NFS via bind-mounts.
But as they are neither virtualized nor fully isolated, they are also less secure (I personally wouldn't use them for anything you want to make public... so no Plex or Home Assistant that you might want to access on the go or share with friends...) and easier to break, because of more dependencies on the host and more privilege complications.
 
I created my first PVE instance. In the end I decided to leave some space on my internal NVMe, as I figured I might use it for something in the future, and it's probably easier to grow ZFS than shrink it if I determine the extra space isn't needed.

I had 2x 2TB internal NVMe drives. The install informed me that my RAID 1 would consume 1.83TB if I used the whole disk. I reduced it down to 1.5TB. After the install it appears that I have 1.61TB. I also have a partition 2 for EFI that is 1.07GB. I used fdisk to create a partition 4 for the remaining space. When I add up the partitions, it seems to be about equal to the total disk size of 2.0TB, which is greater than I thought I could provision for storage (1.83TB).

1) Why is ZFS showing 1.61TB vs the 1.5TB I entered?
2) I didn't think I could use the full capacity of any disk. I'm surprised I have that much free space (I expected 1.83TB vs 2.0TB).
3) What is partition 2 supposed to be used for? Is that the same as the "local (vmhost)" directory under storage? I just downloaded an ISO that I'm going to use to create an OpenMediaVault container into local. Is that not a good idea?
4) What is local (vmhost) supposed to be used for?
5) How do I get partition 4 to be usable? Is that where I should be downloading ISOs? Where do I set the usage restrictions, and are there steps I need to do first? I assume I need to do something extra before I use it in OpenMediaVault as an NFS or SMB share. Will it show up eventually as an entity under my host?
6) Do I need to do anything with pools, or does ZFS just handle everything for containers, etc.?
7) I assume that I can just partition disk 2, just like I did disk 1. In the guide it indicated that I could only do it to disk 1 (which I must have misunderstood, because the mirrors are the same size, and I must have the same available space over there).
8) Is there anything else I need to consider or change (cache, swap, etc.), or would you roll with this?


[Attached screenshots: disk and partition layout after the install]
 
1) Why is ZFS showing 1.61TB vs the 1.5TB I entered?
Are you sure both are TB? If you read "T", this could stand for "TiB" or "TB", which are different things. 1.5 TiB = 1.6493 TB.
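For reference, the arithmetic behind that figure: 1 TiB = 1024^4 bytes ≈ 1.0995 × 10^12 bytes, so 1.5 TiB × 1.0995 ≈ 1.649 TB.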
What is partition 2 supposed to be used for? I just downloaded an ISO that I'm going to use to create an OpenMediaVault container into partition 2. Is that not a good idea?
I guess you mean partition 4? Partition 2 is the ESP for booting.

3) How do I get partition 4 to be usable? Is that where I should be downloading ISOs? Where do I set the usage restrictions, and are there steps I need to do first? I assume I need to do something extra before I use it in OpenMediaVault as an NFS or SMB share.
You would need to format it with a filesystem and mount that filesystem. All done via CLI.
Or alternatively, in case you want to use LVM-Thin or ZFS to put your virtual disks on it, you would need to manually create a ZFS pool or an LVM PV + VG + thin pool via CLI.

4) Do I need to do anything with pools, or does ZFS just handle everything for containers, etc.?
PVE uses the ZFS defaults. It's always a good idea to optimize ZFS for your pool layout and workload.
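A few of the usual knobs, just as a sketch (whether they make sense depends on your workload; rpool is the default pool name of a PVE ZFS install):

zfs set atime=off rpool          # skip access-time updates on every read
zfs set compression=lz4 rpool    # cheap compression (often already the default)
zfs get all rpool | less         # review what is currently set
# for zvol-backed VM disks, the volblocksize is set per storage
# via the "blocksize" option in /etc/pve/storage.cfg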
5) I assume that I can just partition disk 2, just like I did disk 1. In the guide it indicated that I could only do it to disk 1 (which I must have misunderstood, because the mirrors are the same size, and I must have the same available space over there).
Yes, you should have the same amount of unallocated sectors on the other disk too that would be available to be partitioned.
 
You are fast! I think I was still editing my questions and I got answers back! Thanks again for all your help. You answered many of my questions... But I'm still a little confused about local vs local-zfs. I downloaded that ISO into local (because it was the only place I could put an ISO). Is that partition 2, or something different?

1) Why is ZFS showing 1.61TB vs the 1.5TB I entered? (You answered: TiB vs TB)

2) I didn't think I could use the full capacity of any disk. I'm surprised I have that much free space (I expected 1.83TB vs 2.0TB).
You answered. OK, I'll stop worrying about that.

3) What is partition 2 supposed to be used for? Is that the same as the "local (vmhost)" directory under storage? I just downloaded an ISO that I'm going to use to create an OpenMediaVault container into local. Is that not a good idea?


4) What is local (vmhost) supposed to be used for?


5) How do I get partition 4 to be usable? Is that where I should be downloading ISOs? Where do I set the usage restrictions, and are there steps I need to do first? I assume I need to do something extra before I use it in OpenMediaVault as an NFS or SMB share. Will it show up eventually as an entity under my host? You answered: I will need to format it. When you said put my virtual disks on it, were you referring to using it for containers or VMs? Wouldn't I just use my existing ZFS partition? Or did you mean something else? I was thinking about ext4 and then putting it into a share.


6) Do I need to do anything with pools, or does ZFS just handle everything for containers, etc.? You answered.


7) I assume that I can just partition disk 2, just like I did disk 1. In the guide it indicated that I could only do it to disk 1 (which I must have misunderstood, because the mirrors are the same size, and I must have the same available space over there).
8) Is there anything else I need to consider or change (cache, swap, etc.), or would you roll with this? You answered.
 
But I'm still a little confused about local vs local-zfs. I downloaded that ISO into local (because it was the only place I could put an ISO). Is that partition 2, or something different?
"local" is a directory type storage pointing to /var/lib/vz on your root filesystem (which is the ZFS dataset "rpool/ROOT/pve-1").
"local-zfs" is a ZFSpool type storage pointing to the ZFS dataset "rpool/data".
So both storages are part of the mirrored 3rd partitions.
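You can see those definitions in /etc/pve/storage.cfg; a default ZFS install looks roughly like this:

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1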

1) Why is ZFS showing 1.61TB vs the 1.5TB I entered? (You answered: TiB vs TB)
Where did you get those numbers? I prefer to get usage via zpool list -v and zfs list -o space.

2) I didn't think I could use the full capacity of any disk. I'm surprised I have that much free space (I expected 1.83TB vs 2.0TB).
You answered. OK, I'll stop worrying about that.
Also keep in mind that you shouldn't fill a ZFS pool to more than 80%. So in case ZFS reports that you have 1.5TB available, you shouldn't fill it with more than 1.2TB.

4) What is local (vmhost) supposed to be used for?
"local-zfs" is for virtual disks. "local" is for everything else (ISOs, templates, snippets, ...).

When you said put my virtual disks on it, were you referring to using it for containers or VMs? Wouldn't I just use my existing ZFS partition? Or did you mean something else? I was thinking about ext4 and then putting it into a share.
It doesn't matter if it's a VM or an LXC. The question is how to get it into the VM or LXC. A VM is isolated and can't access your ext4 on that 4th partition unless you use disk passthrough. If you want to use that partition in an OMV VM to share it, I would use that partition as a storage for virtual disks and give that VM a virtual disk on that storage. You could then let OMV format that virtual disk with ext4 or whatever.
Formatting that partition with ext4 would only make sense in case you want to bind-mount its contents into an LXC.

Wouldn't I just use my existing ZFS partition?
Yes, normally you would simply use the whole disks for your ZFS pool and then put those virtual disks on the "local-zfs" storage. But the whole point of not using the full disk for ZFS was that you don't want to waste space and add wear by mirroring your downloads. If you put that OMV virtual disk on "local-zfs", then it's mirrored. So you would need to create an additional storage, similar to "local-zfs" but not mirrored, for example by creating two LVM-Thin storages or two ZFS pool storages on disk 1 partition 4 + disk 2 partition 4.
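As a sketch of that last suggestion, assuming the leftover space is partition 4 on each disk (the pool and storage names are made up):

zpool create -o ashift=12 dlpool1 /dev/nvme0n1p4
zpool create -o ashift=12 dlpool2 /dev/nvme1n1p4
pvesm add zfspool downloads1 --pool dlpool1 --content images,rootdir
pvesm add zfspool downloads2 --pool dlpool2 --content images,rootdir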
 
Hey Dunuin,

I had everything up and running well. However, I was having some issues with GPU passthrough, so I decided to do a fresh install. I backed up my VMs. After the install, my backup drive (LVM-Thin) was recognized, but my backups weren't showing up under Backups. When I looked further, it was showing sda but no partition. In screwing around with it, I think I made a fatal mistake: I went to the node's Disks section in the GUI, and under thin pool, I removed the pool. I suppose there is no way I can get to those backups now?
 
I went to the node's Disks section in the GUI, and under thin pool, I removed the pool.
You could check vgs and lvs. But if you really clicked "More -> Destroy" at "Node -> Disks -> LVM-Thin" this doesn't look good.
 
Both lvs -a and vgs -a return nothing.
1. Is that what you wanted me to check? Is that normal (for both of those commands to return nothing) if currently my node only has a single ZFS RAID 1 pool?
2. I'm still trying to wrap my head around everything. Just so my understanding is correct: my files are still on the disk, but I destroyed the logical volume, so I have no pointers to the blocks to be able to retrieve the files. Is that close?
3. Going forward, if I have backups on a drive that is LVM-Thin (created on a previous instance of Proxmox) and I reinstall Proxmox, how do I go about restoring from the backups? It showed under thin pool, but the disk didn't show a partition and my files weren't showing up under Backups.
 
1. Is that what you wanted me to check? Is that normal (for both of those commands to return nothing) if currently my node only has a single ZFS RAID 1 pool?
But you said your backups were on the LVM-Thin pool that you deleted.

2. I'm still trying to wrap my head around everything. Just so my understanding is correct: my files are still on the disk, but I destroyed the logical volume, so I have no pointers to the blocks to be able to retrieve the files. Is that close?
Yes, most likely. Your best bet then would be some data rescue suite or company... or setting everything up from scratch again, in case that might be less work.

Going forward, if I have backups on a drive that is LVM-Thin (created on a previous instance of Proxmox) and I reinstall Proxmox, how do I go about restoring from the backups? It showed under thin pool, but the disk didn't show a partition and my files weren't showing up under Backups.
LVM-Thin is a block-based storage. You can only store backups on a filesystem-based storage. Without manually creating a thin volume and mounting it as a directory storage via CLI, you shouldn't be able to store backups on a thin pool...
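For completeness, that manual route would look roughly like this (the VG/pool/volume names and the size are assumptions):

lvcreate -V 500G -T vgbackup/data -n backupvol   # thin volume inside an existing thin pool
mkfs.ext4 /dev/vgbackup/backupvol
mkdir -p /mnt/backups
echo '/dev/vgbackup/backupvol /mnt/backups ext4 defaults 0 2' >> /etc/fstab
mount /mnt/backups
pvesm add dir backup-store --path /mnt/backups --content backup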
 
