Are my partitions logical and correct?

Nov 27, 2023
Hello everyone,

I am trying some things out and making my last plans before deploying my server into production.
I have 4x 2TB SSDs in my system and was thinking of the following disk configuration:

Create a ZFS RAID 1 for Proxmox VE 8 and tell the installer to only use 500GB of the disks. (leaving 1500GB unallocated)
Then when Proxmox VE 8 is installed, create a ZFS RAID 1 on the unused 2x 2TB disks for the VM disks. (Or should I use ZFS mirror?)
And finally, create two new partitions in the unallocated space of the two boot disks and configure them as two directory storages in Proxmox VE.

I want my backups set up this way for two reasons:
First: If something happens to the OS or I need to get the backups off the disks without access to ZFS, I can do this without issue. (Since it is just an ext4 partition.)
Second: If for whatever reason the backups take up too much space and fill the disk completely, it will still not cause issues for the host. (Proxmox VE and the backups do not share the same partition, so the backups cannot fill the disk to the point that Proxmox VE runs out of disk space.)

And last but not least:
Should I leave 10% of the disks unallocated, or will this not work due to ZFS? (Since the rule of thumb is to not let the disk fill up completely and to leave some reserve for wear-out reasons.)
 
First: If something happens to the OS or I need to get the backups off the disks without access to ZFS, I can do this without issue. (Since it is just an ext4 partition.)
Where is the problem with booting a ZFS-capable live Linux like Ubuntu, or even the PVE ISO in rescue mode, from a pen drive to access your ZFS pools?
And keep in mind that you not only have to back up the VMs/LXCs from the other pool, but also your stuff from the "local" storage as well as your node's config files. It wouldn't be great to back up the contents of your system disks to your system disks... so you probably want a fifth disk, or do the backups to a remote system, which would be a good idea anyway. Have a look at the "3-2-1 backup rule".
Second: If for whatever reason the backups take up too much space and fill the disk completely, it will still not cause issues for the host. (Proxmox VE and the backups do not share the same partition, so the backups cannot fill the disk to the point that Proxmox VE runs out of disk space.)
For that, ZFS has quotas. Just create a single 2TB pool, create a dataset for your backups, set a 1.5TB quota on that dataset and add it as another ZFS storage to PVE. That way you can't store more than 1.5TB of backups and the root filesystem will have the remaining space left.
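A minimal sketch of what that could look like on the CLI, assuming the pool is named "tank" and the dataset "tank/backup" (the names and the exact quota value are just examples). If the goal is to keep vzdump backups there, registering the dataset's mountpoint as a directory storage is one way to make PVE use it:

zfs create -o mountpoint=/backup tank/backup
zfs set quota=1500G tank/backup
zfs get quota tank/backup                               # verify the quota
pvesm add dir backup --path /backup --content backup    # make it usable for vzdump backups in PVE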

Should I leave 10% of the disks unallocated, or will this not work due to ZFS? (Since the rule of thumb is to not let the disk fill up completely and to leave some reserve for wear-out reasons.)
You can. But also keep in mind that you shouldn't fill a ZFS pool more than 80%. So in that case it would be 80% of 90% = 72%. As long as you tell the SSDs with TRIM/discard which sectors are empty and which are not, the SSD has those 20% for wear-leveling anyway.
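For reference, OpenZFS can also TRIM the pool itself, either continuously or on demand (the pool name "tank" is again just an example):

zpool set autotrim=on tank     # continuous TRIM as blocks are freed
zpool trim tank                # one-off/periodic TRIM, e.g. from a cron job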
 
Thanks for the feedback.

On my server I cannot boot my own ISOs directly (my hoster does not pay for the IPMI license that is required for this functionality), and if something happens to GRUB then I am not able to boot into the rescue mode of Proxmox VE itself.
In that case I need to boot into the rescue OS provided by my hoster. (Which is still on Debian 10.)
And since I am planning on getting UEFI and Secure Boot enabled by my hoster, I kind of expect that their rescue OS will not boot. (I see no other reason for them to have UEFI and Secure Boot disabled by default.)

And thanks for the ZFS info.
I am still very new to ZFS, so I am not (yet) familiar with all of its options.
But your solution seems to be a good way as well and I will definitely look into it.

And did I understand you correctly that overprovisioning by 10% is not needed for ZFS, since ZFS does it for me?
 
And did I understand you correctly that overprovisioning by 10% is not needed for ZFS, since ZFS does it for me?
No, ZFS won't do that for you on its own. If you want 10% always free for wear-leveling, I would partition the disks so that ZFS uses 100% of the capacity.
Then I would set a 90% quota on that pool's root so that ZFS cannot fill it more than 90%, not even by accident. Then you need to set up some monitoring software like zfs-zed, Zabbix or whatever, so that it sends you a notification once the pool fills up more than, for example, 75%. That way you can delete stuff or buy more disks before it exceeds 80%, where the pool becomes slower and starts fragmenting faster (which is bad, as you can't defrag it).
And you will have to set up discard/TRIM in every VM's guest OS, otherwise ZFS won't be able to free up the space of anything you delete from your virtual disks. Without this, ZFS also won't report to the SSD's firmware which sectors are actually free, so the firmware cannot use those free sectors for wear-leveling.
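A rough sketch of the quota part, assuming a pool named "tank"; the size is only an example, so check the real usable capacity of your mirror first:

zfs list -o name,used,avail tank     # see the actual usable capacity of the pool
zfs set quota=1.6T tank              # roughly 90% of a ~1.8T usable mirror; adjust to your numbers
zpool list -H -o capacity tank       # quick fill-level check, e.g. for a cron-based alert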
 
No, ZFS won't do that for you on its own. If you want 10% always free for wear-leveling, I would partition the disks so that ZFS uses 100% of the capacity.
Then I would set a 90% quota on that pool's root so that ZFS cannot fill it more than 90%, not even by accident. Then you need to set up some monitoring software like zfs-zed, Zabbix or whatever, so that it sends you a notification once the pool fills up more than, for example, 75%. That way you can delete stuff or buy more disks before it exceeds 80%, where the pool becomes slower and starts fragmenting faster (which is bad, as you can't defrag it).
And you will have to set up discard/TRIM in every VM's guest OS, otherwise ZFS won't be able to free up the space of anything you delete from your virtual disks. Without this, ZFS also won't report to the SSD's firmware which sectors are actually free, so the firmware cannot use those free sectors for wear-leveling.
Thanks for the follow up.

In that case I will configure ZFS to use 100% of the disk and just tell it not to exceed 80%. (This should not happen anyway, but in my opinion it is never a bad idea to have a fail-safe.)

And for the VMs, I (almost) only use templates, and the templates have SSD emulation and Discard turned on.
qemu-guest-agent is auto-installed and enabled by cloud-init, so everything should play nicely together.
And if I create a VM myself, then I turn those things on in the storage configuration GUI.
 
In that case I will configure ZFS to use 100% of the disk and just tell it not to exceed 80%. (This should not happen anyway, but in my opinion it is never a bad idea to have a fail-safe.)
I would still highly recommend setting up proper monitoring with notifications. In case a disk fails or the pool degrades because of too many errors, you want to know about it so you can fix it as soon as possible. And keep in mind that hitting the quota means nothing can be written anymore and all async writes still in RAM will be lost. So you will lose more or less data and maybe something will get corrupted. It is always better to have proper monitoring so you won't hit the wall in the first place.

And for the VMs, I (almost) only use templates, and the templates have SSD emulation and Discard turned on.
qemu-guest-agent is auto-installed and enabled by cloud-init, so everything should play nicely together.
That's not enough. You really have to mount your guests' filesystems with something like the "discard" option for ext4/xfs by editing the fstab, or set up a systemd service or cron job that runs "fstrim -a" every hour/day/week.
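For example, inside a Debian/Ubuntu guest (the UUID is a placeholder for your real root filesystem):

# /etc/fstab: continuous discard on an ext4 root filesystem
UUID=<your-root-fs-uuid>  /  ext4  defaults,discard  0  1

# or, instead of the mount option, enable the weekly timer shipped with util-linux that runs "fstrim -a"
systemctl enable --now fstrim.timer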
 
The docs tell me that Proxmox already installs zfs-zed by default.
And that by default the root user will be emailed. (I always configure the root email address in the Proxmox VE web interface so that I can receive emails, and I would assume this also covers the zfs-zed emails.)
Doc: https://pve.proxmox.com/wiki/ZFS_on_Linux#_configure_e_mail_notification

And for the discard part, I will have to look into that.
I mostly use the Debian "genericcloud" cloud images.
So I would assume the discard option is not configured by default and I will have to make a couple of changes to my cloud-init config to do this for me automatically.
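Something like this cloud-config snippet (an untested sketch, assuming the image ships util-linux's fstrim.timer, which the Debian cloud images should) would be enough to turn the periodic trim on at first boot:

#cloud-config
runcmd:
  - [ systemctl, enable, --now, fstrim.timer ]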
 
The docs tell me that Proxmox already installs zfs-zed by default.
Installed, yes, but did you configure its config files to tell it what to monitor and when to alert, and did you set up the postfix mail server that comes with PVE so that emails sent to the root user are forwarded to your real email address? I am not sure whether that changed with the introduction of the PVE notification system in PVE 8.1, where PVE is now capable of sending notifications without configuring postfix. But at least up to PVE 8.0 you had to set up your own mail server for zed/smartd/PVE to be able to send mails.
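For reference, the zed knobs live in /etc/zfs/zed.d/zed.rc; a minimal sketch with example values:

# /etc/zfs/zed.d/zed.rc
ZED_EMAIL_ADDR="root"            # where ZED sends mail; relies on root's mail being forwarded
ZED_NOTIFY_INTERVAL_SECS=3600    # rate-limit repeated notifications for the same event
ZED_NOTIFY_VERBOSE=1             # also mail on finished scrubs/resilvers, not only on faults

# then restart the daemon:
systemctl restart zfs-zed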
 
What I do for email is configure an SMTP server that is not hosted on the same server.
Then I configure an email address for my root user so that I get an email if something happens.
And lastly I create a new filter for my SMTP entry so that all email is sent through that one.

I will look into the zfs-zed email settings, however, since I do not know how the Proxmox VE 8.1 installer configures them by default.

And I only have one server (no cluster), which will run Proxmox VE 8.1 when I finally deploy it,
since I will use the latest stable version when deploying the server.
 
What I do for email is configure an SMTP server that is not hosted on the same server.
Then I configure an email address for my root user so that I get an email if something happens.
And lastly I create a new filter for my SMTP entry so that all email is sent through that one.
Correct.
 
