Which drives to choose for OS installation - Hetzner's 4 drive server

coldplug

New Member
Mar 18, 2023
[newbie alert]

I would like to hear your take on this:

I'm planning to get a Hetzner server with 4 drives, as I need more I/O than I currently have there - my current, almost 10-year-old server with 2 SATA drives (and OpenVZ) is heavily I/O bound.
Now I'm targeting a new auction server with something like 2× 8 TB HDDs + 2× 960 GB SSDs. And of course, I would like to move from OpenVZ to Proxmox, and I would like to use ZFS for everything - if that's not a bad idea (more on that below).

How would you plan to do this? Would you:
a) Install Proxmox on a ZFS mirror of the 2 HDDs, or
b) Install Proxmox on a ZFS mirror of the 2 SSDs
c) It doesn't matter, a or b (probably)

d) Or would you avoid ZFS for the boot device and use ZFS for the data devices only? But in that case I guess I would have to install the OS on a single drive, so I lose redundancy and I'm left with an unused drive?

I'm a bit anxious about using ZFS for the boot device without having physical access to the server... what are the chances of losing all data if I choose a ZFS mirror as the boot device, the installation gets corrupted for whatever reason, and I can't access the server except through their rescue system or an ordered KVM console? Also, installation there is a bit tricky as it involves ordering a KVM console, because Proxmox is not available in their standard installation images (AFAIK, unless something changed lately).

Let me know your thoughts,
Grateful,
Ivan
 
Proxmox, and I would like to use ZFS for everything - if that's not a bad idea (more on that below).
Good idea, but keep in mind that it is highly recommended to:
A) use ECC RAM
B) use enterprise/datacenter grade SSD with power-loss protection
C) not to use QLC SSD
D) not to use SMR HDDs
E) not to use HW raid card but a dumb HBA or onboard ports

And keep in mind that ZFS is about data integrity, not about performance. Nearly everything else will give you better IO.


How would you plan to do this? Would you:
a) Install Proxmox on a ZFS mirror of the 2 HDDs, or
b) Install Proxmox on a ZFS mirror of the 2 SSDs
c) It doesn't matter, a or b (probably)
Doesn't really matter, as the HDDs would be fast enough and you can store guests on the same disks as the PVE system.
But I would leave some space on the SSDs unallocated, so you can later manually create a partition on each of them and add a mirrored special device from those partitions. That boosts the performance of the HDD pool by storing all metadata on the SSDs and only data on the HDDs (see the sketch below).
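As a rough sketch of what that could look like later on (the pool name "tank" and the partition paths are just placeholders for your actual HDD pool and the leftover SSD partitions):

    # create a partition in the unallocated space on each SSD first (fdisk/sgdisk),
    # then attach them as a mirrored special vdev to the HDD pool:
    zpool add tank special mirror /dev/disk/by-id/ata-SSD1-part4 /dev/disk/by-id/ata-SSD2-part4

    # optionally also store small data blocks (<=16K here), not just metadata, on the SSDs:
    zfs set special_small_blocks=16K tank

Note that only newly written data/metadata ends up on the special device; existing data stays on the HDDs until it is rewritten.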

d) Or would you avoid ZFS for the boot device and use ZFS for the data devices only? But in that case I guess I would have to install the OS on a single drive, so I lose redundancy and I'm left with an unused drive?
There is no easy way to back up your system disks or PVE settings, so I would always want to have redundancy for everything.


I'm a bit anxious about using ZFS for the boot device without having physical access to the server... what are the chances of losing all data if I choose a ZFS mirror as the boot device, the installation gets corrupted for whatever reason, and I can't access the server except through their rescue system or an ordered KVM console?
ZFS isn't any less reliable as a root filesystem than any other option these days. I would even prefer it because of the bit rot protection and so on.

Also, installation there is a bit tricky as it involves ordering a KVM console, because Proxmox is not available in their standard installation images (AFAIK, unless something changed lately).
Get a server with a BMC and webKVM. Then you always have access to the host's console to fix things yourself.
 
First of all thank you,

I didn't mean to say that ZFS is less reliable; it's more about being able to rescue the system if it fails. For example, I'm thinking about performing a regular upgrade and reboot: if the server does not boot, what do I do if my boot device is ZFS? The thing is that I'm not as familiar with ZFS as I am with traditional file systems, especially in those emergency situations. That's why I'm a little bit afraid of using it - I'm well aware it is a more capable, safer and more advanced file system in general.

One more hypothetical question. Let's say I install Proxmox on a ZFS mirror consisting of the 2 SSDs. Then I create a second mirrored pool out of the 2 large HDDs for my VMs. But can I also *reliably* use the boot device (the SSD mirror) for VM use - for example, serving stuff that will enjoy faster reads compared to HDD speed? I guess if I create VMs on that SSD pool and things go wrong and I have to reinstall Proxmox, the VMs from that pool are gone?

IPMI/BMC, etc... that is out of budget unfortunately.

Thx again
Ivan
 
For example, I'm thinking about performing a regular upgrade and reboot: if the server does not boot, what do I do if my boot device is ZFS?
Then you need some webKVM/console to fix it, like with every other OS. Any live Linux with ZFS support, like Ubuntu, can mount that ZFS pool, and you can then chroot into it to fix stuff (see the sketch below).
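As a minimal sketch, assuming the default layout from the PVE ZFS installer (a pool called "rpool" with the root dataset mounted at /):

    # in the live/rescue system, with ZFS tools installed:
    zpool import -f -R /mnt rpool      # import the pool under an alternate root
    mount --rbind /dev  /mnt/dev       # bind the virtual filesystems needed for a chroot
    mount --rbind /proc /mnt/proc
    mount --rbind /sys  /mnt/sys
    chroot /mnt /bin/bash              # fix packages, configs, bootloader, ...
    # when done: exit the chroot, unmount, then
    zpool export rpool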
Then I create a second mirrored pool out of the 2 large HDDs for my VMs. But can I also *reliably* use the boot device (the SSD mirror) for VM use - for example, serving stuff that will enjoy faster reads compared to HDD speed?
Yes, that's the default. There will be a "local" storage, which is a "dir" storage pointing to a folder on the ZFS root-filesystem dataset, and another storage "local-zfs" of type "zfspool", which points to another dataset where the virtual disks are created as child datasets (for LXCs) or zvols (for VMs).
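For reference, /etc/pve/storage.cfg on a default ZFS install looks roughly like this (details can differ a bit between versions):

    dir: local
            path /var/lib/vz
            content iso,vztmpl,backup

    zfspool: local-zfs
            pool rpool/data
            content images,rootdir
            sparse 1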
I guess if I create VMs on that SSD pool and things go wrong and I have to reinstall Proxmox, the VMs from that pool are gone?
Yes. But RAID is no excuse for not having backups ;)
It's really easy to restore backed-up guests in case you need to reinstall your PVE.
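For example, restoring from the CLI looks roughly like this (VMIDs, archive names and paths are placeholders):

    # restore a VM backup to the ZFS storage
    qmrestore /path/to/vzdump-qemu-100.vma.zst 100 --storage local-zfs

    # restore a container backup
    pct restore 101 /path/to/vzdump-lxc-101.tar.zst --storage local-zfs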

And by the way... running IO-intense VMs off HDDs won't be great. You probably really want those VMs on SSD-only storage, with the HDDs only as cold storage.
 
Thank you.

I do have backups; I'm more concerned about the downtime that would arise if I'm not able to quickly fix an issue and have to reinstall and restore everything from backups. Just the network transfer of 2-3 TB of data means a lot of downtime, plus all the other tasks.

But OK, the part where you said that any live Linux with ZFS support can mount the ZFS pool seems promising, which means that Hetzner's rescue system should be able to as well. Is there any documentation that helps with such scenarios of fixing non-booting ZFS pools? I would certainly want to try all that as a learning step before attempting to bring it into production.

Thx ...
 
