What Configuration should I use for Small Businesses?

koredeye

New Member
Dec 31, 2025
Hi All,

Our small MSP has recently decided to move away from ESXi to Proxmox for our client server deployments. As I am heading up new server projects, I am learning this new environment and need some direction on the proper setup for our use case.

Before we made the decision to switch, our latest few server deployments were going to use ESXi with three 4 TB drives in RAID 5, running a single Windows Server VM.

After doing "enough" research, I installed Proxmox on the new servers with RAIDZ1, thinking this would be equivalent to RAID 5, and set up one Windows Server VM following a guide.

I then happened upon a post [here] that indicates I may be doing something wrong.

If anyone has some basic configurations I could reference for this use case or direct me to some other guides I would really appreciate it.
 
Do you have a RAID card installed?
RAID cards are usually NOT compatible with ZFS, since ZFS requires direct disk access and not all RAID cards allow that.
There is a RAID card installed, but I did not set up an array with the card. I think the Proxmox installer gave me direct access to the drives and set up software RAID with ZFS. Would that mean the card is compatible?
 
Also, would it be possible to get one extra 4 TB drive?
With four drives you can use ZFS striped mirrors (RAID 10) instead of RAID 5/RAIDZ1.

In my personal experience, anything other than mirrors is not worth the trouble unless the server has 8 or more drives.
I understand that the downside of mirrors is that they eat a lot of usable space, but they are also really easy to set up and use long term.
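For intuition, the capacity trade-off between these layouts can be sketched as follows. This is a back-of-envelope comparison with idealized numbers; a real pool loses some space to ZFS metadata and padding:

```python
# Rough usable-capacity comparison: 3-disk RAIDZ1 vs 4-disk striped mirrors.
# Idealized figures only; real ZFS pools reserve space for metadata/padding.

def raidz1_usable(n_disks: int, disk_tb: float) -> float:
    """RAIDZ1 stores one disk's worth of parity across the vdev."""
    return (n_disks - 1) * disk_tb

def mirror_usable(n_disks: int, disk_tb: float) -> float:
    """Striped mirrors (RAID 10): half the raw capacity is redundancy."""
    return (n_disks // 2) * disk_tb

print(raidz1_usable(3, 4.0))  # 3x 4 TB RAIDZ1  -> 8.0 TB usable, survives any 1 disk
print(mirror_usable(4, 4.0))  # 4x 4 TB mirrors -> 8.0 TB usable, survives 1 disk per pair
```

In this particular case the usable capacity comes out the same; the argument for mirrors is operational simplicity and faster resilvers, not space.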
 
There is a RAID card installed, but I did not set up an array with the card. I think the Proxmox installer gave me direct access to the drives and set up software RAID with ZFS. Would that mean the card is compatible?
Just because the Proxmox VE installer lets you install it this way does not mean it is supported. (In fact, ZFS on top of a hardware RAID array is not supported at all.)
It is well documented that even though hardware RAID cards do not prevent you from using ZFS, they will cause issues if they do not allow pass-through.
ZFS requires direct access to the drives themselves: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Hardware.html

If your card allows pass-through, or has firmware you can flash that makes it simply pass the drives through, then it is fine to use ZFS.
If your card does not, you SHOULD NOT use ZFS, as it will cause issues.
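One quick sanity check (a heuristic sketch, not an official Proxmox tool): disks hidden behind a hardware RAID volume usually report the controller's model string in `lsblk -d -o NAME,MODEL` output instead of the physical drive's model. The sample output and the list of suspicious model substrings below are illustrative assumptions:

```python
# Heuristic for spotting drives hidden behind a hardware RAID volume:
# in `lsblk -d -o NAME,MODEL` output, a virtual drive usually reports the
# controller's model string (e.g. "MegaRAID") rather than the disk's own.
# The sample output below is illustrative, not from a real system.

SUSPICIOUS = ("megaraid", "perc", "virtual", "logical")

def flag_raid_volumes(lsblk_output: str) -> list[str]:
    flagged = []
    for line in lsblk_output.strip().splitlines()[1:]:  # skip the header row
        name, _, model = line.partition(" ")
        if any(s in model.lower() for s in SUSPICIOUS):
            flagged.append(name)
    return flagged

sample = """NAME MODEL
sda  ST4000NM000A
sdb  ST4000NM000A
sdc  MegaRAID_Virtual_Disk
"""
print(flag_raid_volumes(sample))  # ['sdc'] -> likely a controller-managed volume
```

If a device shows up this way, ZFS is talking to the controller's virtual disk, not the physical drives, which is exactly the unsupported configuration described above.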
 
Just because the Proxmox VE installer lets you install it this way does not mean it is supported. (In fact, ZFS on top of a hardware RAID array is not supported at all.)
It is well documented that even though hardware RAID cards do not prevent you from using ZFS, they will cause issues if they do not allow pass-through.
ZFS requires direct access to the drives themselves: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Hardware.html

If your card allows pass-through, or has firmware you can flash that makes it simply pass the drives through, then it is fine to use ZFS.
If your card does not, you SHOULD NOT use ZFS, as it will cause issues.
Thank you for the clarification! It seems our card (ThinkSystem RAID 940-8i 4GB Flash PCIe Gen4 12Gb) does support pass-through, so I believe I am in the clear there, at least.

Regarding using ZFS mirrors over RAIDZ1: at the moment I don't have access to any more drives without going through a whole ordeal of quotes and purchase timelines with our distributor. Is there a significant downside or issue we will experience running the current configuration?

If it comes down to it and we need to purchase more drives (they are pretty expensive for our small client), we will bite the bullet, but for now I am trying to work with what I've got.
 
An extra drive is not required at all.
I mostly suggest it because when someone builds a server that needs redundancy, they generally have an extra drive somewhere.
And in that case I would recommend not getting burned by ZFS's complexity. (ZFS has a steep learning curve when things go wrong, and when things go wrong is not the time to learn ZFS.)

And given that your card supports pass-through, it should not cause issues.
Did you configure ZFS via the installer, or in some other way?
 
An extra drive is not required at all.
I mostly say that because when someone builds a server that needs redundancy, they generally have an extra drive somewhere, and in that case I would recommend not getting burned by ZFS's complexity. (ZFS has a steep learning curve when things go wrong.)

And given that your card supports pass-through, it should not cause issues.
Did you configure ZFS via the installer, or in some other way?
Correct, ZFS was configured via the installer.
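For reference, the layout the installer created can be confirmed afterwards with `zpool status`. A minimal sketch of reading the vdev type out of that output (the sample text below is illustrative, not taken from the OP's server):

```python
# Sketch: confirm the pool topology from `zpool status` output.
# The sample is typical output for a 3-disk RAIDZ1 install; on a real
# host you would feed in the actual command output instead.

def vdev_types(zpool_status: str) -> set[str]:
    """Collect vdev layout keywords (mirror/raidz*) from `zpool status` text."""
    types = set()
    for raw in zpool_status.splitlines():
        token = raw.strip().split(" ")[0]  # first column, e.g. "raidz1-0"
        base = token.split("-")[0]         # drop the vdev index suffix
        if base in ("mirror", "raidz1", "raidz2", "raidz3"):
            types.add(base)
    return types

sample = """  pool: rpool
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0
            sdc3    ONLINE       0     0     0
"""
print(vdev_types(sample))  # {'raidz1'}
```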
 
Okay, then I would expect everything to just work.
What errors are you getting? (Or is it not even booting?)
Everything is working fine at the moment. I was just looking for suggestions/clarifications, as these are my first Proxmox deployments and I want to keep things as basic as possible so my team and I won't be completely lost while we are learning.
 
As far as I know (and from my personal experience with ZFS), it should be fine.
It is more a case of every option having its own pros, cons, and issues.

The big thing with Proxmox VE is that it writes A LOT of (logging) data to the drives, which completely destroys consumer-grade drives. (Generally within months for really cheap consumer drives.)
And since RAIDZ also moves a lot of data between the drives, you get significant write amplification that wears consumer-grade drives out even faster.
And that is before even factoring in the VM disks and the writes the VMs themselves will be doing.

This is also why everyone who is not running a homelab on their own money will say ZFS requires enterprise-grade drives, as their TBW rating is far higher.
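To put rough numbers on that, here is a back-of-envelope endurance estimate. Every figure in it (the TBW ratings, the host write rate, the amplification factor) is a hypothetical assumption for illustration, not a measured Proxmox value:

```python
# Back-of-envelope drive endurance estimate. All numbers used below are
# hypothetical assumptions for illustration, not measured Proxmox figures.

def days_until_tbw(tbw_tb: float, host_mb_per_s: float, amplification: float) -> float:
    """Days to exhaust a drive's rated TBW at a constant host write rate,
    scaled by a write-amplification factor (ZFS + RAIDZ + SSD internals)."""
    tb_per_day = host_mb_per_s * amplification * 86400 / 1e6  # MB/s -> TB/day
    return tbw_tb / tb_per_day

# e.g. a 150 TBW budget consumer SSD, 2 MB/s of steady host writes
# (logs + VM traffic), 10x total write amplification:
print(round(days_until_tbw(150, 2.0, 10.0)))    # 87 -> dead in about 3 months
# The same workload against a 10000 TBW enterprise SSD:
print(round(days_until_tbw(10000, 2.0, 10.0)))  # 5787 -> roughly 16 years
```

The exact figures will vary wildly per deployment, but the shape of the math is why the "consumer drives die in months" warning keeps coming up.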
 