New to proxmox: which filesystem(s)?

the other

New Member · Apr 12, 2025
Hey everyone,
first post from someone just getting started with Proxmox...so hello!
I started out by trying Proxmox on my old PC hardware.

After doing some reading here and elsewhere, I am about to set up a new system.
Plan:
one SSD for PVE
2 SSDs for VMs
Since it is pure home usage, I am unsure about the filesystem setup...so any advice/idea is welcome.
For the PVE system disk (one datacenter SSD; no consumer drive, after many hints) I would go for ext4.
After setting up Proxmox I would then add a ZFS pool (with the 2 SSDs; datacenter, non-consumer hardware as well).

The system will not run on dedicated server hardware (home usage, energy, money ;)): it will have 64 GB of non-ECC RAM, a Ryzen 5 5600G CPU, and a normal consumer mainboard.

Any opinion on this? Does a setup like that make any sense? Goal: mainly home usage, around 3-4 VMs (1 with Docker, 1 with PBS, to play around with) and a handful of LXCs...
For storing data, I mount shares from my Synology NAS.
Is ext4 OK for the single Proxmox system disk?
Is a ZFS mirror OK for the 2 VM SSDs?
Or should I rather use the non-recommended software RAID route (install Debian, set up software RAID, then install Proxmox on top) with ext4 and LVM-thin?

Thanx for any input... :)
 
Hey everyone,
so I guess my question is too simple...no input yet. :)
I guess I will go for ext4 with LVM-thin then (in addition to the planned mirrored ZFS pool for VMs and local backup space).
Another question came up: my single system SSD for Proxmox is 480 GB, which is quite big for that, I guess. I read that during installation one can configure the size of the root partition. Usually it is around 1/4 of the SSD's total size, which seems a little too big. How can I change that during the installation process?
Thanks again for any input helping someone new to Proxmox. :)
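For reference, the installer does let you override this: when you pick the target disk there is an "Options" dialog with advanced LVM settings (described in the Proxmox VE admin guide). A sketch of the relevant knobs for an ext4/LVM install; the values below are just example numbers for a 480 GB disk, not recommendations:

```shell
# Advanced disk options in the PVE installer (ext4/xfs + LVM target).
# These are GUI fields in the "Options" dialog; example values only.
hdsize=480    # total disk space the installer may use (GB)
swapsize=8    # size of the swap LV (GB)
maxroot=40    # cap the root (/) LV instead of the default ~1/4 of hdsize
minfree=16    # space left free in the VG, e.g. for LVM snapshots (GB)
maxvz=0       # size of the data (local-lvm) LV; 0 skips creating it
```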
 
That seems a little too big
What benefit do you get from making it smaller? I like to use ZFS for everything because all the default storages share all of the pool's space.
With the default LVM/ext4 setup you have to assign a fixed amount to local and local-lvm, and local cannot use the space assigned to local-lvm, and vice versa.
I rarely use the same physical disk for both the OS and guests, but still.
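To make that concrete, here is a sketch of how the planned mirrored VM pool could be created and registered in PVE. The pool name, storage ID and device paths are made-up examples; use the /dev/disk/by-id paths of your actual SSDs:

```shell
# Create a mirrored (RAID1) ZFS pool on the two VM SSDs.
# ashift=12 assumes 4K-sector drives, which fits most modern SSDs.
zpool create -o ashift=12 vmpool mirror \
    /dev/disk/by-id/ata-EXAMPLE-SSD-1 \
    /dev/disk/by-id/ata-EXAMPLE-SSD-2

# Register the pool as a PVE storage for VM and container disks.
pvesm add zfspool vmdata -pool vmpool -content images,rootdir
```

All datasets on that pool then draw from the same free space, which is the flexibility described above.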
 
Hey there,
again, thanks for your input (and greetings, I guess you are the SteveITS from the netgate forum?)...
I will consider that and do some more reading on ZFS...
 
SteveITS from netgate forum
👋
:)
 
Any opinion on this?
Yes. And a strong one ;-)

 
Hey there,
ha, yeah, I read your post before. Thanks for reminding me... :)
So, the general opinion (strongly) leans toward taking that single ssd drive (480 GB), use it as zfs mirror 0, put those other 2 ssds in mirror zfs raid1?
Guess it's just my fear of something completely new (ZFS) that keeps making me consider ext4...
Thanks for your input!
 
So, the general opinion (strongly) leans toward taking that single ssd drive (480 GB), use it as zfs mirror 0,
No, that makes no sense. One single device can't be a mirror ;-)

Sorry, I have no good advice regarding a lonely device.

Technically you can use it standalone, also with ZFS. But I would never recommend that.

On the other hand, I would prefer ZFS on a single device over all the alternative filesystems. But maybe that's just me...
 
hey there,
thanx for clarifying that matter. So (yeah, I know, RAID for the system disk would be nice, but money isn't growing in my backyard yet) I'll just use ZFS instead of ext4 and LVM-thin, even though it's just 1 disk...
Just read about using 2 partitions on a single disk (as a possibility) as a RAID mirror, to make use of ZFS's ability to repair corrupted data...?
Since none of my ordered hardware has arrived yet, I'll give it a few more thoughts (and maybe collect some opposing opinions leaning toward ext4?).
 
Just read about using 2 partitions on a single disk (as a possibility) as a RAID mirror, to make use of ZFS's ability to repair corrupted data...?
Well..., I would like to just say "no".

But then..., ZFS can do that as an integral functionality, not as a "trick".

Code:
~# zfs get copies rpool
NAME   PROPERTY  VALUE   SOURCE
rpool  copies    1       default

~# zfs set copies=2 rpool/dummy
~# zfs get copies rpool/dummy
NAME         PROPERTY  VALUE   SOURCE
rpool/dummy  copies    2       local
This will store each block at two different physical locations. Be assured, performance will drop...

----
Code:
~# man zfsprops

     copies=1|2|3
       Controls the number of copies of data stored for this dataset.  These copies are in addition to any
       redundancy provided by the pool, for example, mirroring or RAID-Z.  The copies are stored on different
       disks, if possible.  The space used by multiple copies is charged to the associated file and dataset,
       changing the used property and counting against quotas and reservations.
 
hey there again,
thanks for your input. No, I don't intend to do something like that; I just read about it and then mixed it up in my head.
I have this one single disk for Proxmox itself, and I want to keep it simple. Since there won't even be any running VMs on that disk (planning on just having Proxmox, its settings, and a PBS to play around with), and since there seems to be no option to back up the system disk from within Proxmox itself (am I right? if not, sorry), I wonder what ZFS brings on the plus side for that setup.
With LVM-thin I could use snapshots as well. You see, kind of a whirlwind of thoughts in my head...so: sorry for asking those basic questions... :)
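For what it's worth, snapshots of the system datasets are one such plus: with a ZFS install they are a one-liner. A sketch; `rpool/ROOT/pve-1` is the root dataset name a default PVE-on-ZFS install uses, so check `zfs list` on your own system first, and the snapshot name is just an example:

```shell
zfs snapshot rpool/ROOT/pve-1@before-upgrade   # instant snapshot of the root dataset
zfs list -t snapshot                           # show existing snapshots
zfs rollback rpool/ROOT/pve-1@before-upgrade   # revert the root dataset to that state
```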
 
For me it's more a question of "why not?". Realistically you probably don't benefit from ZFS when it is only used for a single boot-only drive, but if you ever want to do more with it, you will already have the potential to do so.
 
Hey there,
Since you seem to share my idea:
Realistically you probably don't benefit from ZFS when only used for a single boot only drive
What are your thoughts about ZFS being more RAM-consuming (as far as I understand) than ext4+LVM-thin, without said benefits?
Being completely new to the topic, that seems like a strong plus for using ext4...or am I missing something? The system will have 64 GB of RAM; assuming the planned ZFS mirror is already taking its RAM share, I naively think: why give away more if there is no real benefit?
To be clear, I don't want to argue against ZFS, those are just questions I stumble over. ;)
 
Hey there,
Since you seem to share my idea

What are your thoughts about being more RAM consuming (as far as I understand) than ext4+lvmthin without said benefits?

This is nonsense: by default ZFS uses part of the RAM as a cache (the ARC). This is a good thing, since it speeds things up. The cache size can be configured:
https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage

The other features of ZFS, like compression, also need some resources.
As explained in this thread, it's also more flexible than LVM, so you can more easily use space not needed by the OS for other things (e.g. ISO images).
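The linked wiki page boils down to a single module option; a sketch, with the 8 GiB cap being just an example value:

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes; pick what suits your workload).
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# When the root filesystem itself is on ZFS, refresh the initramfs so the
# setting takes effect at boot.
update-initramfs -u -k all
```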
 
Hey there,
thank you. That article put things into perspective a bit more.
Indeed, I forgot to mention the point of allocating space more freely. Pointing that out again is appreciated!
So that indeed helped me make up my mind, and I'll go with the flow ;) and start out with ZFS all the way through.
Thank you all for helping, a very nice community. :)