Best practice regarding storage type

jeremy_fritzen

New Member
Nov 29, 2020
Hi!

I'm new to Proxmox. I have some questions about storage.
If needed, my current architecture is quite simple:
- 1 HP Microserver Gen 8
- 1 Intel Xeon E3-1220 V2 3.10 GHz
- 16 GB RAM
- 1 USB Key for Proxmox
- 4 HDDs (3 TB each) and 1 SSD (256 GB) and Proxmox


Regarding my installation:
- What storage device should I use to host ISO images & container templates? VM images & container storage?
- Why doesn't Proxmox allow selecting ISO images or container templates as content for a ZFS storage?
- Can I create a ZFS storage to store ISO, templates, VM images and containers? If so, how can I do that?

Thanks again for your help!
 
- What storage device should I use to host ISO images & container templates? VM images & container storage?
HDDs are terrible as VM/LXC storage, so I would use them for cold storage, ISOs, VM/LXC backups and container templates. I would use the SSD with LVM-Thin for the system and as VM/LXC storage.
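
If you want to set that up on the shell, a rough sketch could look like this (the device name /dev/sdb, the VG name "ssd" and the storage ID "ssd-thin" are just placeholders; the GUI under the node's Disks -> LVM-Thin does the same job):

pvcreate /dev/sdb                            # assumes the SSD shows up as /dev/sdb and holds nothing you need
vgcreate ssd /dev/sdb
lvcreate -l 90%FREE --thinpool data ssd      # leave some headroom for pool metadata and growth
pvesm add lvmthin ssd-thin --vgname ssd --thinpool data --content images,rootdir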
- Why doesn't Proxmox allow selecting ISO images or container templates as content for a ZFS storage?
- Can I create a ZFS storage to store ISO, templates, VM images and containers? If so, how can I do that?
If you want to store things other than VMs/LXCs on a ZFS pool, you need to use a directory storage. For that, create a dataset (zfs create YourPool/NewDataset), go to Datacenter -> Storage -> Add -> Directory and point it to the mountpoint of that dataset (should be "/YourPool/NewDataset").
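
The same thing can also be done entirely on the shell; a small sketch, assuming the pool is called "YourPool" and using "iso-store" as a made-up storage ID:

zfs create YourPool/NewDataset
pvesm add dir iso-store --path /YourPool/NewDataset --content iso,vztmpl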
 
Good morning,
- 1 USB Key for Proxmox
Do not do that - it will fail in the (semi-) long run, at least without further countermeasures like moving log files away.

My personal approach for using USB-Storage for the Operating System is this:
  • use two (or three!) small (32GB or more) "good" SSDs with a USB adapter, connected to different ports/busses without an additional hub
  • install PVE with ZFS on a MIRROR of all those devices
Personally I am using USB/NVMe drives for this use case. Some cheaper "normal" SATA SSDs with USB adapters caused failures every two or three weeks in my setup (probably because of the combination of USB/controller/cable/adapter hardware itself). This is a MicroServer Gen10.

ZFS is write intensive and it may not be the best choice for this approach. But because of all that checksumming, the copy-on-write design and the mirroring, I've never had any data loss. As long as I can avoid it, I will never go back to ext4 or other filesystems.

Alternative approach: simply install PVE on those 4 main disks. This way the independence of OS disk <--> data drives is lost, as there is just one single "rpool". This is not ideal for several reasons - but it works. And all four drives are bootable! On my own system I would like to switch from those USB "tricks" to install-on-harddisk. But this is not possible after the fact; I would need to reinstall the complete system. So please... choose wisely.

Please note also that ZFS needs some RAM (default = 50% of the available RAM) to work. If possible, increase those 16 GiB to whatever is possible. (My MicroServers also only have the default 16 GiB for now - but they are NOT used as general-purpose PVE hosts with several VMs on them.)
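
If you want to see how much RAM the ARC actually uses on a running system, these read-only checks work (the grep just pulls the current size and the configured maximum, in bytes):

arc_summary
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats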

Yeah..., I am a ZFS fan --> my point of view might be biased - take it with a grain of salt, please.

Best regards
 
On my own system I would like to switch from those USB "tricks" to install-on-harddisk. But this is not possible after the fact; I would need to reinstall the complete system. So please... choose wisely.
It's Linux, so you can do whatever you want, including this. Just send/receive your system off-site, boot a ZFS-capable live Linux, recreate the pool as you like, and send/receive back. I've done that multiple times without any issues. If you're unfamiliar with it, just recreate your setup inside PVE itself as a VM and play around until you know what you're doing.
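
Roughly, that procedure looks like the sketch below; the pool and dataset names are made up ("backup" being some scratch pool with enough space), and it is worth rehearsing the whole thing in a test VM first:

zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs receive -F backup/rpool-copy
# boot a ZFS-capable live Linux, recreate rpool with the layout you want, then:
zfs send -R backup/rpool-copy@migrate | zfs receive -F rpool
# afterwards the bootloader has to be set up again (e.g. with proxmox-boot-tool) - not shown here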
 
recreate the pool as you like, and send/receive back. I've done that multiple times without any issues.
Thanks for the reply/hint.

This is a (home) production system. And while I trust ZFS send/receive absolutely (and I also have plenty of independent backups), I hesitate to tinker with it. Since November it has worked fine the way it is - after replacing some unreliable parts - so "never change a running system" ;-) .
I just would not implement it this way (with USB) again.

Best regards
 
Thanks @UdoB for sharing your experience :)
Alternative approach: simply install PVE on those 4 main disks. This way the independence of OS disk <--> data drives is lost, as there is just one single "rpool". This is not ideal for several reasons - but it works.
Could you elaborate a little bit?
What do you mean by "the independence of OS disk <--> data drives is lost, as there is just one single "rpool""? (I'm still not very comfortable with ZFS at the moment, but I know it is very powerful and would like to take advantage of it.)
=> Does it mean that the rpool is shared across all 4 HDDs (mirroring between each of them), so that even if only 1 drive is left, the OS is still able to boot? If so, I suppose it will also imply more intensive work (read & write operations) for all of these disks, right?

Thanks for your help!
 
=> Does it mean that the rpool is shared across all 4 HDDs (mirroring between each of them), so that even if only 1 drive is left, the OS is still able to boot?
Jep.
If so, I suppose it will also imply more intensive work (read & write operations) for all of these disks, right?
ZFS has quite a lot of overhead. On the other hand, if you use a striped mirror (RAID10), your disks can handle double the IOPS compared to a single disk.
 
Thanks @Dunuin !

Just a few questions regarding ZFS:
- I'm limited to 16GB of RAM in my HP Microserver Gen8. Is it enough if I want to run these and use ZFS at the same time:
- 1 VM for Nextcloud
- 1 VM for a VPN server
- 1 VM for file sharing (CIFS / NFS at least).
- maybe 1 or 2 more VMs for testing, learning, etc.
- Can I create a zfs rpool mirror and add new disks afterwards?

Thanks for the help!
 
- I'm limited to 16GB of RAM in my HP Microserver Gen8. Is it enough if I want to run these and use ZFS at the same time:
- 1 VM for Nextcloud
- 1 VM for a VPN server
- 1 VM for file sharing (CIFS / NFS at least).
- maybe 1 or 2 more VMs for testing, learning, etc.
The more RAM you allow ZFS to use, the better. And the more storage capacity you have, the more RAM ZFS should be able to use. Plan at least some GBs of RAM for ZFS; you want something between 2 and 12GB of RAM for your four 3TB disks. So your 16GB of RAM really isn't that much. For a hypervisor you want as much RAM as possible (or at least the maximum your mainboard supports or you can afford). If you give 4GB to ZFS and reserve 2GB for PVE itself, then you only have 10GB left for your guests. Also keep in mind that virtualization has overhead, so assigning something like 9GB to VMs might result in 10GB used. You can't really do much with 9GB of RAM if you want to use VMs, especially because you shouldn't overprovision your RAM. My Nextcloud VM, for example, needs 4GB of RAM, a VPN server 0.5GB, and a NAS wants several GBs too (maybe 2-4GB for an OMV and 8-64GB for a TrueNAS).
Each Windows VM wants at least 4GB of RAM.
Each headless Linux VM at least 512MB.
Each Linux VM with a desktop at least 2GB.
So 9GB of RAM for all your guests can be a bit low.
- Can I create a zfs rpool mirror and add new disks afterwards?
That isn't easy with the rpool, because only 1 of the 3 partitions is used for the ZFS raid. So you could do it, but only if you manually partition the new disks first and then copy over the bootloader. And in the case of a striped mirror you can only add new disks in pairs.
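
Roughly, the manual steps look like this; the device names are made up and the partition numbers assume the standard PVE layout (partition 2 = ESP, partition 3 = ZFS):

sgdisk /dev/sda -R /dev/sdd      # copy the partition table of an existing rpool disk to the new disk
sgdisk -G /dev/sdd               # give the copy new random GUIDs
zpool attach rpool /dev/sda3 /dev/sdd3          # extend an existing mirror by one disk
# zpool add rpool mirror /dev/sdd3 /dev/sde3    # or, for a striped mirror, add a new pair of disks
proxmox-boot-tool format /dev/sdd2              # make the new disk bootable as well
proxmox-boot-tool init /dev/sdd2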
 
That isn't easy with the rpool, because only 1 of the 3 partitions is used for the ZFS raid. So you could do it, but only if you manually partition the new disks first and then copy over the bootloader. And in the case of a striped mirror you can only add new disks in pairs.
Thanks.
Sorry, what "3 partitions" are you talking about?
 
Thanks.
I'm thinking... If 16GB is not enough for ZFS, why not use the real hardware RAID controller of my HP Microserver Gen8?

I've read that it's not recommended to use ZFS on Proxmox, but I don't know at which level exactly (does it mean we can't use ZFS for the Proxmox OS, but we can use ZFS for another storage disk attached to a different controller?).
 
ZFS works perfectly fine with Proxmox... that's also why it is the default if you want to use raid. But ZFS needs reliable and powerful hardware.
And if your disks are connected to a real HW raid controller, ZFS is no option anyway, because it needs direct access to the disks without raid or other abstraction layers in between.
 
If 16GB is not enough for ZFS
It is more than enough to run it; it may just not be as fast as you want. The features outweigh the downsides, so I'd always go with it, especially with virtualization. To "save" some memory, I'd go with LX(C) containers instead of VMs, but that restricts you to Linux-based OSes.
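
To give an idea of how lightweight that is, a container for something like the VPN server can get by with a few hundred MB of RAM; a sketch, where the VMID, the template file name and the storage IDs are just placeholders:

pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname vpn --memory 512 --cores 1 --unprivileged 1 \
    --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101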
 
It is more than enough to run it; it may just not be as fast as you want. The features outweigh the downsides, so I'd always go with it, especially with virtualization. To "save" some memory, I'd go with LX(C) containers instead of VMs, but that restricts you to Linux-based OSes.
Thanks.
Tell me if I'm wrong: it is enough to run it at the beginning, but if I use it with my 4 x 3TB HDDs, it will require ~12GB just to run ZFS correctly. So only 4GB remain to run Proxmox and the VMs (or even containers). Would that be enough?
 
It will run with less than 12GB. Like I already said, 4GB for the ARC should work fine. But if your Microserver has a RAID controller that can't be switched to HBA/AHCI/IT mode, ZFS isn't a valid option anyway.
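
If you want to cap the ARC at roughly 4GB, the usual way is something like this (the value is in bytes; on a ZFS-root install the persistent setting also needs the initramfs rebuilt):

echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max   # or apply it right away at runtime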
 
It will run with less than 12GB. Like I already said, 4GB for the ARC should work fine. But if your Microserver has a RAID controller that can't be switched to HBA/AHCI/IT mode, ZFS isn't a valid option anyway.
Thanks.
The Microserver Gen8 raid controller can be switched to AHCI.
 
Thanks.
The Microserver Gen8 raid controller can be switched to AHCI.
Ok, then it's probably onboard SW raid and not real HW raid, and ZFS should work. One more point in favor of ZFS, because onboard SW raid combines the worst aspects of SW raid and HW raid: you are screwed when your mainboard dies and you want to access your data, the CPU is doing all the logic instead of a dedicated chip, there are no advanced features like bit rot protection, and write performance is bad because of the missing battery-backed write cache.
 
