Best disk setup for Proxmox

Crack92

New Member
Jan 26, 2019
Hi everyone,

I started using Proxmox over the last two weeks on a machine with the following specs:
Ryzen 2700X
32GB ECC memory
4x 4TB WD Red
1x 500GB Crucial SSD
No dedicated RAID card

Although I'm a complete beginner with KVM virtualization, the goal is to replace the various little servers I have in my small business with VMs and LXC containers.

I plan to spin up 4 to 6 VMs/CTs, and I need support for snapshots and automatic backups of the data in those instances. One of them will probably be OpenMediaVault for SMB shares with Windows hosts; the others will be intranet web apps with small SQL and NoSQL DBs.

I'm struggling to understand what the best disk setup for my purposes would be. Should I run the VMs from the SSD partitioned as ext4 and then back up to my HDD pool? Or should I build a ZFS RAID10, add the SSD as a cache device and run my instances from there? Or is it better to stick with good old 'safe' software RAID with mdadm, even though it is not officially supported by Proxmox?

I also have an older Synology with 2x 1TB disks which I plan to add as an extra backup station for the very important stuff. How am I supposed to fit this other piece into the puzzle?

Sorry for these basic questions, but it's the first time for me. I googled and read the wiki a lot before writing, but still haven't ironed out my doubts.
 
I would recommend a ZFS RAID10 with the 500GB SSD as an L2ARC cache.
ZFS RAID is better than mdadm any day.
ZFS likes RAM and uses it for caching too, so make sure you either buy more RAM or limit the RAM ZFS can use (the zfs_arc_max module option).
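
Roughly, the pool would look something like this (just a sketch; "tank" and the device paths are placeholders, on a real box use /dev/disk/by-id/ names):

Code:
# two mirrored pairs striped together = "RAID10"; ashift=12 for 4K-sector disks
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# add the SSD (or a partition of it) as an L2ARC cache device
zpool add tank cache /dev/sde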
 

Thanks for your suggestion. Do you think 32GB is enough RAM? For my VMs I will use approx 12GB. And what is a good value to limit ZFS RAM to? Is 8GB enough?
 
Give ZFS as much RAM as you can; trust me, it makes a difference. The more RAM ZFS can use as cache, the better the performance.

https://pve.proxmox.com/wiki/ZFS_on_Linux#_limit_zfs_memory_usage

If you need 12GB for your VMs, I would give ZFS 16GB max. :)
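
The wiki page above boils down to a single module option. A sketch for a 16GB cap (the value is in bytes; if your root is on ZFS, refresh the initramfs afterwards and reboot):

Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=17179869184
# 16 GiB = 16 * 1024^3 bytes; then run: update-initramfs -u -k all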

Thanks again. I will follow your suggestion. The last doubt I have: should I select ZFS right from the installer, or is it better to install on ext4 and create the ZFS pool later? That way I could keep UEFI boot.
 
I have a bunch of disks laid out in a tiered system:

tier 1 and 2 disks are NVMe attached to the bus
tier 3 is 2 SSDs in RAID1
tier 4 is 4x Hitachi Nexstar 5GB drives in RAID10

I use tier1 and tier2 for the VMs, containers, and templates

I use tier3 for VMs that I want extra redundancy with such as my NAS/cloudsync

and tier4 I use as cold storage for documents/pictures/applications

I had partitioned 8GB off my main Proxmox install to use as a ZIL disk, but having played around with both ZIL and L2ARC, I would say both are a waste of time.

RAM should be increased instead of using L2ARC, and the ZIL is too slow, even on an SSD, since the writes have to be synchronized.
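
If anyone wants to repeat that test themselves, log and cache vdevs can be added to and removed from a live pool, so it costs nothing to try both ways (sketch; "tank" and the partitions are placeholders):

Code:
zpool add tank log /dev/sdf1       # attach a SLOG partition
zpool add tank cache /dev/sdf2     # attach an L2ARC partition
zpool iostat -v tank 5             # watch how much they actually get used
cat /proc/spl/kstat/zfs/arcstats   # ARC/L2ARC hit statistics
zpool remove tank /dev/sdf1        # log and cache vdevs can be removed again
zpool remove tank /dev/sdf2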
 
Buy another 500GB SSD, then install Proxmox on a ZFS RAID1 of both SSDs. Use the rpool (SSD RAID1) for VM disks that need speed (operating systems, databases, and so on).
Create a ZFS RAID10 on the 4 spinning disks and use this pool for VM disks that store less-used data or data that needs less speed.
That way you get the full speed of the SSDs and a big storage pool too.
Back up with vzdump to the Synology if you have enough space.
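
As a rough sketch of how that maps onto Proxmox storage (pool names, IP and export path are made up, adjust to your setup):

Code:
# spinning RAID10 pool for the bulk data, registered for VM/CT disks
zpool create -o ashift=12 hddpool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
pvesm add zfspool hdd-vm --pool hddpool --content images,rootdir
# Synology NFS export registered as the vzdump backup target
pvesm add nfs synology-backup --server 192.168.1.50 --export /volume1/pve-backups --content backup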
 
Thanks, Mbaldini. This is what I plan to do.
In the meantime I'm experimenting with various combinations of storage, and what suits me best so far is 400GB of the SSD with LVM/ext4, where I have the root and some VMs that need fast I/O, plus a ZFS RAID10 made up of the 4x 4TB disks with the remaining 100GB of the SSD used as cache and log for ZFS.
In the zpool I'm putting containers and VMs that don't read and write much, and I'm doing vzdumps of my machines to a directory mounted on the same zpool.
What do you think about this setup?

Anyway, I'm struggling with PCIe passthrough to a WinXP VM. When I pass the device through to the VM, my host loses its NIC for some strange reason :(
 
In the meantime I'm experimenting with various combinations of storage, and what suits me best so far is 400GB of the SSD with LVM/ext4, where I have the root and some VMs
Without RAID1, in case of an SSD failure your server will have downtime, because you will have to reinstall Proxmox and recover the VMs and the configuration. I wouldn't do that.

with the remaining 100GB of the SSD used as cache and log for ZFS.
Don't do that; use the whole SSD for your fast pool. If you want more ARC cache, add RAM, it is always better than L2ARC.

and I'm doing vzdumps of my machines to a directory mounted on the same zpool.
What is the point of having a backup on the same server that you are backing up?
Backups must go to an external device, be it a NAS, a USB disk, another Proxmox server (ZFS replication), and so on. A backup on the same server is totally useless.
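
For example, once the Synology export is added as a storage in Proxmox, a manual backup to it is a single command (sketch; "synology-backup" and the VMID are placeholders, and scheduled jobs are set up under Datacenter > Backup):

Code:
vzdump 101 --storage synology-backup --mode snapshot --compress lzo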

Anyway, I'm struggling with PCIe passthrough to a WinXP VM.
Windows XP has been EoL since April 2014; there is no support for it any more.
 
