Configuring multiple storage locations

praenuntius

May 19, 2017
Can anyone walk me through the process of running multiple storage locations in Proxmox VE? After pretty intensive searching on the Proxmox forums and Google, I have been unable to figure out how to do the following:
  • Configure a 128 GB SSD for running the VM operating systems
  • Configure 3 x 4 TB WD Red drives in RAID-Z1 for storing VM data
I think this would be the best configuration for what I want to accomplish:
  • 4 VMs (VPN, Sandbox, Cloud & NAS, Game)
The VPN obviously wouldn't need much storage, and neither would the sandbox, but I want all the data managed by the cloud and game servers stored in the RAID-Z array for redundancy, with the most critical data backed up to external drives weekly. I think that having an SSD run the VM OSes would be best for performance, since the WD Red drives are only 5400 RPM. Thanks in advance!
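If it helps, my rough guess at what the data pool side would look like on the command line is something like the following (pool, storage and device names are just made-up placeholders, and /dev/disk/by-id paths would probably be safer than sdX), but please correct me if this is the wrong approach:

    # Create a RAID-Z1 pool from the three 4 TB drives (device names are examples only)
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

    # Register the pool as a VM disk storage in Proxmox
    pvesm add zfspool tank-vmdata --pool tank --content images,rootdir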
 
Never, never ever ever build a single point of failure - in your case, your SSD. You can use the SSD for caching of some sort, but you should store everything you have on your RAID.

In your configuration, you will lose everything: every VM OS with its configuration, and your PVE host itself.
 
That's right.
At least get a second SSD and run it in RAID-1 mode. Nice uptime :)
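If you have already installed on a single SSD, you can usually attach a second one later to turn the root pool into a mirror. A rough sketch, assuming the default pool name rpool; the device and partition names are just examples, and you would also need to copy the partition layout and reinstall the bootloader on the new disk:

    # Attach a second SSD to the existing single-disk root pool, turning it into a mirror
    zpool attach rpool /dev/sdX2 /dev/sdY2
    # Watch the resilver progress until it finishes
    zpool status rpool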
 
Thanks for your suggestions. From my understanding of your answers, I should be able to create the RAID-Z1 array while installing Proxmox, and if there are noticeable performance issues I should be able to add an SSD cache to the pool following the instructions at https://pve.proxmox.com/wiki/ZFS_on_Linux . Please let me know if this information is accurate. Thank you!
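From what I can tell, adding the cache later would just be something like this (pool and device names are placeholders, and the cache device can be removed again if it doesn't help):

    # Add an SSD partition as a read cache (L2ARC) to the data pool
    zpool add tank cache /dev/sdX
    # If it turns out not to help, remove it again later
    zpool remove tank /dev/sdX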
I am not overly hot on ZFS, so I am not 100% sure of the terminology here, but when you install with ZFS (starting from VE 4.3, at least that is when I first saw it) you have a choice of several RAID levels when selecting ZFS as the install file system.
In my test I simply selected ZFS (RAID-1) mirrored volumes. If that is what you mean, then yes.
I am not sure what RAID-Z1 is. From what I can tell it looks like a ZFS variation of RAID-0(??)
 
From my understanding, and from the test machine that I run in VirtualBox, RAID-Z1 is the equivalent of RAID-5 without the I/O penalty. When I start the install in VirtualBox I have RAID-Z1 as a storage setup option.
Google "non-standard RAID levels" for more info.
 
Thanks, as I posted my reply I did some googling and found out what RAID-Z1 is.
Still, I do not like it. I prefer a mirrored setup for the OS drive. Also, with ZFS you can do a 3-way mirror with 3 drives, right?
 
I haven't found anything that shows creating a 3-way mirror. Personally, I think I'm going to go with RAID-Z1 with 3 x 4 TB Reds for the time being, as this is about the limit of my splurge budget at this point. Looks like I'm going to play around a bit and decide if I should try to add an SSD cache to the setup. I'm not sure that an SSD will be necessary, as I have 32 GB of RAM in the system.
 
I haven't found anything that shows creating a 3-way mirror.

It is possible. You can have as many disks in a mirror as you like. ZFS is immensely customizable in that fashion.
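A 3-way mirror is just a single mirror vdev with three disks, roughly like this (pool and device names are made up):

    # Create a pool consisting of one mirror vdev with three disks (a 3-way mirror)
    zpool create tank mirror /dev/sdb /dev/sdc /dev/sdd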

Personally, I think I'm going to go with RAID-Z1 with 3 x 4 TB Reds for the time being, as this is about the limit of my splurge budget at this point. Looks like I'm going to play around a bit and decide if I should try to add an SSD cache to the setup. I'm not sure that an SSD will be necessary, as I have 32 GB of RAM in the system.

The performance depends heavily on your workload. Also, the 32 GB makes me smile :-D, not because it is huge, but exactly the opposite; I'm used to bigger servers. For home or small office use, the system should be sufficient, and you will have a lot of fun with ZFS and its snapshot capability. Whether an SSD actually speeds things up depends heavily on the SSD used, so e.g. have a look at [1] first.
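Just as a taste of the snapshot side, something like this (the dataset name is only an example of the kind Proxmox creates for a VM disk):

    # Snapshot a VM's disk dataset before a risky change inside the guest,
    # and roll back (with the VM stopped) if it goes wrong
    zfs snapshot rpool/data/vm-100-disk-1@before-upgrade
    zfs rollback rpool/data/vm-100-disk-1@before-upgrade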

Best,
LnxBil

[1] http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/
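If I remember right, the test in [1] boils down to timing small synchronous writes straight to the device, something like this (destructive, so only run it against an empty disk; /dev/sdX is a placeholder):

    # Time small O_DIRECT + O_DSYNC writes; a good log/journal SSD sustains these at a high rate
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync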
 
Just my 2 cents on this topic:
  • If I was learning this new (maybe this is the case for you?), I would not start with ZFS for my first Proxmox.
  • A non-RAID config is really only suitable for throw-away test-lab purposes. SW RAID or HW RAID are really not hard nor expensive. Disks are cheap, and generally data is worth way more than the disk holding the data.
  • Proxmox on top of a stock Debian install is well documented in the wiki, and on the interwebs it is well documented how to do a SW RAID Debian install. So you can fairly easily do a SW RAID Debian minimal install, then add Proxmox after the fact (just be sure to use the LVM config as per the Proxmox install guide). At the end of this you will have SW RAID / disk fault redundancy / a decent baseline.
  • If you want to be fancy(ish), you can read about 'bcache', which lets you use an SSD as a cache/accelerator for your SATA block storage. For such a config you might, for example: (a) have a pair of SSD drives and do a SW RAID Proxmox install using only the first ~60 GB of space; (b) hold back the remaining SSD space for bcache cache space (SW RAID under the hood for fault tolerance); (c) use a pair of slow SATA bulk storage disks as your bcache backing disks. With this config you have a nice fast Proxmox base install on SSD SW RAID, bcache-accelerated SATA bulk storage, and a nice simple storage model with limited complexity and confusion for maintenance down the road (rough sketch of the bcache part right after this list). :)
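Very rough sketch of the bcache part only, assuming /dev/md1 is the SW RAID device built from the held-back SSD space and /dev/sdc is one of the SATA bulk disks (names are made up; make-bcache comes from the bcache-tools package):

    # Format the SATA disk as a bcache backing device and the SSD RAID as a cache device
    make-bcache -B /dev/sdc
    make-bcache -C /dev/md1
    # Attach the cache set to the backing device (UUID comes from: bcache-super-show /dev/md1)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    # The resulting /dev/bcache0 is what you'd format or hand to LVM for VM storage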

If you really want to keep it 'simple' and don't want to fuss with RAID, just do a vanilla install on a single disk, non-RAID, accept that the disks are throw-away and the data is gone when they inevitably fail, and be sure to have Proxmox configured to do regular (nightly?) backups of your VMs to some other disk, so that when you lose everything you at least have a fairly recent backup to recover from (something like the vzdump line below). :)
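For the backup part, the built-in scheduled backups (Datacenter -> Backup in the GUI) are the easy route; by hand it is roughly this (the storage name is a placeholder for wherever your backup disk is configured):

    # Back up all VMs to a storage called 'backup-disk', using snapshot mode and LZO compression
    vzdump --all --storage backup-disk --mode snapshot --compress lzo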

-Tim
 
