Beginner-Questions about storage best practice and setup

MarkusH87

New Member
Hi there

I want to change my home server from Hyper-V to Proxmox VE.
In the meantime I was able to migrate the sessions and get them running on temporary hosts.

Now I need to set up the main server, and I need some advice about the storage configuration.
I'm not very familiar with Linux and the different file systems and so on, so I'm glad about detailed answers and step-by-step hints.
Most of the virtual sessions will run Microsoft Windows as the guest OS.

Here is an overview of the available disks in the physical host and how I thought to use them:
  • 1x 500GB SSD - Idea: Proxmox OS, ISO Images
  • 1x 1TB SSD - Idea: Primary session disks (Operating systems)
  • 4x 4TB HDD - Idea: Raid5 to have 12TB available for multiple virtual disks for different sessions (3-4 virtual file servers)
  • 3x 1TB HDD - Idea: Raid0 to have 3TB available for multiple virtual disks for different sessions (1-2 virtual file servers)
I know that it's not recommended to use virtual disks for file servers - but I can't change this at the moment.
My idea is that, if one of the 4TB disks goes down, the file servers can still provide the shared data.
Of course there are nicer ways to do this - but unfortunately the physical configuration is unchangeable at the moment.

So my final questions are:
  1. Is the planned configuration somehow "acceptable"?
  2. What would you recommend changing in my idea?
  3. Would it maybe be better to handle the physical disks as separate storages and build a software RAID within every guest OS that needs it?
  4. What is the easiest way to implement this configuration? (I have never run the setup on a host with multiple disks)
Thanks in advance for any feedback

Regards

Markus
 
Hey!

Right out of the gate I'd recommend using ZFS for all the RAIDs, with the added value of data integrity checking. As for the RAIDs: you normally use RAID0 when you need to improve read/write speed. RAID5 is not advised, especially for the capacities you've stated. Depending on how constrained you are with data volume, I'd go for RAID-z2 (ZFS with double parity, similar to RAID6) or RAID10. As for the RAID0 - it's hard to come up with anything better than RAID1 with a spare, so I'd keep it as RAID0 (a plain striped pool in ZFS), but use it only for non-critical backups (e.g. daily ones) and ISOs.
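For reference, the two pool layouts above could be created roughly like this. This is only a sketch - the pool names and device paths are placeholders, not your actual disks, and `zpool create` destroys whatever is on them:

```shell
# Sketch only -- pool names and device paths are placeholders.
# Prefer stable /dev/disk/by-id/ paths over /dev/sdX names.

# RAID-z2 pool over the 4x 4TB HDDs (survives two simultaneous disk failures):
zpool create tank raidz2 \
  /dev/disk/by-id/ata-HDD_A /dev/disk/by-id/ata-HDD_B \
  /dev/disk/by-id/ata-HDD_C /dev/disk/by-id/ata-HDD_D

# Plain striped pool (RAID0 equivalent) over the 3x 1TB HDDs -- no redundancy:
zpool create scratch \
  /dev/disk/by-id/ata-HDD_E /dev/disk/by-id/ata-HDD_F /dev/disk/by-id/ata-HDD_G

# Check the resulting layout:
zpool status
```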

I'd put Proxmox on the RAID-z2 (4x4TB), since I haven't seen added value in having specifically the hypervisor on the SSD (VMs, however, should preferably go on SSD). At the same time, Proxmox, being Debian-based, does a lot of disk IO and can therefore significantly consume the SSD's endurance. The 500GB SSD can be used as a ZIL/SLOG device for the RAID-z2 (4x4TB) to compensate for the slow IO (although I don't expect a significant impact from it - probably just when copying big files here and there). On this RAID-z2 (4x4TB) I'd also keep all the "slow" VMs and containers where IO specs are not critical (e.g. "cold" storage servers where you mostly expect sequential reads/writes).
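Attaching the SSD as a SLOG to an existing pool is a one-liner (pool name and device path are placeholders; note that a SLOG only accelerates synchronous writes - for a read cache you'd add it as `cache` instead of `log`):

```shell
# Add the 500GB SSD as a separate intent log (SLOG) to pool "tank".
# Device path is a placeholder -- substitute the real by-id path of the SSD.
zpool add tank log /dev/disk/by-id/ata-SSD_500G

# Verify the log vdev shows up under the pool:
zpool status tank
```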

I'd use the 1TB SSD for hosting the VMs and containers where IO specs are critical.

Lastly, I'd use the RAID0 either for some non-critical VMs (preferably those backed up to the RAID-z2 or some NAS), or for non-critical backups and ISOs.

It'd help to know the specs of the hosted VMs and what data you want to store there. Keep in mind that each setup is unique, and you can play with moving the data around between SSD <> RAID-z2 <> RAID0.
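Moving a VM disk between storages later is straightforward from the Proxmox CLI. The VM ID, disk name and storage ID below are examples - adjust them to your own setup:

```shell
# Move virtual disk scsi0 of VM 100 to the SSD-backed storage and
# delete the old copy afterwards.
# "100", "scsi0" and "ssd-zfs" are example values -- use your own
# VM ID, disk name and storage ID (see the Datacenter > Storage view).
qm move_disk 100 scsi0 ssd-zfs --delete 1
```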
 
@Vladimir: Thanks for your explanation.

First of all - a quick overview of my environment:
I'm running a couple of small Windows servers providing single services (AD+DNS+DHCP, SVN, SQL, Terminal Server), a Plex media server and a file server.
Furthermore, a backup server is planned to manage the backups of all the critical data to internal and external storages.
Currently I'm running Hyper-V on the physical server, which also acts as a file server.
As there are only 2 users, there is no need for super-high performance (as long as media playback from the Plex server works fine).

After I thought about your input and the situation, I made a new plan:
  • 1x 500GB SSD
    Maybe I will replace this with another 1TB SSD to build a mirrored storage with the other 1TB SSD
  • 1x 1TB SSD - single ZFS storage
    High-performance VMs and disk images if needed
  • 4x 4TB HDD - ZFS RAIDZ-1
    Proxmox OS, most of the VMs and disk images
    (Z-2 would give me only around 8TB of usable space, which is too little)
  • 3x 1TB HDD - 3 single ZFS storages
    ISO images, disk images for the backup server VM (containing non-critical daily backups)
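For the record, the usable-capacity numbers behind the RAIDZ-1 vs. RAIDZ-2 decision work out as follows (raw figures, before ZFS metadata and padding overhead):

```shell
# Usable capacity = (number of disks - parity disks) * disk size
disks=4; size_tb=4
raidz1_tb=$(( (disks - 1) * size_tb ))  # 1 parity disk
raidz2_tb=$(( (disks - 2) * size_tb ))  # 2 parity disks
echo "RAIDZ1 usable: ${raidz1_tb} TB, RAIDZ2 usable: ${raidz2_tb} TB"
# -> RAIDZ1 usable: 12 TB, RAIDZ2 usable: 8 TB
```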
Is it by default also possible to access the ZFS storage using FTPS to upload the ISO images in an easy way?
(as it is with the local default storage on a single-disk server)

Would it be possible to change the ZFS storage from single disk to mirrored online when I replace the smaller SSD at a later time?
Or would I have to move all the data to another storage, create a new mirrored storage, and move the data back?

Thanks for further hints and inputs

Regards
Markus
 
@MarkusH87
Is it by default also possible to access the ZFS storage using FTPS to upload the ISO images in an easy way?
SSH is active: https://forum.proxmox.com/threads/console-ssh-login-does-not-work.10/#post-20
I assume you should be able to use FTPS upload, but this is not good security-wise. You may need to explore the topic more in-depth.

Would it be possible to change the ZFS storage from single disk to mirrored online
A quick search gave me this: https://coderwall.com/p/zpb89a/creating-a-mirrored-zfs-pool-out-of-a-single-disk-rpool
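The short version of that article: you attach a second disk of the same (or larger) size to the existing single-disk vdev, and ZFS converts it into a mirror online while the pool stays in use. Pool name and device paths below are placeholders:

```shell
# Turn a single-disk pool into a mirror by attaching a second disk.
# "ssdpool" and both by-id paths are placeholders -- use your real
# pool name (zpool list) and device paths.
zpool attach ssdpool /dev/disk/by-id/ata-SSD_OLD /dev/disk/by-id/ata-SSD_NEW

# Watch the resilver progress until the mirror is fully synced:
zpool status ssdpool
```

Note that if the pool in question is the boot pool (rpool), there are additional bootloader/partitioning steps on the new disk, which the linked article covers.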
 