Building a new system, looking for storage advice.

bignose

New Member
Nov 23, 2022
I bet you guys get this all the time. Anyways, here's my turn.

Use case:

- Home lab. I'm not hosting Plex, and I have no need for yuge storage. A few TB is plenty.
- I intend to run 10 to 20 VMs doing various workloads like GitLab, InfluxDB, Grafana, Consul, Nomad, and Postgres.
- If a drive goes poof, I won't lose sleep over it. The plan is to automate everything and back everything up to another machine. Since nothing is a production workload, downtime doesn't bug me.

Case, motherboard and ram:

- Initially 32 GB of RAM, but probably going to 64.
- Case has room for 6 drives: three 5.25" bays and three 3.5". I don't need the capacity of spinning disks, so they'll probably get filled with SSDs over time.
- Motherboard: I'm tempted to get one that takes two NVMe drives, but I don't fully understand their drawbacks, which I've seen allusions to on this forum.
- To keep this simple, every storage device will probably be 1 TB: 1 TB SATA SSDs and 1 TB NVMe drives.

Growth and expansion:

I'd like to get this project off the ground with just one drive, with the ability to easily add more later. A little bit of redundancy wouldn't hurt, but I'm not editing video files, so blazing performance isn't top of the list. (But I will be cloning VMs.)

File system advice:

I currently have a Proxmox machine with one drive. I was using Packer to try to clone a VM and got this error:

Code:
==> proxmox-clone: Creating VM
==> proxmox-clone: No VM ID given, getting next free from Proxmox
==> proxmox-clone: Error creating VM: 500 Linked clone feature is not supported for drive 'scsi0'
Build 'proxmox-clone' errored after 97 milliseconds 19 microseconds: Error creating VM: 500 Linked clone feature is not supported for drive 'scsi0'

Whichever choice I make with this new machine, I would very much like to be able to complete this operation.

Thanks folks.
 
First you should decide how professional you want your server to be. If, for example, you want PVE to manage the RAID, then ZFS will be used, and it would be highly recommended to buy ECC RAM and far more expensive enterprise-grade SSDs.
If you don't care that much about stability, downtime, or the integrity or loss of your data, then single consumer SSDs with LVM-Thin might be fine.
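
About your Packer error: linked clones only work on storages that support them (LVM-Thin, ZFS, or qcow2 on a directory, for example), and that 500 error is what PVE returns when the template's disk sits on a storage that doesn't, like plain LVM or a raw file. A rough sketch of setting up a thin pool, assuming /dev/sdb is a blank disk and the names vmdata, vmstore and vmthin are just placeholders:

Code:
# turn the blank disk into an LVM thin pool (this wipes /dev/sdb!)
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate --type thin-pool -l 100%FREE -n vmstore vmdata
# register it as a storage PVE can put VM disks on
pvesm add lvmthin vmthin --vgname vmdata --thinpool vmstore --content images,rootdir

With the template's disks on that storage, linked clones should work. If I remember right, the proxmox-clone Packer builder also has a full_clone option if you would rather copy the whole disk each time.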
 
I don't need or want it to be professional. My goal is just a sandbox. ZFS looks like it adds overhead and preparation that I don't care about.

LVM-Thin sounds about right, and I can just treat each drive separately and move stuff around as I need to, like the good ole days. Maybe at a bare minimum I'll buy drives in pairs and mirror them.
 
So in this scenario, of just running VMs, do I gain or lose anything specifically from having NVMe vs SATA SSDs?
 
Maybe at a bare minimum I'll buy drives in pairs and mirror them.
Then you would need to do that manually using the CLI, or use the mainboard's pseudo-hardware RAID, as PVE officially only supports ZFS for mirroring.
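
A rough sketch of the manual route with mdadm, assuming /dev/sdb and /dev/sdc are your blank pair (device names are examples, and this wipes both disks):

Code:
apt install mdadm
# build the two-disk software mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# put a thin pool on top so PVE can use it like any other LVM-Thin storage
pvcreate /dev/md0
vgcreate mirrored /dev/md0
lvcreate --type thin-pool -l 100%FREE -n mirrorpool mirrored
pvesm add lvmthin vmmirror --vgname mirrored --thinpool mirrorpool

Keep in mind mdraid isn't officially supported by Proxmox, so you are on your own when it breaks.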
 
So in this scenario, of just running VMs, do I gain or lose anything specifically from having NVMe vs SATA SSDs?
NVMe SSDs use PCIe, so they can be individually passed through to a VM. Otherwise it depends on the workload. An enterprise SATA SSD, for example, might be an order of magnitude or more faster than a consumer NVMe SSD when doing sync writes. So it also depends heavily on the quality of the NAND chips (best to worst: SLC > eMLC > MLC > TLC > QLC) and on whether the drive has power-loss protection so it can cache sync writes, not just on which controller or interface is used. Buy a 6000 MB/s consumer NVMe SSD and you can bring it down to a few MB/s (or even the KB/s range with QLC and a full disk) by hitting it hard with the wrong workload.
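
A sketch of what that passthrough looks like, assuming IOMMU is already enabled and 0000:01:00.0 stands in for the drive's real PCI address:

Code:
# find the NVMe controller's PCI address
lspci -nn | grep -i nvme
# give the whole device to VM 100 (pcie=1 requires the q35 machine type)
qm set 100 -hostpci0 0000:01:00.0,pcie=1

The VM then talks to the drive directly, so the host and its other VMs lose access to it.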
It's basically: the more you pay, the better the performance and durability you get... exceptions are NAS or modern prosumer SSDs. Buy an 80€ 1 TB SSD and it might be dead in a few months and unusably slow when running workloads like DBs. Buy 1 TB of SSDs for 3000€ and it will be super fast and last for many years. I wouldn't buy either of those, but something in the middle for maybe 150-300€ per TB.

Far more interesting than the theoretical maximum IOPS/throughput the manufacturers advertise is the minimum performance the SSD will offer even under the hardest conditions, and that you will usually not find in the datasheets. So it's not a good idea to base a buying decision on the advertised performance numbers alone. Also look at things like the DWPD or TBW ratings, which are great indicators of quality.
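
If you want to see that worst case yourself, you can hammer a drive with sync writes using fio. A sketch, assuming /root/fio.test is a scratch file on the SSD you want to test:

Code:
# 4k sync writes at queue depth 1 with an fsync after every write --
# the workload that separates drives with power-loss protection from the rest
fio --name=syncwrite --filename=/root/fio.test --size=4G --direct=1 \
    --rw=write --bs=4k --ioengine=psync --fsync=1 --runtime=60 --time_based

Consumer drives that advertise thousands of MB/s often drop to a tiny fraction of that here.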
 
