New in Proxmox: storage and configuration

Juanolo

New Member
Aug 5, 2021
Good evening, first of all, thank you for the help offered in this forum.

I am quite a novice and am having a hard time deciding how to go about installing Proxmox.

I have purchased a Dell R210 II with 32 GB RAM, a 1 TB SSD and a 3 TB SATA HDD. I want to run pfSense, a Ubiquiti controller and some VMs for Home Assistant.

The truth is that I am very lost with the installation. How do you recommend I do it? Is it feasible? I know there is no redundancy.

Another option I had considered is to buy two SSDs of about 256 GB to set up a ZFS mirror.

Where would I install Proxmox, the OS and the VMs? I would like to use the 3 TB disk as storage in case I virtualize FreeNAS.

Thanks in advance ....
 
I have purchased a Dell R210 II with 32 GB RAM, a 1 TB SSD and a 3 TB SATA HDD. I want to run pfSense, a Ubiquiti controller and some VMs for Home Assistant.
How many drives does that server support? (SATA or SAS, 2.5" and 3.5" bays)
What SSD and HDD models are these? For ZFS, for example, you might want enterprise-grade SSDs and HDDs that use CMR rather than SMR.
The truth is that I am very lost with the installation. How do you recommend I do it? Is it feasible? I know there is no redundancy.
If you just want a server that runs reliably without too many problems, I would go for redundancy. If a drive then fails, everything will continue working normally and you can replace the failed drive. If your disk dies without any redundancy, you would need to set up everything again... and depending on how much effort you put into the configuration, that might be a lot of work. It took me several hundred hours to set up my homelab, for example...
Another benefit of ZFS with redundancy is that ZFS will monitor your disks for data corruption and, because the redundancy is there, can automatically repair it.
Another option I had considered is to buy two SSDs of about 256 GB to set up a ZFS mirror.
Keep in mind that enterprise/datacenter-grade SSDs with power-loss protection are recommended here because they are faster, more durable and more reliable. Some cheap consumer SSDs might work, but that really depends on your workload.
Also, you can't just use ZFS with every server. ZFS needs direct access to the disks, without any abstraction layer between the OS and the disks. If, for example, you only have a RAID controller that can't be flashed to IT mode and has no selectable HBA mode, ZFS isn't really a recommended option.
Where would I install Proxmox, the OS and the VMs? I would like to use the 3 TB disk as storage in case I virtualize FreeNAS.
FreeNAS/TrueNAS uses ZFS only, and ZFS doesn't make much sense with just a single disk. Maybe OpenMediaVault would be better here, because it is much simpler and needs fewer resources.
 
Thanks for the quick reply. I can't afford the enterprise SSD purchase right now.

Right now I'm using a Crucial MX500 1TB as the SSD and a SATA HDD from WD, both connected directly to the motherboard's SATA ports... The purpose of the server is personal, to have it at home as a pfSense router with a 4-port Intel card. Containers: Home Assistant and the UniFi Network controller, plus some VMs and some media manager...

Do you think that if I buy the same 1 TB SSD and put them in a ZFS mirror, it will last long?

Would you put the VMs, Proxmox and the containers on the SSD, and use the 3 TB SATA disk as storage for the VMs and containers?

Thanks!
 
Thanks for the quick reply. I can't afford the enterprise SSD purchase right now.
You can also buy them second-hand; then they are cheaper than new consumer SSDs. Just make sure to ask for a screenshot of the SMART stats before buying, to see how much life expectancy is left.
Do you think that if I buy the same 1 TB SSD and put them in a ZFS mirror, it will last long?
That really depends. It might kill your SSDs within months, or maybe they will survive several years. There is stuff like write amplification that you need to take into account when using ZFS or virtualization. SSDs take damage with each write, and used in a server they will be written to 24/7.
Depending on your workload, the write amplification might be lower or higher. I, for example, see an average write amplification of factor 20. So for every 1 GB of data my VM wants to write, 20 GB of data will actually be written to the SSD's NAND flash cells.
Your SSD is rated for 360 TBW over 5 years. That means the manufacturer will replace your SSD if it is not older than 5 years AND has not written more than 360 TB. But now let's say you also have a write amplification like mine. In that case you reach the 360 TB TBW after writing only 18 TB of data. So with a write amplification of factor 20, if you don't want to lose the warranty before the 5 years are over, you would only be able to write at an average of 114 kB/s (360,000,000,000,000 bytes TBW / 20 write amplification / 157,680,000 seconds = 114,155 bytes/s). Now let's say you have 5 VMs that need to share these 114 kB/s; then each VM is only allowed to write about 23 kB/s (on average, of course). If they write more, the SSD might die early and the manufacturer won't replace it. 23 kB/s isn't that much. Keep in mind that every VM will write logs and metrics 24/7 and may swap RAM to disk.
Enterprise SSDs should have a lower write amplification because they can cache sync writes, and their life expectancy is 4 to 30 times higher. So you pay more for them, but if they survive 30x longer, that is way cheaper in the long term. And they are also magnitudes faster for server workloads like random sync writes, so that's a nice benefit too.
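The TBW arithmetic above can be sketched in a few lines. The 360 TBW rating, 5-year warranty, factor-20 write amplification and the 5-VM split are the numbers from this thread; any other workload would plug in its own values:

```python
# Sustained write budget for an SSD under a TBW warranty,
# using the numbers discussed in this thread.
TBW_BYTES = 360e12                        # warranty limit: 360 TB written
WARRANTY_SECONDS = 5 * 365 * 24 * 3600    # 5 years = 157,680,000 s
WRITE_AMPLIFICATION = 20                  # observed: 1 GB in the VM -> 20 GB on NAND
NUM_VMS = 5

# Guest-level bytes per second the VMs may write without exceeding TBW
budget_per_second = TBW_BYTES / WRITE_AMPLIFICATION / WARRANTY_SECONDS
per_vm = budget_per_second / NUM_VMS

print(f"total budget: {budget_per_second:.0f} B/s")  # ~114,155 B/s (~114 kB/s)
print(f"per VM:       {per_vm:.0f} B/s")             # ~22,831 B/s (~23 kB/s)
```

This is only a back-of-the-envelope warranty budget, not a prediction of when the drive actually fails; real write amplification varies with the workload.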
Would you put the VMs, Proxmox and the containers on the SSD, and use the 3 TB SATA disk as storage for the VMs and containers?
I wouldn't use an HDD as VM storage. Maybe you remember how much faster your Windows computer got when upgrading from an HDD to an SSD? It's the same here with VMs, except that you run multiple OSs from that HDD. So if you try to run 10 VMs off one HDD, each gets roughly 10 times slower.
HDDs can only handle around 100 operations per second, so they just can't handle all the IOPS from multiple VMs running in parallel. SSDs can handle tens or hundreds of thousands of IOPS instead. So you really want to use SSDs as your VM storage.

You should also consider installing a Proxmox Backup Server, because RAID will never replace a backup.
 

Hello, many thanks for the information. I am very confused now; I do not want to spend money and waste it. So, my question is...

What use can I give to the 3 TB HDD?

I can replace the current SSD with enterprise SSDs (two of them); then what use can I give to the 1 TB consumer SSD and the HDD? The R210 II only has 2 bays for 3.5" drives, or 4 for 2.5" with an adapter. I have learned a lot from your recommendations, thanks! So, if I buy 2 enterprise SSDs, how many GB do you recommend? 1 TB enterprise SSDs are very expensive... but I have found some I can afford. I keep reminding myself that the purpose of the server is to be the center of the smart home, networking and routing, and a media server, not professional use...

I have found some Micron M600 1 TB SSDs for 100€ each...

I have 2 QNAP NAS which can handle the backup function and keep their NAS role, without adding NAS duties to the server; then the server will only run containers and VMs.


It is still not clear to me how I can do it. The problem with the internet is that you find too many opinions.

Some tell you to use HDDs for bulk read/write capacity with an SSD as cache. Others say to use SSDs only, etc. What I am clear about is that I want a redundant system (the documents I have consulted recommend ZFS, although it is more difficult to manage). I want a durable server, but not overkill for a home system.

From these doubts arise all the questions I am asking... in your opinion, with the disk space of the R210 II, what setup would you do?

All SSDs? What SSD capacity for the purpose I described above? For example, if I get the two 1 TB enterprise SSDs, would those two alone be enough for everything in ZFS? Would you use the Crucial SSD for another function? Would 2x 1 TB SSDs be too much for the use I want to give them?

Your help is much appreciated!
 
If you already have two NAS for backups, I don't really see a use for the consumer SSD and HDD. I would just use 2 enterprise SSDs as a ZFS mirror for boot/root storage + VM storage. Or, if you already have that adapter and maybe some old small consumer SSDs that you don't need anymore, I would use 2x 120 GB consumer SSDs as a ZFS mirror for boot/root and another 2x bigger enterprise SSDs as a ZFS mirror for your VM storage.
I personally like it when the VMs are not stored on the boot/root disks, because it makes things easier to manage (like destroying and recreating the VM pool without needing to reinstall Proxmox), but it's totally fine for a home server to use the same pair of SSDs for everything.

How much space you really need, only you can know. Proxmox itself will be fine with just 16-32 GB. And a ZFS pool should always keep 10-20% of its capacity free. So let's say you buy 2x 1 TB SSDs; that results in a mirror where 800-900 GB are actually usable. Also keep in mind that ZFS supports thin provisioning: if you create 5x 100 GB virtual disks but only write 10 GB to each of them, they will not consume 500 GB but only 50 GB. And ZFS supports compression at the block level, so here my VMs only use 33-80% of their uncompressed size.
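As a rough sketch of that sizing arithmetic (the 10-20% free-space rule and the 5x 100 GB thin-provisioning example are the numbers above; they are rules of thumb, not hard limits):

```python
# Usable capacity of a 2x 1 TB ZFS mirror after reserving free space.
mirror_size_gb = 1000                  # 2x 1 TB in a mirror -> ~1 TB raw
free_pct = (20, 10)                    # keep 10-20% of the pool free
usable_gb = [mirror_size_gb * (100 - p) // 100 for p in free_pct]
print(usable_gb)                       # [800, 900] GB actually usable

# Thin provisioning: provisioned size vs. space really consumed.
provisioned_gb = 5 * 100               # 5x 100 GB virtual disks
consumed_gb = 5 * 10                   # only 10 GB written to each
print(provisioned_gb, consumed_gb)     # 500 GB provisioned, 50 GB consumed
```

Compression savings come on top of this, but they depend entirely on the data, so they are best measured rather than budgeted.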

Right now I'm running 29 VMs and they only need about 200 GB of storage, or 300 GB with snapshots. I don't run Ubiquiti or Home Assistant, but my OPNsense is using 5.59 GB of real capacity (I installed it to a 32 GB virtual disk). So I would guess two 200 GB or 400 GB SSDs would be totally fine for that. If you have a NAS, you can also store things like ISOs and container templates there instead of on the Proxmox server. And if a VM uses a lot of big files (like Plex streaming your media), I would put the media on the NAS and use it inside the VM via NFS/SMB shares.
 
Now things are much clearer to me.

What do you think of this setup?

2x Crucial consumer SSDs of 250 GB for boot in a ZFS mirror, and 2x Kingston DC500M 480 GB for VM storage?

Thanks a lot for the help
 
It's really easy to back up and restore VMs. Add an SMB/NFS share on your NAS as a backup storage. Use the built-in backup function to save backups to your NAS. Destroy your VMs. Destroy the pool and remove the drives. Add other drives and create a new pool for the VMs. Restore the VMs from the NAS.

So you could, for example, first just buy 2 small cheap SSDs (or use anything you already have lying around) and install Proxmox to them, and then use your 1 TB SSD as VM storage for testing. You can set up all your VMs and see how much storage you actually need. Then back the VMs up, buy some new enterprise SSDs, destroy your old VM pool, create a new one with the new SSDs and restore the VMs. You can also monitor your write amplification and writes per day. Maybe the 1 TB consumer SSD will be enough for your workload; in that case you could just buy another 1 TB consumer SSD and add it to your pool, converting the single-disk pool into a mirror.
 