All-in-one Server

d1kiy
Nov 17, 2020
Hello,

I recently purchased a server that I would like to use for virtualization and storage, and since I have heard many positive reviews, I decided to use Proxmox as my main system. I have no experience with Proxmox, so I would appreciate it if you could point me in the right direction.

I have 4 drive bays, and initially I planned to install two SSDs in a ZFS mirror for Proxmox and the virtual machines, and two HDDs, also mirrored, for storing files. But since I do not have a large number of files to store there yet, I decided to limit myself to only the two SSDs for now, and I have questions about partitioning the disks.

Could you tell me how I should partition the disks so they can be used for the OS and the VMs while leaving some space for files that will be used by a FreeNAS or OMV virtual machine, and can I do this during the Proxmox installation? It is also important that I can add HDDs in the future, pass them to the NAS virtual machine, move all the files there, and free up SSD space for other virtual machines.

I would be very grateful for any tips, since I would like to get everything right from the start so I don't suffer later.
 
The Proxmox installer can do all that for you. See here. With the "hdsize" field you can limit the size of the partitions so empty space is kept at the end of the drives. But if you want to use HDDs later for the NAS data, I would just use the full SSD size and store virtual HDDs on your SSDs like all the other VM virtual disks. That way you don't run into problems later when you try to grow the existing partitions so the complete SSD becomes usable again.

Keep in mind that consumer SSDs aren't recommended for ZFS because, depending on your workload, they might get really slow or die within months.
 
Do you want to pass through the disks? This is possible but generally not needed.

You can install PVE on your SSDs with a ZFS mirror; the installer will automatically add a local ZFS storage where you can store the VM disks. If you want to add two hard disks later, you can insert them into the node and create a new zpool, and you can then move the VM disks there with storage migration. If you are okay with plain VM disks and do not want to use passthrough, there is nothing more to configure in the VM itself.
What you can already do is give the VM two disks, one for the OS and one for the data storage inside the VM, so in the future you can move the data disk to the HDDs and keep the OS on the faster SSDs.
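As a rough sketch of how that later migration could look from the command line (pool name, device paths, VM ID and disk slot below are only placeholders; in practice use the stable /dev/disk/by-id/ names):

# create a mirrored pool from the two new HDDs
zpool create hddpool mirror /dev/sdc /dev/sdd
# register it as a Proxmox storage that may hold VM disk images
pvesm add zfspool hddpool --pool hddpool --content images,rootdir
# move the data disk of VM 100 from the SSD storage to the new pool (storage migration)
qm move_disk 100 scsi1 hddpool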

I would recommend reducing the complexity of the setup. So I would not use passthrough; I would store all data on VM disks. If I need to move to a new server, I can back up the VM and restore it on another node, without any special configuration, without problems or data loss.

But keep in mind: ZFS is okay for the OS, giving you an easily installed mirror for redundancy, but I would not use it as a datastore. The main reason is the missing defrag feature. At around 80% fragmentation the storage gets into serious trouble, and currently there is only one way to solve this: you need to add a new set of drives to the pool. If you are running out of bays, the other option is to back up the VMs to external storage, destroy the ZFS pool, recreate it and import the VMs again.
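For reference, expanding an existing pool with an additional set of drives is a single command (pool and device names below are placeholders). A sketch, with the caveat that ZFS does not rebalance existing data onto the new vdev; only newly written data is striped across both mirrors:

# add a second mirror vdev to the existing pool
zpool add tank mirror /dev/sde /dev/sdf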
 
You can install PVE on your SSDs with a ZFS mirror; the installer will automatically add a local ZFS storage where you can store the VM disks. If you want to add two hard disks later, you can insert them into the node and create a new zpool, and you can then move the VM disks there with storage migration. If you are okay with plain VM disks and do not want to use passthrough, there is nothing more to configure in the VM itself.
What you can already do is give the VM two disks, one for the OS and one for the data storage inside the VM, so in the future you can move the data disk to the HDDs and keep the OS on the faster SSDs.
I would recommend reducing the complexity of the setup. So I would not use passthrough; I would store all data on VM disks. If I need to move to a new server, I can back up the VM and restore it on another node, without any special configuration, without problems or data loss.
Easier, yes, but it will cause a lot of overhead because of the virtualization and mixed block sizes. I don't know how much better pseudo passthrough using "qm set" is compared to that, as it still involves some kind of virtualization by QEMU. If you want real passthrough, without virtualization but with real physical hardware access for your VM, you need a dedicated PCIe HBA for the two HDDs and pass it through using "PCI passthrough".
But keep in mind: ZFS is okay for the OS, giving you an easily installed mirror for redundancy, but I would not use it as a datastore. The main reason is the missing defrag feature. At around 80% fragmentation the storage gets into serious trouble, and currently there is only one way to solve this: you need to add a new set of drives to the pool. If you are running out of bays, the other option is to back up the VMs to external storage, destroy the ZFS pool, recreate it and import the VMs again.
I haven't had any problems with that. If you don't use deduplication, the drives don't get that fragmented. My pool has been running for 1.5 years, holds 17TB of data, and fragmentation is still only at 3%.
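The fragmentation value quoted here is the FRAG column that zpool reports, which measures fragmentation of the pool's free space. You can check it on your own host with:

zpool list -o name,size,alloc,free,frag,cap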
 
The Proxmox installer can do all that for you. See here. With the "hdsize" field you can limit the size of the partitions so empty space is kept at the end of the drives. But if you want to use HDDs later for the NAS data, I would just use the full SSD size and store virtual HDDs on your SSDs like all the other VM virtual disks. That way you don't run into problems later when you try to grow the existing partitions so the complete SSD becomes usable again.

Keep in mind that consumer SSDs aren't recommended for ZFS because, depending on your workload, they might get really slow or die within months.
Sounds like a good plan, and I probably will do it this way.
I'm going to use WD Red SA500 SSDs, so I think there will be no such problems with them.

Do you want to pass through the disks? This is possible but generally not needed.
I thought I would create one more ZFS mirror in Proxmox and somehow (how?) pass it to the VM.

But keep in mind: ZFS is okay for the OS, giving you an easily installed mirror for redundancy, but I would not use it as a datastore.
I did not quite understand why you advise against using ZFS, when, as far as I understand, data storage is its main purpose. How else should I mirror the HDDs then?

Easier, yes, but it will cause a lot of overhead because of the virtualization and mixed block sizes. I don't know how much better pseudo passthrough using "qm set" is compared to that, as it still involves some kind of virtualization by QEMU. If you want real passthrough, without virtualization but with real physical hardware access for your VM, you need a dedicated PCIe HBA for the two HDDs and pass it through using "PCI passthrough".
Real pass-through is not an option for me as I don't have a separate HBA for it. How would you then advise me to attach the HDDs to the VM with minimal overhead?
 
I'm going to use WD Red SA500 SSDs, so I think there will be no such problems with them.
They are still consumer SSDs without power-loss protection (so really high write amplification on sync writes, because the internal RAM can't be used for caching) and with not very durable NAND (350 TBW compared to 10375 TBW).
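If you want to watch how quickly your SSDs actually wear, smartmontools can show the relevant counters. A sketch; the exact attribute names vary by vendor and drive type, so the grep pattern below is only an example:

apt install smartmontools
# SATA SSD: wear / total-written attributes
smartctl -A /dev/sda | grep -iE "wear|written"
# NVMe SSD: look at "Percentage Used" and "Data Units Written"
smartctl -a /dev/nvme0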
I thought I would create one more ZFS mirror in Proxmox and somehow (how?) pass it to the VM.
No, you can't pass folders to VMs. That only works with LXCs. You can:
1.) Create a ZFS mirror on the host, create virtual HDDs on it and use those empty, unformatted virtual HDDs for the VMs. This way you get a lot of overhead, but at least it is somewhat backed by ZFS and you don't need to care about RAID on the guest side.
2.) Pseudo passthrough the empty, unformatted HDDs to the VM (see the example after this list). Neither the host nor any other VM will be able to use those HDDs, because they become exclusive to the one VM you pass them to. You can use any port on the mainboard and don't need to buy a dedicated HBA. But I call it pseudo passthrough because it is still somewhat virtualized: a virtual SCSI controller does the translation from guest to physical drive on the host. If you want ZFS or software RAID, you need to set that up on the guest side (installing FreeNAS or something like that).
3.) Use PCI passthrough to pass the complete SATA controller, with every drive attached to it, to a single VM. This way there is no virtualization and your VM can directly access the physical drives, so you get no overhead. But your SSDs cannot be connected to the same controller, or they would be passed through too. And the HDDs are exclusive as well; you wouldn't be able to use them for other VMs or the host itself.
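For option 2, the pseudo passthrough mentioned above is done with "qm set" on the host, pointing a virtual SCSI slot of the VM at the physical disk. A sketch (the VM ID 100, the scsi1 slot and the disk ID are placeholders):

# find the stable ID of the HDD you want to hand to the VM
ls -l /dev/disk/by-id/
# attach the whole physical disk to VM 100 as an additional SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-XXXXXXXX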
Real pass-through is not an option for me as I don't have a separate HBA for it. How would you then advise me to attach the HDDs to the VM with minimal overhead?
If you really want to minimize overhead and have a free PCIe slot, there are rebranded second-hand LSI RAID controllers supporting up to 8x SATA for just 35€. If you replace (flash) the rebranded firmware with the official LSI "IT mode" firmware, you get a great enterprise-grade HBA for cheap.
 
They are still consumer SSDs without power-loss protection (so really high write amplification on sync writes, because the internal RAM can't be used for caching) and with not very durable NAND (350 TBW compared to 10375 TBW).
How bad are they for a home server, considering that in the future I still plan to add HDDs for data storage, and these SSDs will only be used for the host and the VMs?

No, you can't pass folders to VMs. That only works with LXCs.
Maybe I should then use LXC?

2.) Pseudo passthrough the empty, unformatted HDDs to the VM. Neither the host nor any other VM will be able to use those HDDs, because they become exclusive to the one VM you pass them to. You can use any port on the mainboard and don't need to buy a dedicated HBA. But I call it pseudo passthrough because it is still somewhat virtualized: a virtual SCSI controller does the translation from guest to physical drive on the host. If you want ZFS or software RAID, you need to set that up on the guest side (installing FreeNAS or something like that).
As I understand it, this is my only option with minimal overhead if I have no way to add an HBA card?

I have only one PCIe slot, which I do not want to occupy yet, since I am considering adding a 10Gb network card in the future. I also do not need that many disk slots, given that I currently have only 30GB of data (that's why I postponed the HDD purchase), and I think it will take years until I need to store anything close to a terabyte.
 
I haven't had any problems with that. If you don't use deduplication, the drives don't get that fragmented. My pool has been running for 1.5 years, holds 17TB of data, and fragmentation is still only at 3%.
I didn't use dedup either, only 2x 120 GB WD Green for the OS, and fragmentation is at 25%. The server was installed at the end of 2018.
For a Graphite server we used 4x 1TB SSDs in ZFS striped mirrors; the fragmentation rose very fast, so we decided to change to a HW RAID with LVM on top of it.
I work for a datacenter and we often use ZFS storage for storing backups; it is not a good solution for that. So at my own company we use Ceph with CephFS for storing backups: it is more robust and scalable than ZFS, and the main reason is that there is no fragmentation.

I did not quite understand why you advise against using ZFS, when, as far as I understand, data storage is its main purpose.
Please read my sentence in full, as it explains why I do not recommend it. "The main reason is the missing defrag feature" is the important part of my explanation. If you are aware of this and you start with only 2 drives but have 6 or 8 bays available, it might be okay. Because of the missing defrag feature, you only have two options to solve it: add an additional set of drives or destroy the existing pool. In my opinion, neither of these is a solution for me.

How bad are they for a home server, considering that in the future I still plan to add HDDs for data storage, and these SSDs will only be used for the host and the VMs?
If you do not have a heavy workload on it, like a big DB server or frequent file writes (like daily backups), this might not be a problem. I would say you can buy any consumer SSD and it should work for you without any problems, but if you increase the workload on the node you should consider changing the drives, e.g. to a Samsung PM883 or SM883.
But as @Dunuin says, normal consumer SSDs lack features like power loss protection and they are not really durable. A Samsung PM883 has a DWPD of 1.3, or a TBW of 1.3PB, while a WD Blue 1TB SSD has a TBW of only 400TB. The pricing in Germany is not much different; the Samsung PM883 costs around 40 - 50 EUR more but lasts up to three times longer.
 
If you do not have a heavy workload on it, like a big DB server or frequent file writes (like daily backups), this might not be a problem. I would say you can buy any consumer SSD and it should work for you without any problems, but if you increase the workload on the node you should consider changing the drives, e.g. to a Samsung PM883 or SM883.
But as @Dunuin says, normal consumer SSDs lack features like power loss protection and they are not really durable. A Samsung PM883 has a DWPD of 1.3, or a TBW of 1.3PB, while a WD Blue 1TB SSD has a TBW of only 400TB. The pricing in Germany is not much different; the Samsung PM883 costs around 40 - 50 EUR more but lasts up to three times longer.
The write amplification can also be a problem. You get write amplification from the VM to the host (here on my home server it is a factor of 7x), and your SSD will have internal write amplification too (here a factor of 2.5-3x on enterprise SSDs). So in total I get a write amplification of about factor 20x from the VM filesystem down to the flash chips inside the SSDs, and my VMs only use a total of 170GB of that SSD pool. My VMs only write 30GB/day (90% of that is logs/metrics written to DBs by Graylog and Zabbix), but because of the write amplification, 600GB/day is written to the SSD flash. And without an SSD with power-loss protection, on sync writes you might not get factor 3x; it could be 10x or 100x, so 30GB/day of sync writes could end up as 21TB/day written to the SSD. That's why it really depends on the workload. If you are not running any DBs or other things that use sync writes, it might not be such a big problem and the SSDs may survive for years.
The point is that you can't compare a Proxmox server with a normal computer. Even if you don't write much data, it can end up in extreme writes that quickly wear out the SSDs. I made that mistake myself: I bought two Samsung Evo M.2 SSDs and had to remove them a few weeks later, because I saw that they wouldn't survive a single year.
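If you want to estimate this on your own host, a rough approach (assuming smartmontools is installed; "rpool" and /dev/sda are placeholders, and not every SSD exposes a usable total-written counter) is to compare what the pool writes over a period with what the SSD itself reports as written to flash:

# write rate at pool level, sampled twice over 60 seconds
zpool iostat -v rpool 60 2
# bytes the SSD reports as written internally (counter name varies by vendor)
smartctl -A /dev/sda | grep -iE "total.*written|nand.*written"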
 
@Dunuin @sb-jw thanks for your time and advice.

After reading all your recommendations, I canceled the SSDs I had ordered and want to ask another question. Given an HPE Gen10 Plus server with only 4x SATA (and one PCIe slot, which I would rather not touch yet, although if you think it would benefit me I could install a card with two NVMe drives there), what arrangement of disks and which filesystems would you advise in order to get a virtualization server and a NAS on the same machine, and what kind of disks would you advise so that they do not die soon?
 
@Dunuin and @sb-jw are quite right that for commercial or intensive use you should purchase and use datacentre-grade drives. However, for personal lab use, my experience is that conventional drives perform quite adequately for that sort of use case. For example, this is the current usage for my home lab NVMe drive:

[Screenshot nvme.JPG: NVMe drive wearout statistics]
This drive hosts all of my LXC containers (6) and all of my QEMU VMs (5) and has been in 24/7 use for 9,500 hours (13 months). As you can see, the wearout is 4%, so unless it accelerates dramatically, I'm good for several years yet. It's a Sabrent Rocket NVMe, so just a consumer device.

This is my SSD boot drive
[Screenshot ssd.JPG: SSD boot drive wearout statistics]
Again, over 9000 hours and no major warnings

Finally, my viewpoint on the FreeNAS/TrueNAS debate. If the NAS aspect is the most important thing for you and you need maximum performance, install TrueNAS and run any VMs under TrueNAS. Otherwise, run TrueNAS as a VM with virtual hard drives. Performance is not too shabby at all.

[Screenshot cdm.JPG: disk benchmark results]
This is a virtual TrueNAS providing an iSCSI export to a Windows 10 VM. Not too bad at all, considering the storage is actually on spinning rust and has two layers of virtual abstraction in between.
 
what arrangement of disks and which filesystems would you advise in order to get a virtualization server and a NAS on the same machine, and what kind of disks would you advise so that they do not die soon?
Could you explain to us what exactly you want to achieve? What do you want / need: more storage, more IOPS, or a good mix of both?

If you want more storage, then you need to use hard disks. If you have a hardware RAID controller with BBU and cache, I recommend a RAID 5: you only lose the capacity of one drive but get good redundancy, and the XOR parity should not be a problem because your HW controller will handle it. Use only CMR disks; if you use SMR, make sure your RAID controller can handle it, otherwise rebuilds can often fail or the controller will refuse to use the disk.

If you want more IOPS, then you need to use SSDs or NVMe drives (the latter only if you want much more IOPS), and I would recommend enterprise disks for that. For SSDs I recommend a hardware RAID controller that can handle SSDs; if your HW controller does not treat the SSDs properly, it can wear them out faster than you would like.

If you want a mix of both, you should use a combination of hard disks in the hot-swap bays and NVMe or M.2 SSD drives on PCIe. If you can pass through the disks, you can use ZFS with caching on the faster disks. But this is more complex than the other options.
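To make the "caching on the faster disks" part concrete: with ZFS this usually means adding the SSD/NVMe devices to the HDD pool as a read cache (L2ARC) and/or a mirrored log device (SLOG) for sync writes. A sketch with placeholder pool, partition and device names:

# add an NVMe partition as read cache (L2ARC)
zpool add hddpool cache /dev/nvme0n1p4
# add a small mirrored SLOG for sync writes
zpool add hddpool log mirror /dev/nvme0n1p5 /dev/nvme1n1p5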

If you have 4x 3.5" bays, maybe the Seagate MACH.2 is a good solution for you, but it is not very cheap. You get "two disks in one": you only install 4 hard disks, but your OS will see 8, because of the two actuators per disk. But you need to make sure your configuration will not destroy all your data if one disk dies, so maybe a RAID 0 per physical disk and then a RAID 5 or something similar on top is a solution.
Otherwise an SSHD could be a good option too, but they often have some downsides.

Generally: if you have found a drive, google it, check whether there are tests / benchmarks, and decide whether you are okay with its downsides or not.

Personally I would prefer the Seagate IronWolf Pro, if you have LFF instead of SFF bays. For non-commercial use, a server with 4 hard disks should have enough storage and IOPS to handle a NAS and some VMs.
 
If the NAS aspect is the most important thing for you and you need maximum performance, install TrueNAS and run any VMs under TrueNAS. Otherwise, run TrueNAS as a VM with virtual hard drives. Performance is not too shabby at all.
For me, data security is more important in this case than performance. Are there problems with multi-terabyte virtual disks?

Could you explain to us what exactly you want to achieve? What do you want / need: more storage, more IOPS, or a good mix of both?
My server does not have a hardware RAID controller, only the HPE Smart Array S100i SR Gen10 software RAID, so what you wrote is not entirely applicable in my case without upgrades.

What I want from this server:
1. Several virtual machines for personal use (NextCloud, Home Assistant, Bitwarden, etc.)
2. NAS for storing personal data (family photos, etc.)

As I said, I do not need a lot of disk space; it would be ideal for me to use a mirror of two SSDs for virtual machines and a mirror of two HDDs for storage. But as you already pointed out, I then face the problem that I have no optimal way to pass the HDDs to a virtualized or containerized NAS.
Therefore, I am now thinking about adding a card for two M.2 disks (AOC-SLG3-2M2) to the only PCIe slot and using them for the OS and VMs. In this case, will I be able to pass the entire SATA controller to the NAS VM and avoid performance issues?
 
I've got a spare "AOC-SLG3-2M2" lying around. I used it with the two Samsung Evos that I had to remove because they would have died within one year due to all the write amplification.

The "AOC-SLG3-2M2" has no bifurication chip build in. You need to make sure that your Mainboard supports bifurication or you won't be able to use two M.2 SSDs. Otherwise only one M.2 SSD will work in NVME mode and the other one is slow SATA mode.
You got bifurication if your BIOS allows you to change a PCIe 8x slot to be run in "8x" or "4x4" mode. Or a PCIe 16x slot to be run in "16x", "8x8", "8x4x4" or "4x4x4x4" mode. For the "AOC-SLG3-2M2" you need to run that slot in "4x4" or "4x4x4x4" mode.
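Once the slot is switched to "4x4" (or "4x4x4x4") in the BIOS, you can verify from the Proxmox host that both M.2 drives really show up as separate NVMe devices; a quick check could be:

lspci | grep -i nvme
nvme list    # requires the nvme-cli package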
 
You might find this interesting
https://youtu.be/S1smyTOlB4M

Whatever you do it will be a compromise; the only way you can avoid that is to have two servers, one for the NAS and one for Proxmox.
Thank you, I'll take a look.

I've got a spare "AOC-SLG3-2M2" lying around. I used it with the two Samsung Evos that I had to remove because they would have died within one year due to all the write amplification.

The "AOC-SLG3-2M2" has no bifurcation chip built in. You need to make sure that your mainboard supports bifurcation or you won't be able to use two M.2 SSDs; otherwise only one M.2 SSD will work in NVMe mode and the other one will be in slow SATA mode.
You have bifurcation if your BIOS allows you to run a PCIe 8x slot in "8x" or "4x4" mode, or a PCIe 16x slot in "16x", "8x8", "8x4x4" or "4x4x4x4" mode. For the "AOC-SLG3-2M2" you need to run that slot in "4x4" or "4x4x4x4" mode.
Yes, I have checked and the server supports 4x4x4x4. Do you think this would be the best option for me? Maybe there are some NVMe SSDs that will not die so quickly?
 
Thank you, I'll take a look.


Yes, I have checked and the server supports 4x4x4x4. Do you think this would be the best option for me? Maybe there are some NVMe SSDs that will not die so quickly?
There are some, but 99% of them are U.2 and you need M.2-to-U.2 cables.
mmenaz said:
Try the Kingston DC1000M ('M' and NOT 'R'!!!), which is U.2 format (you need to buy an M.2 adapter or a PCIe adapter). 960GB is around $300, with power loss protection and 3500 TBW (1 DWPD / 5 years). I have the 1.92TB model; there is also a 3.84TB one.
Maybe this is an option. It's one of the cheaper U.2 SSDs.
 
There are some, but 99% of them are U.2 and you need M.2-to-U.2 cables.

Maybe this is an option. It's one of the cheaper U.2 SSDs.
I also found the Kingston DC1000B M.2 NVMe SSD, and it seems to have power loss protection. What do you think?
And first of all, I would really like to know whether this will solve my problem at all, and whether I can then pass through the SATA controller with all the hard drives to the NAS VM without overhead.
 
I also found the Kingston DC1000B M.2 NVMe SSD, and it seems to have power loss protection. What do you think?
The TBW isn't great (0.5 DWPD), but at least they have PLP.
And first of all, I would really like to know whether this will solve my problem at all, and whether I can then pass through the SATA controller with all the hard drives to the NAS VM without overhead.
You can test that as soon as your server arrives. Your mainboard and CPU must support PCI passthrough. Here you can see how to check that.
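As a sketch of what that check usually involves on the Proxmox host (enable VT-d / AMD-Vi in the BIOS first; the intel_iommu parameter is only needed on Intel systems):

# after adding intel_iommu=on to the kernel command line and rebooting,
# verify that the IOMMU is active
dmesg | grep -e DMAR -e IOMMU
# list the IOMMU groups; the SATA controller should sit in a group you can pass through as a whole
find /sys/kernel/iommu_groups/ -type l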
 
Greetings everyone,
I know this might be an old thread, but I'm seeking some assistance. I have a Dell R630 with a Supermicro AOC-SLG3-2M2 PCIe card. I've already set the bifurcation option in the BIOS and completed the install of Proxmox to an M.2 NVMe drive, but it always fails to boot from it. Is there something I'm missing during the process, or is booting from NVMe from that card not possible? Your guidance is appreciated.
 
