Newbie guide for installation partitions

Proxmoxbiff

New Member
Jun 14, 2021
Hi all,

Apologies for what is likely a stupid question, but I'm considering installing Proxmox for the purposes below, and before I delve into learning Proxmox I was hoping somebody could give me a quick steer please:

1 - Homelab to study cybersecurity, including SIEM, IDS/IPS, Security Onion, etc.
2 - NAS
3 - Plex server

My server consists of my old AMD 3950 gaming rig; I've purchased a 24-bay rack case, 2x 8-port HBA cards and 8x SATA (motherboard onboard), which I intended to use as 4x ZFS RAID-Z2 pools (6 drives each), plus a 1TB M.2 SSD which I thought I could use for the Proxmox install and the VM drives?

Unless I am mistaken, all the install guides I have been reading state that you need one disk for the Proxmox install, a separate disk for the VMs, and a third for the ISOs etc., which kind of ruins my future potential for a 4th RAID-Z2 pool (I currently have 18 disks).

Is it not possible for me to use the 1TB disk for both Proxmox and the VMs (with backups to a ZFS pool)? I've also been seeing some nightmare stories about write amplification and VMs destroying consumer-grade SSDs. I'm beginning to think Unraid is probably a wiser choice at the moment.

Can anyone confirm whether I can utilise the SSD for both the hypervisor and the VMs please? Also, if anyone has any good links or guides it would be much appreciated!

Many thanks!

G
 
Is it not possible for me to use the 1TB disk for both Proxmox and the VMs (with backups to a ZFS pool)?
You can use the same drive for Proxmox, ISOs and VMs.
I've also been seeing some nightmare stories about write amplification and VMs destroying consumer-grade SSDs.
Yes, you should get an enterprise-grade SSD. Then write amplification, performance and wear aren't a big problem anymore. If you've got the money for 16+ HDDs, then 200-500€ for a good pair of enterprise SSDs that everything relies on (you always want at least a mirror when using ZFS if you don't want your data to get corrupted) shouldn't be a problem.
I'm beginning to think Unraid is probably a wiser choice at the moment.
That really depends. Unraid and ZFS/TrueNAS each have their advantages and disadvantages. Unraid is nice because you can easily add more drives. ZFS has the benefit that it heals itself (protects against bit rot).
 
That's great thanks Duniun!

OK, so I've read lots of posts trying to identify my next steps and now appreciate that enterprise-class SSDs are a must! :( However, I can't find any M.2 NVMe drives anywhere that offer what appears to be a good lifespan rating. Intel Optane looks phenomenal but doesn't support AMD :(

Can anyone recommend any M.2 drives and a supplier (preferably UK based) I can buy from?

Really appreciate the support! I'd hate to have to go from Proxmox to Unraid because I can't find an M.2 disk :(

cheers!
 
If you want a good lifespan you need more NAND chips. The order is SLC -> MLC -> TLC -> QLC: SLC is the most durable NAND flash but the least dense, so you need more chips for the same amount of storage, while QLC is the most dense but has the worst lifespan. And if you buy a 1TB SSD that doesn't mean there is only 1TB of NAND cells inside. A consumer SSD will most likely use something like 1.1TB of NAND for a 1TB SSD, while an enterprise SSD might use, for example, 1.7TB. This extra storage that isn't usable is there as spare area to increase the lifespan, and again, it needs space on the SSD's PCB.

Enterprise SSDs also have power-loss protection, so that the built-in RAM cache can be used for sync writes. This power-loss protection needs a lot of space for all the capacitors that act as a backup battery to prevent data loss/corruption in case of a power outage. Without power-loss protection your SSD would be really slow and the write amplification would explode on server workloads like databases that use sync writes, because the cache can't be used to optimize writes.
Let's say an SSD can only write data in 128KB blocks and you want to do a sync write of 100x 4KB. With power-loss protection the SSD will cache it, combine 32x 4KB into 1x 128KB, and write all 100x 4KB blocks as 4x 128KB operations, so only 512KB are written. If your SSD doesn't have power-loss protection (with 99% probability your 1TB SSD won't have it if it's only a consumer/prosumer SSD) it can't cache and will write 100x 128KB instead, so 12800KB in total. So in one case 400KB are amplified to 512KB and in the other case to 12800KB.
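
If it helps to see the arithmetic spelled out, here's a quick Python sketch of that example. The 4KB writes and the 128KB NAND block size are just the illustrative numbers from above, not the specs of any real drive:

```python
import math

# Illustrative numbers from the example above (not real drive specs):
# a sync workload of 100 writes of 4 KB each, on an SSD whose NAND can
# only be programmed in 128 KB blocks.
io_count = 100
io_size_kb = 4
nand_block_kb = 128

logical_kb = io_count * io_size_kb  # 400 KB that the host actually asked to write

# With power-loss protection the SSD can batch the writes in its RAM cache
# and only program as many 128 KB blocks as the data fills.
with_plp_kb = math.ceil(logical_kb / nand_block_kb) * nand_block_kb  # 512 KB

# Without power-loss protection every sync write must go straight to NAND,
# so each 4 KB write costs a full 128 KB program operation.
without_plp_kb = io_count * nand_block_kb  # 12800 KB

print(f"with PLP:    {with_plp_kb} KB written -> {with_plp_kb / logical_kb:.2f}x amplification")
print(f"without PLP: {without_plp_kb} KB written -> {without_plp_kb / logical_kb:.0f}x amplification")
```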

All of this needs space on the PCB, and M.2 just has way too small a footprint; there simply isn't enough room for all that stuff. That's why most enterprise SSDs are PCIe cards or U.2 drives that come in a 2.5" or 3.5" form factor. If you have an M.2 slot and a free 2.5" bay you can buy a U.2 drive and an M.2 to U.2 cable so the U.2 drive can be used with your M.2 slot. There are also PCIe cards you can mount a U.2 drive on, but if you only have a consumer mainboard and are already using two HBAs that need 8 PCIe lanes each, you are most likely already running out of PCIe lanes, so you can't add anything new like a GPU for Plex/Emby/Jellyfin video encoding, a 10Gbit NIC, or an additional HBA or PCIe SSD.

There are some enterprise M.2 drives (like the Intel P4511, Samsung PM983, Micron 7300 Pro, Kingston DC1000B) but none of these have a great lifespan (1000-2000 TBW for a 1TB drive). You can get second-hand SSDs that were built for write-intensive workloads for 150€ that still have over 21000 TBW left, but these are all SATA/SAS SSDs.
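
To get a feel for what those TBW ratings mean in practice, here's a rough back-of-the-envelope Python sketch. The 200GB/day workload and the 32x amplification factor are just hypothetical numbers for illustration (the 32x comes from the worst-case sync-write example above):

```python
def endurance_years(tbw, daily_write_gb, write_amplification=1.0):
    """Very rough lifespan estimate: rated TBW divided by what actually hits the NAND."""
    return tbw * 1000 / (daily_write_gb * write_amplification * 365)

daily_gb = 200  # hypothetical example workload

# Best case, no write amplification:
print(f"{endurance_years(1500, daily_gb):.0f} years")                           # ~21 years for a 1500 TBW M.2 drive
# Same drive with a pessimistic 32x amplification on sync-heavy workloads:
print(f"{endurance_years(1500, daily_gb, write_amplification=32):.1f} years")   # ~0.6 years
# A 21000 TBW second-hand write-intensive drive, same pessimistic amplification:
print(f"{endurance_years(21000, daily_gb, write_amplification=32):.0f} years")  # ~9 years
```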

And I'm sure you will sooner or later replace your CPU/RAM/mainboard. For virtualization you really want a lot of RAM and PCIe lanes, which consumer CPUs and chipsets just don't offer.

I don't know how big your drives will be, but a rule of thumb for ZFS is 4GB + 1GB of RAM per 1TB of raw storage, or even 4GB + 5GB per 1TB of raw storage if you want to use deduplication. Let's say you have 24x 4TB drives and a pair of 1TB SSDs. According to that rule your ZFS should use 102GB or 494GB of RAM. It will work with less, but don't expect great performance then.
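
Those numbers come straight from the rule of thumb; as a quick sanity check in Python (the 24x 4TB + 2x 1TB layout is just the hypothetical example above, not your actual drive list):

```python
def zfs_ram_gb(raw_tb, dedup=False):
    """Rule of thumb: 4 GB base + 1 GB RAM per TB of raw storage,
    or 4 GB + 5 GB per TB if deduplication is enabled."""
    per_tb = 5 if dedup else 1
    return 4 + per_tb * raw_tb

raw_tb = 24 * 4 + 2 * 1  # 24x 4TB HDDs plus a pair of 1TB SSDs = 98 TB raw
print(zfs_ram_gb(raw_tb))              # 102 (GB, without deduplication)
print(zfs_ram_gb(raw_tb, dedup=True))  # 494 (GB, with deduplication)
```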
 
Once again, thanks for the support, very much appreciated! That makes perfect sense! The U.2 SSDs are far more affordable; however, doesn't Intel Optane offer these same features in M.2 format? I appreciate that doesn't help me much though :(

I've been looking for a U.2 -> M.2 converter but, unless I'm mistaken, you can only get an M.2 -> U.2?

***

Yeah, you're right, it's a consumer mainboard (Asus ROG Crosshair VIII Hero Wi-Fi) - I've used both PCIe slots with the 8-port HBA (PCIe x8) cards and the 8x onboard SATA, so I need to use the M.2. Just using an old GTX 760 GPU at the moment; I haven't got to the transcode question just yet as I use an Nvidia Shield, so I don't think I need it for now, although some of my other TVs will likely need it. Kind of burying my head in the sand at this point and hoping I can get over that bridge (pun intended) when I get to it :) lol!

ATM I have 6 bays free which I had intended for a future RAID-Z2 pool; I guess I could sacrifice a SATA port, but the more I read, this many TBs of ZFS requires far more memory than I was expecting - the Proxmox guides suggest ~1GB per TB of storage, and by the time I'm finished I suspect I'd be way over 100TB raw. I guess that's just for optimal performance, but at 32GB of RAM I think I may struggle with some VMs too.

I think this project has got a bit carried away at this point. I started off thinking I could use old hardware to run Unraid, then figured I'd prefer to learn Proxmox and loved the idea of ZFS bit-rot healing, so upgraded my hopes! (I encountered a RAID 5 URE a while back so learnt that lesson quickly :( had a backup though :) ) I did consider a used Dell R710 and disk shelf or something, but they are far too noisy for where they'd be stored, so I had to go consumer :(

Once again, many thanks for all your support mate! Very much appreciated! Definitely learnt a lot!
 
Once again, thanks for the support, very much appreciated! That makes perfect sense! The U.2 SSDs are far more affordable; however, doesn't Intel Optane offer these same features in M.2 format? I appreciate that doesn't help me much though :(

I've been looking for a U.2 -> M.2 converter but, unless I'm mistaken, you can only get an M.2 -> U.2?
I mean something like this...
[images: M.2 to U.2 adapter cables]
...to connect a U.2 SSD to an M.2 slot on your mainboard. But it could be problematic if you have a server case where all drive bays are connected to a backplane, so you are forced to use whatever your backplane supports.

***

Yeah, you're right, it's a consumer mainboard (Asus ROG Crosshair VIII Hero Wi-Fi) - I've used both PCIe slots with the 8-port HBA (PCIe x8) cards and the 8x onboard SATA, so I need to use the M.2. Just using an old GTX 760 GPU at the moment; I haven't got to the transcode question just yet as I use an Nvidia Shield, so I don't think I need it for now, although some of my other TVs will likely need it. Kind of burying my head in the sand at this point and hoping I can get over that bridge (pun intended) when I get to it :) lol!
Are you sure you aren't already running out of PCIe lanes? You have three mechanical x16 slots running electrically at x8 + x8 + x4, and a mechanical x1 slot that's electrically x1. So depending on where you put your GPU, your HBAs might already be slowed down. You want the HBAs in the top two slots so they can use the full x8, with your GPU in the electrically x4 slot. But I guess you can't use the GPU in the bottom slot because it is a dual-slot GPU and doesn't fit that way inside the case?
If you use an HBA in the bottom slot its speed will be halved, since it needs x8 but only gets x4.
ATM I have 6 bays free which I had intended for a future RAID-Z2 pool; I guess I could sacrifice a SATA port, but the more I read, this many TBs of ZFS requires far more memory than I was expecting - the Proxmox guides suggest ~1GB per TB of storage, and by the time I'm finished I suspect I'd be way over 100TB raw. I guess that's just for optimal performance, but at 32GB of RAM I think I may struggle with some VMs too.
Yeah, 32GB is nowhere near enough. My TrueNAS servers work fine with just 16GB and 32GB of RAM, but they only have 32TB of storage. And my Proxmox server is running out of RAM again (54 of the 64GB of RAM are used by my VMs). With that much storage you really want at least 96GB of RAM.

And ECC RAM is recommended, by the way. ZFS can't help you with corrupted data if it gets corrupted while in RAM, before it is stored to the ZFS pool. In the last 2 years, 3 of the 5 RAM modules in my gaming PC died slowly. Because that PC doesn't have ECC RAM, the RAM errors went unnoticed for weeks and most of the stuff I wrote to my NAS got corrupted. This was super annoying: my PC was slowly killing my data for weeks and I only noticed it when the bluescreens started showing up. And that happened 3 times, with each module failing...
Now I only use ECC RAM, even in my new gaming PC. If this happens again and RAM is corrupting stuff, it will be fixed automatically by the ECC, and if not, I at least get a notification that RAM errors were detected so I can fix it before too much gets corrupted.

And if you want to use such big HDD pools, you also might want to consider buying some additional enterprise SSDs, for example 3x 500GB SSDs as a three-way mirror to use as a ZFS "special device". This will speed things up because metadata will be stored on the fast SSDs instead of the slow HDDs. And you really want 2, or better 3, of them, because if that mirror fails, all data on all 24 HDDs will be lost. In some situations SSDs for L2ARC and SLOG could be useful too, for read/write caching.

And RAID never replaces a backup. With such a big pool it would take forever to back up everything using only a single 1Gbit NIC, so you might want to upgrade to 10Gbit or 40Gbit NICs later. For that you will need another x4, or better x8, PCIe slot. It's a shame if your NAS can read/write at 1-2 GiB/s but your NIC can only handle 118MiB/s, so everything is slowed down. There are 10Gbit NICs on the second-hand market starting at 35€.
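
To give a feel for why the NIC matters for backups, here's a small Python sketch. The 50TiB of used pool space is a made-up figure; plug in your own:

```python
def transfer_hours(data_tib, rate_mib_s):
    """Hours to move a full backup over the network at a sustained rate in MiB/s."""
    return data_tib * 1024 * 1024 / rate_mib_s / 3600

data_tib = 50  # hypothetical amount of used pool space to back up
for name, rate in [("1 Gbit (~118 MiB/s)", 118), ("10 Gbit (~1200 MiB/s)", 1200)]:
    print(f"{name}: {transfer_hours(data_tib, rate):.0f} hours")
# 1 Gbit (~118 MiB/s): 123 hours
# 10 Gbit (~1200 MiB/s): 12 hours
```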
 
I can't say anything about performance, but it's booting fine with all HDDs plugged in. Not in an array or anything just yet though.

Thanks again mate! You've really helped me solve the immediate problems and rule out Proxmox - a shame, but unavoidable.
 
You could look for some server/workstation boards. For example, last week two eBay auctions that I was watching ended.
One was a Supermicro X10SRA-F mainboard (x16 + x8 + x8 + x8 + x1 + x1 PCIe slots, 10x SATA, 2x Gbit NIC) + 64GB DDR4 ECC RAM (4 free RAM slots, so it could easily be upgraded to 128GB), a CPU cooler and a Xeon E5-2603 v3 (a really bad 6-core/6-thread CPU that I would replace). For 223€ that isn't a bad deal.
There was also a Supermicro X9DRI-F (dual CPU, x16 + x16 + x16 + x8 + x8 + x8 PCIe slots) + 2x Xeon E5-2630 v2 (so 12 cores / 24 threads in total) + 96GB DDR3 ECC RAM (still 8 free RAM slots for later upgrades) + two tower coolers for 212€.

The last one especially is great for the money, and you won't run out of RAM slots or PCIe lanes ;)

Just watch some auctions and see if there is something interesting being sold in your country.
 
