[SOLVED] From ESXi to PROXMOX

Nov 19, 2020
Hi!

Before I jump into a completely new world of virtualization (new for me, of course), I need a hint that my chosen hardware will play fine with Proxmox. I need to replace my 6-year-old homelab (ESXi) server (i5, 32 GB of RAM, HW RAID with 6 drives). Nothing special, but it has run well all that time. I want to replace it with this new small combo, which seems almost ideal for me:

That would be a nice replacement for my current system, and a chance to replace ESXi with Proxmox. But the motherboard and chassis are so small that I cannot put any decent RAID card in (because of the overheating problems that may arise). Instead, I will use an M.2 drive for the system and ZFS (RAIDZ) across the SSD drives at their full capacity. I chose RAIDZ because of the smallest space tax; the motherboard only has room for 4 SATA drives. So please, can anyone help me with tips and a strategy for how to prepare/format the SSDs (best alignment options for these particular SSD drives), and how/where to store the ZFS ZIL and logs? Can the system M.2 drive be used for the ZIL? Since this is a home server, I do not need any form of heavy logging; I just need a stable system. And yes, I hope I will end up with SSD speed better than that of the current system (old Areca 1231ML RAID card), which does 150 to 230 MB/s. I know ZFS is a software 'RAID' system, but is it realistic to expect that? Since the drives are SSDs and the whole system is newer, I hope that is not so unrealistic.

Thanks to anyone who finds time to reply to me :) Stay safe & healthy!
 
Samsung Evos maybe aren't good enough for enterprise, but I guess for your use case they should work. Maybe set them up in RAID10 in ZFS?
The ZIL shouldn't be on one drive; the recommendation is to put it on mirrored drives.
Everything else looks okay, 64 GB will be good enough.
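For reference, a striped-mirror ("RAID10") layout in ZFS can be sketched like this. The pool name `tank` and the device names are placeholders; in practice use `/dev/disk/by-id/` paths so the pool survives device renumbering:

```shell
# Hypothetical sketch: four SSDs as two mirrored pairs, striped together.
# ashift=12 aligns writes to 4K sectors, which suits modern SSDs.
zpool create -o ashift=12 tank \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde

# Verify layout and health:
zpool status tank
```

This gives half the raw capacity (vs. three quarters with 4-disk RAIDZ) but better IOPS and faster resilvering.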
 
I have no direct experience of using Proxmox with that hardware, but I would be very surprised if there were any issues. I do have a Proxmox installation running on a similar Xeon-based system in the same chassis, and it works very smoothly and reliably.

There is a bit of a debate about how much benefit ZIL/SLOG caches bring when using SSD pools, but it's quite practical to place these on partitions on the NVMe drive; others may have a different opinion. If performance is key, you might gain more from using RAID10, at the expense of usable space. As always with these things, there is rarely one correct answer.
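As a sketch of that partition approach (the pool name `tank` and the partition `nvme0n1p4` are placeholders, assuming a spare partition was left on the NVMe system drive):

```shell
# Hypothetical sketch: dedicate a spare NVMe partition as a SLOG device.
zpool add tank log /dev/nvme0n1p4

# A SLOG is not pool-critical metadata; it can be removed again later:
zpool remove tank /dev/nvme0n1p4
```

Note the SLOG only helps synchronous writes; async workloads won't see a difference.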

Depending on your expectations regarding lifetime and your usage pattern, you may well be advised to consider datacentre-grade SSDs.

Hope this helps, let us know how you get on :)
 
Thank you both for your fast answers. I will post here about the 'build process' of my new home server. If I'm honest, I would like to throw a HW RAID controller in, set up a volume, and forget about it. Or maybe I'll find a SATA port expander or a similar solution so I can use 8 drives; that would be perfectly enough. After migrating to the new server, the old one will be powered off, so I need at least 4 TB on the new one, but I hope for a few TB more. That's why I'm targeting a RAIDZ pool.

Bob: as I read, the consumer 860 Evo should be OK regarding durability (they have quite a high TBW), so I hope for some 5 years of work without problems; then I will replace them or the whole server (like now). I'm aware of the potential power-loss problem, but I guess I will take that risk. Crucial stuff will periodically be backed up to external hard drives.
 
You only need an HBA for ZFS, not a full-blown RAID controller with cache RAM and a BBU. The performance of the motherboard SATA ports should be good enough, and a cheap dual-port PCIe SATA card will give you the capability of 6 drives in that chassis. If you want to get every last ounce of performance from the disks, then an 8-port HBA card at either 6 Gb/s or 12 Gb/s should be fine, as they don't run particularly hot and are therefore generally passively cooled.

One of the advantages of ZFS is portability, so you can build the system now using the motherboard SATA ports, and if you decide you want to move to an HBA later, you can just export the pool (takes seconds), change the hardware, reconnect the drives, power up, import the pool, and you're back in action.
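That export/import dance, as commands (pool name `tank` is a placeholder):

```shell
# Cleanly detach the pool before the hardware swap:
zpool export tank
# ... shut down, install the HBA, reconnect the drives, power up ...

# Bring the pool back; ZFS finds its member disks by on-disk labels,
# not by controller or device path:
zpool import tank
# If it isn't found automatically, point the scan at stable paths:
zpool import -d /dev/disk/by-id tank
```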

ZFS also makes it pretty easy to do backups, even to external drives. Can you tell I'm a fan of ZFS? :)
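A minimal sketch of such a backup, assuming a dataset `tank/vmdata` and a second pool `backup` created on the external drive (all names hypothetical):

```shell
# Take a point-in-time snapshot, then replicate it to the external pool:
zfs snapshot tank/vmdata@2020-11-19
zfs send tank/vmdata@2020-11-19 | zfs receive backup/vmdata

# Later backups only ship the delta between two snapshots:
zfs snapshot tank/vmdata@2020-11-26
zfs send -i tank/vmdata@2020-11-19 tank/vmdata@2020-11-26 | \
    zfs receive backup/vmdata
```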
 
You only need an HBA for ZFS, not a full-blown RAID controller with cache RAM and a BBU. The performance of the motherboard SATA ports should be good enough, and a cheap dual-port PCIe SATA card will give you the capability of 6 drives in that chassis. If you want to get every last ounce of performance from the disks, then an 8-port HBA card at either 6 Gb/s or 12 Gb/s should be fine, as they don't run particularly hot and are therefore generally passively cooled.
Great! I think that will do. From the beginning I will go with 4 drives and a RAIDZ pool.
One of the advantages of ZFS is portability, so you can build the system now using the motherboard SATA ports, and if you decide you want to move to an HBA later, you can just export the pool (takes seconds), change the hardware, reconnect the drives, power up, import the pool, and you're back in action.

ZFS also makes it pretty easy to do backups, even to external drives. Can you tell I'm a fan of ZFS? :)
So when I need more disk space, I install an HBA card (8-port would be the best option), reconnect the drives, modify (resize) the pool, and that's it? That sounds perfect :)
 
So I will first need to copy all the data somewhere else, then destroy, recreate, and after that copy the data back? Ouch :( That seems like a reason to use RAIDZ from the start (1-disk redundancy, better than nothing), and I will hope for the best. That way there will be some additional free space available compared to the current server, and it will be all SSD, which I hope will work OK.
 
Online RAIDZ resizing (as in adding a new disk) will probably be a new feature around the end of 2021. You can, of course, replace the disks with bigger ones and grow the pool.
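Growing a RAIDZ pool that way means swapping each disk for a bigger one, one at a time, waiting for the resilver to finish between swaps (pool and device names below are placeholders):

```shell
# Let the pool grow automatically once all members are larger:
zpool set autoexpand=on tank

# Swap one disk for its bigger replacement:
zpool replace tank /dev/sdb /dev/sdf
zpool status tank   # wait until the resilver completes before the next swap

# ...repeat for the remaining disks; usable capacity grows only after
# the last disk has been replaced.
```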
 
OK, great. So from the beginning I will go with 4× 2 TB disks, and later there will be a way to expand. I think this is good enough for me. If the speed of such a volume is higher than my current system's, it will be perfect. Now I need to buy the components, and operation 'New home server' will start :-)
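For the record, that planned pool (four 2 TB SSDs in one RAIDZ vdev, roughly 6 TB usable before overhead) could be created like this. The pool name, dataset options, and device IDs are all illustrative placeholders:

```shell
# Hypothetical sketch of the planned 4-disk raidz pool.
# ashift=12 forces 4K alignment, which answers the SSD-alignment question:
zpool create -o ashift=12 -O compression=lz4 -O atime=off tank \
    raidz /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2 \
          /dev/disk/by-id/ata-SSD3 /dev/disk/by-id/ata-SSD4
```

`ashift` is fixed at vdev creation and cannot be changed later, so it is worth getting right up front.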
 
I do have a Proxmox installation running on a similar Xeon-based system in the same chassis, and it works very smoothly and reliably.

Bob: I overlooked that you have the same chassis. How loud is it, very? I mean the back vents? How many do you have inside? Since my motherboard will come with passive processor cooling, I expect some problems could arise because of that. Thanks!
 
@GazdaJezda I can't really answer the noise question because it is running in a noisy plant room at a remote office of the company I work for. The plant room has no aircon and is not well ventilated, but we have had no issues in more than two years.
 
Yupiiii! The progress bar is at 50%; the ProxBox has just arrived :) Now I will order the disks, and then I will bother you here again about best install practices :)
 

Attachments

  • prox-box.jpg (73.9 KB)
For the system itself (bootable VE), I need a suggestion on which M.2 drive (how big) will be enough for the logs and ZIL, so the data SSDs would not suffer too much. Is there any affordable enterprise-grade model? Thanks!
 
Bob: I overlooked that you have the same chassis. How loud is it, very? I mean the back vents? How many do you have inside? Since my motherboard will come with passive processor cooling, I expect some problems could arise because of that. Thanks!
I have a couple of these chassis that I use for various tasks. I don't have the X11 motherboard, but run the Supermicro Xeon-E 1540s. I can't vouch for your system specifically, but as soon as my systems start to actually do anything (besides just being switched on), they get very hot! I've been trying and tinkering for quite a while to get these systems as quiet as possible, and I have realized that it is possible, but only to a certain extent.

The cooling requirements increase as the CPU/system gets hotter. Alas, if the fans you have are not providing enough airflow, you will burn your CPU, or something else. In the end, I realized I had to install the chassis fans (3 of them) to make sure the box would stay stable, as I wouldn't want it to simply shut down and potentially destroy data on the HDDs/SSDs or damage any other components.

These systems are really nice, but they are not designed to be quiet. If you only have a passive cooler on the CPU, you have to get the chassis fans!
 
Elmo: thanks, that's good to know. I also bought 3 fans which will go in the box; I hope they will be enough. Now it's settled: only the disks are missing, and then assembly of the box will begin. Then I will take a picture and post it here for review :)

bobmc: thanks, I will order that one with the Evos tomorrow. Also, when it comes, I will post here and beg for further help :)

Thank you guys! Stay safe!
 
bobmc: unfortunately, the suggested Hynix M.2 drive would only reach me next year (January 2nd), so I ordered another drive from our local store, which will come tomorrow. That way I can plug it in, at least install VE on it, and get familiar with it until the SSDs come.

- https://www.samsung.com/us/computin...drives/ssd-970-evo-nvme-m2-500gb-mz-v7e500bw/

I know it isn't Hynix; did I take a shot in the dark with this one? Tom also has a few good words for it.
 
