Need some installation support advice

jsieler

Member
Oct 18, 2019
Hello Community!

I'm new to Proxmox; I worked with vSphere at my company and with bare metal without virtualization. Some time ago I got a Dell T620 with a PERC H710 card and used it as my home storage and media server. Now I want to install Proxmox and get the best out of it.

My current hardware:
Dell T620 + PERC H710 (will be replaced with an H310 today and flashed to IT mode)
1x SSD (was used for OS boot, Debian)
8x 2TB HDD (was in RAID6)

I've never used Proxmox or ZFS before. I see a lot of people using ZFS and moving from hardware RAID to software solutions. So now I have a few questions about all this and how best to approach it.

1. There are a few RAIDZ levels (RAIDZ1, RAIDZ2, RAIDZ3). I know about their parity, but which one does Proxmox use, and which is more reliable for virtualization together with data/media storage?
2. Is it wise to install Proxmox to the SSD and use the HDDs as extra storage? Or is it better to install Proxmox directly onto the HDDs?

Thanks for your help! :)
 
1. There are a few RAIDZ levels (RAIDZ1, RAIDZ2, RAIDZ3). I know about their parity, but which one does Proxmox use, and which is more reliable for virtualization together with data/media storage?
You can choose the disks and the RAIDZ level during the installation for the first zpool. If you want to split your storage into multiple pools, you can create the other pools after the system is set up and running. See our documentation for that [0].

Which RAIDZ level to use depends on your IO needs, how many disks may fail before the pool is lost, and your capacity requirements. A definitive answer is hard to give. This article [1] from one of the ZFS developers can help with your decision.

Don't forget that you can also create RAID10-like configurations by using mirrored VDEVs for the pool.
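For example, a later pool over the eight 2 TB disks could look roughly like this (pool name and disk paths are placeholders; use the stable /dev/disk/by-id names of your drives):

    # RAIDZ2 over eight disks: any two disks may fail
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
        /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
        /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
        /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8

    # Or a RAID10-like pool built from four mirrored VDEVs
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
        mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
        mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
        mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8

    # Make the pool available as a PVE storage for guest disks
    pvesm add zfspool tank-storage --pool tank --content images,rootdir

As a rough capacity guide: RAIDZ2 over 8x 2 TB yields about 12 TB usable, the mirrored layout about 8 TB.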

2. Is it wise to install Proxmox to the SSD and use the HDDs as extra storage? Or is it better to install Proxmox directly onto the HDDs?
That depends on the SSD. If you only have one SSD, you lack any redundancy in case it fails. Also be aware that a non-enterprise SSD can wear out quite fast with all the logs that are written to it.
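To keep an eye on that, check the wearout value in the SMART data (the PVE GUI shows it in the node's Disks panel); from the shell it would be something like this, with /dev/sda as a placeholder device:

    # Full SMART report; look for a wear/endurance attribute,
    # e.g. "Wear_Leveling_Count" on Samsung SATA SSDs
    smartctl -a /dev/sda

    # For NVMe drives the "Percentage Used" field serves the same purpose
    smartctl -a /dev/nvme0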

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_zfs
[1] https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz
 
Thanks for your reply. Those are just normal Samsung SSDs (960 Evo, if I remember right). So it makes more sense to take two SSDs as a mirror, install Proxmox onto them, and use the rest of the HDDs for storage. Right?
 
My 2 cents:

If it is a single node (server/PC), then put it all on an SSD and don't worry about ZFS (don't flame me), as it really won't help you.
The same goes if you have multiple SSDs or HDDs or a combination on a single node: don't bother with it.

If you have the option, buy a cheap QNAP or similar box, which will give you NFS and iSCSI storage.
The good thing is they are nearly bulletproof for a single node, which gives you storage disk failure redundancy.
Even a cheap one will give you RAID1 (2 drives) or RAID10 (4 drives) with no loss of speed (depending on your network, but a 1 Gb network is fine).
You can even assign a backup for each VM daily, which is great.
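For reference, hooking such a NAS share into PVE is a one-liner (server address, export path, and storage name below are made up):

    # Add an NFS export from the NAS as a PVE storage
    pvesm add nfs qnap-nfs --server 192.168.1.50 \
        --export /share/pve --content images,backup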


When you get more into it, you can try a 3-node cluster with 3x2 SSD drives using Ceph, which runs well and gives you all the "death" redundancy you will need until you want a 7+ node cluster, but that is a whole other story.

Ceph is when you understand that NFS and iSCSI are only for backup.
Ceph mirrors a (SSD/HDD/NVMe) drive across all 3 nodes in real time. Not exactly fast, but better than most for a 3-node system; about as fast as using a NAS (QNAP or similar), BUT node failure is covered, i.e. with 3 nodes, a node can fail and you are fine. If a NAS fails a drive you are fine, but if a NAS fails totally you are the proverbial "F%^K*d".

If it was me and you have a single node, then select anything and have fun.

thanks
damon
 
don't worry about ZFS (don't flame me), as it really won't help you
I am curious why you think that, given that ZFS has many features that make it interesting for a single node as well. For example compression, which, besides the savings in storage space, can speed up disk IO, because CPUs are usually faster at (de)compressing data than disks are at writing the uncompressed data. The checksumming of everything in ZFS helps a lot to avoid bit rot and will surface problems more quickly than the SMART values of a disk in many situations.
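For example, with a hypothetical pool named "tank":

    # Enable LZ4 compression (cheap on CPU, often speeds up IO)
    zfs set compression=lz4 tank

    # See how well the stored data actually compresses
    zfs get compressratio tank

    # Read and verify every checksum in the pool to surface silent corruption
    zpool scrub tank
    zpool status tank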

If you have the option, buy a cheap QNAP or similar box, which will give you NFS and iSCSI storage.
Why add more complexity to the overall system if the PVE node has enough storage?
The good thing is they are nearly bulletproof for a single node, which gives you storage disk failure redundancy.
Disk failure redundancy can also be had locally on the PVE node. With ZFS it is actually more "bulletproof" than the mdraid used in Synology and QNAP NAS systems (well, okay, the enterprise QNAP NAS units also offer ZFS).
with no loss of speed (depending on your network, but a 1 Gb network is fine)
A 1 Gbit Ethernet connection can easily be saturated with storage traffic, slowing down the VMs compared to local storage: 1 Gbit/s tops out at roughly 120 MB/s, well below even a single SATA SSD.
You can even assign a backup for each VM daily, which is great.
Have a look at the integrated backup solution of PVE: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_vzdump
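A manual run looks like this (the VM ID and storage name are examples; scheduled jobs can be configured under Datacenter -> Backup in the GUI):

    # Snapshot-mode backup of guest 100 to the storage named "local"
    vzdump 100 --mode snapshot --storage local --compress lzo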
 
Thanks for your reply. Those are just normal Samsung SSDs (960 Evo, if I remember right). So it makes more sense to take two SSDs as a mirror, install Proxmox onto them, and use the rest of the HDDs for storage. Right?
This way the IO of the system and the guests will not interfere as much. If the SSDs and the HDDs are on the same backplane, that point is kind of moot anyway.

If you attach the SSDs to the local SATA controller instead of the HBA, you should get better IO overall.

Since they are not enterprise SSDs, I am not sure how long they will last. Have a look at the wearout value in the SMART data.
Alternatively you could get two small Intel Optane SSDs; those should last, and they are not expensive if you get the ~30 GB ones.
 
Why add more complexity to the overall system if the PVE node has enough storage?
Yeah, I don't get the point of having an additional QNAP NAS. Having a 16-bay server and additionally getting a QNAP... what for? Makes no sense to me...

This way the IO of the system and the guests will not interfere as much. If the SSDs and the HDDs are on the same backplane, that point is kind of moot anyway.

If you attach the SSDs to the local SATA controller instead of the HBA, you should get better IO overall.

Since they are not enterprise SSDs, I am not sure how long they will last. Have a look at the wearout value in the SMART data.
Alternatively you could get two small Intel Optane SSDs; those should last, and they are not expensive if you get the ~30 GB ones.
Yes, they're not on the same backplane and will be connected to the PERC H310 in HBA mode. I will have to check what the two SATA ports on the Dell T620 are for... (https://www.sparepartz.de/shop/media/images/org/Art0F5XM3.jpg). But I will probably connect the SSDs to the HBA too...
 
I will have to check what the two SATA ports on the Dell T620 are for... (https://www.sparepartz.de/shop/media/images/org/Art0F5XM3.jpg). But I will probably connect the SSDs to the HBA too...
Apparently they are for the optional CD/DVD or tape drives. I would give it a try to see if the system can be installed to SSDs connected to them and also boots afterwards.

A last hint: try to install in UEFI mode. This way the systemd-boot bootloader is used instead of GRUB if you use ZFS for the root file system. It is the preferred way now, because GRUB and ZFS are somewhat problematic under certain conditions, for example when enabling ZFS features that GRUB cannot handle.
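Whether the installed system (or a live environment) actually booted via UEFI can be checked quickly:

    # The directory only exists when the system was booted in UEFI mode
    [ -d /sys/firmware/efi ] && echo "UEFI mode" || echo "legacy BIOS mode"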
 
One last question... Is it wise to install Proxmox to HBA-connected storage? I've seen some posts saying it's better not to... but some of them are a bit older.
 
So... I've spent a few hours testing, and everything was without success.

I downloaded the latest version of Proxmox, flashed the ISO onto a USB stick, and started the installation. The installer booted in UEFI mode.

I put in 2x 1TB HDDs that I wanted to use as RAID1 with the ZFS file system, and 8x 2TB that should be used as a storage pool, which I wanted to configure later. All of those HDDs are connected to the PERC H310 flashed to IT mode.

So, after the installation started, I chose RAID1 ZFS and selected two of my 1TB HDDs. Everything went well except the last step: it told me that the device is busy and can't be unmounted, and therefore failed.

Okay, another try, just with a single HDD with ext4. Again an error at the last step, telling me that EFI can't be installed.

Okay, now with the HDD connected to a mainboard SATA port in AHCI mode. Same EFI error as before.

Is there anything I'm doing wrong?
 
