Need storage advice

bogesman

Renowned Member
Aug 7, 2015
I plan on using Proxmox for our VM needs and I need advice on which storage model to use.
I will have 2 LSI hardware RAID arrays:
one with HDDs and a second with SSDs.

I want to make all Proxmox nodes diskless and boot them from iSCSI. I need that because I want to be able to easily switch a node to another image.

My main question is: should I install Proxmox on the server with the hardware RAIDs, add them as local storage, and then cluster the other Proxmox nodes with it?
Or should I install only OpenMediaVault on that server and Proxmox on the nodes? In that second scenario, what would be best to use for storage?

My guess is that there are options, but I need the best possible output from the RAIDs.
 
No one?
Hard or stupid question?
Of course I will run tests, but I need some info to know what to look and aim for.
 
I plan on using Proxmox for our VM needs and I need advice on which storage model to use.
I will have 2 LSI hardware RAID arrays:
one with HDDs and a second with SSDs.
Hi,
are these internal RAIDs or an external iSCSI RAID?
I want to make all Proxmox nodes diskless and boot them from iSCSI. I need that because I want to be able to easily switch a node to another image.
How many nodes are "all pm nodes"?
Easily switching to another image sounds to me like big trouble with quorum...
I don't know whether booting from iSCSI works, but the PVE system is small and easy to install. Use an SSD in each server and your cluster will run without trouble.
My main question is: should I install Proxmox on the server with the hardware RAIDs, add them as local storage, and then cluster the other Proxmox nodes with it?
Or should I install only OpenMediaVault on that server and Proxmox on the nodes? In that second scenario, what would be best to use for storage?

My guess is that there are options, but I need the best possible output from the RAIDs.
Is this setup for home use or production?

I have some PVE nodes with internal RAID cards (HDD + SSD) which run DRBD on RAID slices across two nodes. Works well.
If you use an internal RAID to boot the second node, you will have big trouble if the first node gets stuck...
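A two-node DRBD setup on top of RAID slices like the one described is usually declared in a resource file; a minimal sketch (the resource name, backing devices, hostnames and addresses here are placeholders, not from this thread):

```
# /etc/drbd.d/r0.res -- minimal two-node resource (hypothetical names)
resource r0 {
    protocol C;                 # synchronous replication between the two nodes
    device    /dev/drbd0;       # replicated block device exposed to PVE
    disk      /dev/sdb1;        # local slice of the hardware RAID
    meta-disk internal;
    on node1 { address 10.0.0.1:7789; }
    on node2 { address 10.0.0.2:7789; }
}
```

The replicated `/dev/drbd0` can then back an LVM volume group that both nodes add as shared storage.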

Udo
 
are these internal RAIDs or an external iSCSI RAID?
2 internal raid cards

How many nodes are "all pm nodes"?
40

Easily switching to another image sounds to me like big trouble with quorum...
I don't know whether booting from iSCSI works, but the PVE system is small and easy to install. Use an SSD in each server and your cluster will run without trouble.
Booting from iSCSI works fine for Linux and is somewhat tricky for Windows, but I've done both before.
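Diskless iSCSI boot of a Linux node is commonly done with an iPXE script handed out over DHCP/TFTP; a minimal sketch (the initiator and target IQNs and the portal address are made up for illustration):

```
#!ipxe
# Get an address, then attach the iSCSI LUN and boot from it
dhcp
set initiator-iqn iqn.2015-08.local.lab:node01
sanboot iscsi:192.0.2.10::::iqn.2015-08.local.lab:storage.node01
```

Pointing a node at a different target IQN is what makes the "switch the node to another image" idea work.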

Is this setup for home use or production?
production

I have some PVE nodes with internal RAID cards (HDD + SSD) which run DRBD on RAID slices across two nodes. Works well.
If you use an internal RAID to boot the second node, you will have big trouble if the first node gets stuck...

Udo

Yes, here is my biggest issue: I haven't used Proxmox before and I'm not sure what my best approach is.
Should the main file server be a Proxmox node that will not host VMs but only provide the main shared storage? Or should it not be part of the Proxmox cluster at all, with all nodes using its storage via... iSCSI, NFS, or whatever is best? That's my question :)
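For reference, either option ends up as entries in Proxmox's storage configuration; a sketch of what the two RAID-backed pools could look like (server address, export path and IQN are hypothetical):

```
# /etc/pve/storage.cfg -- hypothetical shared-storage entries
nfs: shared-hdd
    server 192.0.2.10
    export /export/vmstore
    path /mnt/pve/shared-hdd
    content images,backup

iscsi: shared-ssd
    portal 192.0.2.10
    target iqn.2015-08.local.lab:ssdpool
    content images
```

The same entries can be created from the GUI or with `pvesm add` on any cluster node.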
 
Hi,
it depends on your workload, but normally with virtualization the biggest bottleneck is IO. With 40 PVE nodes you need (and want) a really, really good IO subsystem.

And normally the storage should be redundant (you can build such a setup with ZFS/OpenSolaris - mir is the expert on such systems, if I'm right).

What kind of PVE nodes do you have? Why are you trying a diskless setup (blades?)?
With such a number of nodes I would normally use a Ceph storage system - every node gets some disks (e.g. 3-4), perhaps with an SSD for the journal (write speed; it should be a good one, like the Intel DC S3700).
With such a setup you have redundant storage that doesn't break if a single node dies.
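The Ceph setup described maps roughly onto the `pveceph` CLI; a sketch of the workflow (device names and the cluster network are placeholders, and exact option names vary between PVE versions):

```
# On the first node: install Ceph and initialize the cluster network
pveceph install
pveceph init --network 10.10.10.0/24

# On (at least) three nodes: create a monitor
pveceph createmon

# On every node with disks: one OSD per HDD, journal on the SSD
pveceph createosd /dev/sdb --journal_dev /dev/sdd
pveceph createosd /dev/sdc --journal_dev /dev/sdd
```

The resulting RBD pool is then added as shared storage for the whole cluster.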

Udo
 
The nodes are blades, yes. Their main purpose, which is not virtualization, has very low disk usage, so SSDs would be overkill. Diskless boot is mainly because of that, plus the option to remotely switch the boot image (something else or Proxmox), plus easy cloning/backup. I will look into Ceph, thanks.
 
