Using 2 storage servers and 6 nodes.

BinaryCrash

Hello,

I am new to Proxmox, and I read that it is possible to use a storage server and export it to the nodes as iSCSI or NFS, but I have two storage servers.

I would like all 6 nodes to use the storage servers, but I would like to keep VMs running if one storage server fails.
Is there a way to build an (HA) environment where I have not only node redundancy but storage redundancy as well?
I was reading about storage replication, but it was confusing to me. I was looking for real-time replication.
Or, if that is not possible, then have the backup (or a copy) stored on the other storage server so I can bring it up there.

To be more specific, I have:
6 nodes (HP DL360 G7 with a 2x 10G SFP+ card and two SSDs in RAID 1, for the operating system only)
1 (or 2 if needed) switch with 16x 10G SFP+ ports
2 storage servers (HP DL380p G8 with 2x 10G SFP+ ports, 12 HDDs, and maybe a PCIe card for NVMe)

They are not configured yet; I will receive the hardware in a week.

It would be very cool to have VMs running in this cluster with storage protection as well, and with the VMs using thin provisioning, but I really don't know how to achieve this.
Would you please help me by answering whether this is possible? And if so, point me in the right direction?

I forgot to mention that the second storage server has more RAM, so I will host VMs there as well, but probably not on the first one.
The 6 nodes will be in an HA group, restricted so that VMs do not migrate onto the second storage server's CPU, and the second storage server will be in a group where it is the single preferred node, so the VMs assigned there stay on its CPU while it is available but migrate to the 6-node group when the second storage server fails.
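If I understand the HA documentation correctly, something roughly like the following is what I have in mind (the group and node names below are just placeholders I made up):

# restricted group: these VMs may only run on the 6 compute nodes
ha-manager groupadd compute-only --nodes "node1,node2,node3,node4,node5,node6" --restricted 1

# non-restricted group: prefer the second storage server, fall back to the compute nodes if it fails
ha-manager groupadd prefer-storage2 --nodes "storage2:2,node1:1,node2:1,node3:1,node4:1,node5:1,node6:1" --nofailback 1

# assign a VM to a group (VMID 100 is just an example)
ha-manager add vm:100 --group prefer-storage2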

Really appreciate the help.
 
You are in a unique situation where you have total control over your storage servers, is that right?

If you want to split your configuration:

You can create a DRBD-based storage setup that replicates the data between the two servers, with each storage server exporting its contents via iSCSI to your cluster. On all your nodes you will have to configure multipath and use the iSCSI LUNs directly, putting LVM on top of them; then you will have a fault-tolerant system. The built-in per-VM iSCSI support in PVE does not support multipath yet (at least to my knowledge), so with that alone you do not have fault tolerance. An HA-NFS solution would also be possible, with multipathed NFS on a distributed shared-storage cluster filesystem like GFS or OCFS2.
The NFS solution would be the only one supporting thin provisioning, via QCOW2.
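Very roughly, and only as a sketch (the portal addresses, device and volume group names below are made up), the node-side part of the iSCSI/multipath/LVM variant would look something like this:

# on every PVE node: install the iSCSI initiator and multipath tools, log in to both portals
apt install open-iscsi multipath-tools
iscsiadm -m discovery -t sendtargets -p 10.0.0.11
iscsiadm -m discovery -t sendtargets -p 10.0.0.12
iscsiadm -m node --login

# the same LUN now shows up via two paths; multipath merges them into one device
multipath -ll

# once, on one node: put LVM on the multipath device and add it as shared storage
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha
pvesm add lvm shared-lvm --vgname vg_shared --shared 1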

Another route would be CEPH, but you would need three storage servers for that. Which brings me to the next question:
Who decided to get that specific hardware? Normally you would go with CEPH and not with dedicated storage servers (unless you already have an HA SAN).
 

Yes, I have total control of my storage servers.
I originally had only one storage server, but then I decided to get another one, so I was wondering whether the second one could give me more fault-tolerance options, or store the backups of the other storage server, so that in case of a disaster on one of them I could recover the VM backups saved on the other.

Thin provisioning is a must here.
 

Always keep the time-to-recovery in mind. With respect to time-to-recovery, HA always beats recovering from backups.


Since thin provisioning is a must, you can only use NFS with QCOW2, and you have the hassle of setting everything up yourself. CEPH would be the easiest method and would only require the 6 nodes for computation.
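For what it's worth, adding an NFS export as a disk-image storage is a single command (the storage name, server address and export path below are placeholders), and you then pick qcow2 as the disk format when creating VM disks, which is what gives you the thin provisioning and snapshots:

pvesm add nfs nfs-vmstore --server 10.0.0.11 --export /export/vmstore --content images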
 

I do not expect these storage servers to go down, but I would like to have the backups on the other storage server for safety, even if it has an impact on time-to-recovery in case of a critical storage failure.

Could you point me to where I can start reading about how to do this with NFS + QCOW2? I have never done it before, and I have never used Proxmox before.
Thank you.
 


This is not a Proxmox issue, this is a storage issue. NFS means Network File System, so first learn about that; NFS has many special features. QCOW2 is just a virtual disk image format, it does not bring much more than that by itself, and you cannot use some of the special QCOW2 features on Proxmox anyway.

An NFS-based setup means you will have two separate storages in your cluster, so if one NFS server goes down you cannot access that storage's content. Using 6 nodes for HA with standalone NFS servers is pretty much pointless.

So use GlusterFS; it is the best fit for your topology. You can create a replicated volume for the more important machines and a distributed volume for the less important machines, or use many different combinations. Proxmox already has a GlusterFS storage plugin that works for HA, and you can add your servers to the cluster as GlusterFS servers. Better than NFS, iSCSI or anything else.

ZFS backend + GlusterFS server + Gluster client...

https://docs.gluster.org/en/latest/
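As a rough sketch only (host names, brick paths and the volume name below are just examples), a simple 2-replica volume shared with Proxmox would look roughly like this:

# on the storage servers: peer them and create a replicated volume from one brick per server
gluster peer probe stor2
gluster volume create vmvol replica 2 stor1:/data/brick1/vmvol stor2:/data/brick1/vmvol
gluster volume start vmvol

# on the Proxmox side: add the volume as GlusterFS storage, with the second server as backup
pvesm add glusterfs gfs-vmstore --server stor1 --server2 stor2 --volume vmvol --content images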
 
Thank you very much!

With GlusterFS, if one storage server is offline, do the VMs on the replica storage keep working and stay online?
Can I still use thin provisioning with GlusterFS?

What about backups and snapshots? What do you recommend?
 
GlusterFS supports many different setups:

1. If you build your GlusterFS system in 2-replica mode, the two servers work like RAID 1, so whatever protection RAID 1 gives you, GlusterFS gives you that. For split-brain problems GlusterFS has an arbiter, and the arbiter costs nothing, meaning you can use any computer as the arbiter (it never stores any data, only file names, create/access times and the directory structure). GlusterFS also supports tiering, meaning hot and cold data are handled, so you can use it as a disk cache.
2. GlusterFS is a file-based system; you cannot use it as a block-based system, so thin provisioning is not supported directly by GlusterFS. That is why I suggested the ZFS backend: ZFS has compression, which means ZFS never writes zeros to disk, so ZFS + GlusterFS gives you thin provisioning without any explicit thin-provisioning feature (see the sketch after this list).
3. Backup is a totally different issue. In replica mode GlusterFS already keeps one extra copy of your data, but if you are talking about backups of the guest filesystem you can use Proxmox; its backup system already has scheduling, so if anything damages the guest operating system you can restore from a Proxmox backup. (The Proxmox backup system works from a snapshot, so a backup does not interrupt your guest.)
4. Snapshots? If you are talking about GlusterFS snapshots, a file-based system cannot do that, but ZFS, as a COW-based block system, can, and ZFS snapshots cost almost nothing, so you can use those. I think you can find some documentation on the internet. If you are talking about guest snapshots, Proxmox can do that.
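A minimal sketch of points 1 and 2 (the pool, brick and host names below are only examples): a variant of the volume creation with ZFS-backed bricks and an arbiter brick on a third, small machine:

# on each storage server: a compressed ZFS dataset as the GlusterFS brick
zfs create -o compression=lz4 tank/brick1

# a replicated volume with a data-less arbiter brick on a third, small machine
gluster volume create vmvol replica 3 arbiter 1 stor1:/tank/brick1/vmvol stor2:/tank/brick1/vmvol arbiter1:/data/arbiter/vmvol
gluster volume start vmvol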


Red Hat suggests this feature set for virtualization with QCOW2 files, but you can create many different types of volume on GlusterFS for many different kinds of job.

gluster volume reset VOLUMENAME
gluster volume set VOLUMENAME quick-read off
gluster volume set VOLUMENAME read-ahead off
gluster volume set VOLUMENAME io-cache off
gluster volume set VOLUMENAME stat-prefetch off
gluster volume set VOLUMENAME remote-dio on
gluster volume set VOLUMENAME eager-lock enable
 
  • Like
Reactions: BinaryCrash
What if I get another storage server, so 3 storage servers in total?

6 nodes
3 storage servers

What would be the recommendation? CephFS on the storage servers?
Does this give me a fault-tolerant cloud?
 
That is your choice :) I love GlusterFS because it is file-based, so repair and file recovery are easy :).
 

If you have the supporting network capacity/design, then 3 nodes is the minimum that CEPH can safely run on.
 

2 dedicated switches with 16x 10G SFP+ ports for the Proxmox servers.
Each server has a 2-port card.

Will the storage using CephFS on the 3 storage servers be accessible to VMs running on nodes 1 to 6?
How do I do it?
 

Correct. You need to set up Proxmox on all the servers and add them to a single cluster. You can then do most of the CEPH setup within the Proxmox GUI now.

However, if this is for production I'd still suggest reaching out to a consultant, or paying for a Proxmox subscription with support, to make sure you are set up correctly if you are not sure or have never used CEPH before.
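Very roughly, and only as a sketch of the CLI equivalent of the GUI steps on a current PVE (the network and device names below are examples):

# on each of the 3 storage nodes, after they have joined the Proxmox cluster
pveceph install
pveceph init --network 10.10.10.0/24    # run this only once, on the first node
pveceph mon create
pveceph osd create /dev/sdb             # repeat for each data disk
pveceph pool create vmpool              # then attach the pool as RBD storage in the GUI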
 

Are you interested in a paid review?
I plan to set up an emulated environment to test the configurations until all the hardware gets here.

Or, if anyone else reading this is interested... please let me know.
 
