Cluster storage for a two-node primary/secondary case

mbaldini

Nov 7, 2015
Hi.

I'm building a two-node Proxmox setup for a scenario I call "primary/secondary": all the VMs (Windows on KVM) are active on one node, while the secondary node sits there with its VMs always turned off; they should be turned on MANUALLY only if the first node has problems. The VM storage obviously has to stay in sync.

I can use a third Proxmox installation just for quorum purposes, if needed.

The hardware I'm running on is low budget: 2x HDD but no hardware RAID controller (I usually use ZFS RAID1), 1x SSD I can put Proxmox on, an Intel Core i7 processor, no ECC RAM.

I usually use Proxmox in non-cluster configurations, so I don't really have a lot of knowledge about cluster best practices.

In this case, what would be the best storage to use (I can't use a SAN or other expensive technology)? I read a lot about Ceph, Gluster and DRBD, but I can't work out which would be better in my case.

Thanks
 
You could also use a dedicated storage server. I would recommend a storage server based on ZFS, used through the ZFS_over_iSCSI plugin. For this setup a Solaris-based storage server is preferable.
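For reference, a matching entry in /etc/pve/storage.cfg could look roughly like this (only a sketch: the storage name, portal address, pool and IQN are placeholders, and iscsiprovider comstar assumes a Solaris/COMSTAR target):

zfs: solaris-storage
    portal 192.168.1.10
    target iqn.2010-08.org.illumos:02:target0
    pool tank
    iscsiprovider comstar
    content images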
 
You could also use a dedicated storage server.
The customer won't pay for a third server. I can only use a low-performance third server with Proxmox installed, just for quorum if needed, but its hardware is not good enough to use as a storage server.
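Just to sketch that idea (cluster name and IP below are placeholders): the small box would join the cluster like any other node, it just never runs VMs, so all three machines vote for quorum.

# on the first node:
pvecm create mycluster
# on the second node and on the small quorum-only box:
pvecm add 192.168.1.1
# verify that all three nodes have a vote:
pvecm status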
 
With the disks available in the nodes I would drop the project. You will never get decent performance with 2x SATA, even with an SSD added for the journal.
 
Thanks, but I can't drop the project, the customer wants this.

My current Proxmox installations are all ZFS RAID1 on 2x SATA HDD; only the latest one has 1x SSD added for journal and cache, and they perform well.
Some of the older ones are running Proxmox 3.3 with LVM on top of mdadm RAID1 on 2x SATA, and they are still working nicely.

Are clustered storage filesystems really that much worse in performance than ZFS?
 
My setup should have two servers with a replicated filesystem, and I read that ZFS alone is not suitable for this. So I read about Ceph, Gluster and DRBD cluster storage; I'd need a cluster storage system that can handle multiple disks (2 in my case) in both hosts.
Reading some tutorials, I think Ceph would be the solution to use, since I can set up multiple physical disks shared among both hosts.
 
For your setup with a "cold" standby, there is pve-zsync on ZFS, which should do exactly what you want:
https://pve.proxmox.com/wiki/PVE-zsync
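As a quick illustration (the address, target dataset and job name below are placeholders), a job replicating VM 100 to the standby node would look roughly like this:

# create a recurring job (this also writes a cron entry), keeping 2 snapshots:
pve-zsync create --source 100 --dest 192.168.1.2:rpool/standby --name vm100job --maxsnap 2 --verbose
# or run a one-off sync by hand:
pve-zsync sync --source 100 --dest 192.168.1.2:rpool/standby --verbose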
That could really be what I need. I'll do some tests to see how it works, but reading the manual it seems right for my scenario. I can keep the storage of the two machines in sync (well, 15-20 minutes of difference is not a problem) and still keep the great advantages of ZFS (software RAID + ARC cache + L2ARC on SSD + log on SSD).
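For reference, that layout is just a mirrored pool with the SSD split into log and cache partitions - a minimal sketch, assuming sda/sdb are the HDDs and sdc1/sdc2 the SSD partitions:

zpool create tank mirror /dev/sda /dev/sdb
zpool add tank log /dev/sdc1      # SLOG for synchronous writes
zpool add tank cache /dev/sdc2    # L2ARC read cache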
Thanks for the great suggestion, I did not know about pve-zsync.

I generally agree with @mir about performance. I'll never get why people run VEs on two spinning disks - even with a SLOG on SSD.
ZFS with RAID1 is not good? What would be better for a low-budget (no HW RAID controller) system?
 
The bare minimum in my opinion would be 4 disks in RAID 10, plus a fast SSD (read: datacenter grade) for the log, in each node.
Eh, I'm actually at the budget limit with 2x SATA HDD plus 1 Samsung 850 Pro SSD. And it's working very well.


Don't blame ZFS if your system is dead slow, because it will be. Two slow SATA disks will not yield good performance. Technically, though, ZFS itself is fine!
I don't blame ZFS; in fact I have nearly 10 Proxmox servers with ZFS 2x SATA HDD in RAID1 with good performance, and only in the 2 latest servers did I add an SSD (Samsung 850 Evo) for cache+log, and performance is great.
Usually I only install 2 virtual machines (Win2003 + Win2012); on some servers there are a few more VMs (some other Windows 8/10, MikroTik RouterOS + The Dude), and I don't have a single problem with performance.

I was asking for a solution for a cluster of two nodes, in which one node is always on "standby": it only syncs the storage, and if the primary node fails, the VMs on the secondary can be manually started so work can continue with minimal downtime.

I was thinking about a cluster filesystem, but your suggestion about pve-zsync is very good.

I don't know why you and @mir insist on the slowness of ZFS or RAID1; I find it very good for my needs. So I asked whether, in your view, there could be a better solution with the hardware I can use, and it seems I am already using the best one. Thanks again.
 
2 VMs? Yeah, you will not have any problems. Maybe you do not need Proxmox VE, VMware or anything else. Just install 2008 with Hyper-V, run the one and only other machine on it, and you're good. You'll get better ROI and a simpler environment.
 
Sorry, but I really can't follow you. Why switch to another hypervisor when I have 16 other Proxmox installations and know it well (at least, for what I need to do)? I try to keep my installations as similar to each other as possible.
For this customer it's the first time I've needed a more complex setup, and it seems to me that Proxmox can handle it very well. I'm setting up 2 Proxmox servers in the lab to try pve-zsync and run some tests, but your suggestion looks like the right way to go for me.
In the meantime I'm going to try a Ceph storage too and do some performance tests (I usually use pveperf, and CrystalDiskMark on Windows, to test VM performance). But I don't really think I need that.
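For anyone repeating such tests: pveperf benchmarks the filesystem behind the path you give it, so to measure the VM storage rather than the root disk, point it at the ZFS dataset (the path below assumes the default rpool layout):

pveperf /rpool/data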
 
You don't really need so much complexity for just 2 VMs. The main idea of virtualization is to use resources more efficiently: you consolidate more machines onto less hardware while increasing availability and manageability.

But to wrap up: for your very simple setup, ZFS on two disks for two VMs is enough.
 
You don't really need so much complexity for just 2 VMs. The main idea of virtualization is to use resources more efficiently: you consolidate more machines onto less hardware while increasing availability and manageability.
Yes, that's right; in fact in some installations I have 6-7 VMs, but no more. However, I think that if the customer's needs grow, a virtual server can grow more easily in the future than adding physical servers. And a virtualized environment IMHO is better even for tasks other than resource management, like backup/restore (the PVE integrated backup/restore is great, even if not as good as Veeam for VMware), migration of VMs to a new server, snapshot/clone, and so on.

But to wrap up: for your very simple setup, ZFS on two disks for two VMs is enough.
I'm happy that you can confirm that my choice is good. Thanks
 
Hi mbaldini,

unfortunately DRBD is not production ready yet, otherwise I would recommend it to you. For small setups (two nodes, not many disks) it would personally be my choice No. 1, but it's not usable yet. Ceph and Gluster are not applicable in your case because they always need at least three nodes. The proposed third storage server with iSCSI/NFS would indeed be a good option too, because you avoid all the syncing, but on the other hand it would not perform any quicker than DRBD if you use the same TCP/IP connection for it, and you would get another single point of failure.

The main question is: is it acceptable for you not to have a 100% hot copy of the VMs? With pve-zsync you only perform a regular sync on a defined schedule, while Gluster/DRBD/Ceph always perform synchronous writes on all nodes and wait until all write operations have been confirmed.
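To make that trade-off concrete: a pve-zsync job is driven by a plain cron entry (pve-zsync create drops one into /etc/cron.d/pve-zsync), so the replication window is simply the cron interval. A sketch with placeholder VM ID, address and names:

*/15 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:rpool/standby --name vm100job --maxsnap 2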

Regarding the performance of ZFS on only two disks, I can't give any first-hand experience yet, but I would agree that it should work in small setups too. The question is always what you put on it and what you expect of it. It will never be comparable to a huge setup with 8 or more 10k SAS disks and a lot of cache on SSDs - but it does not have to be. Of course it cannot magically become better than the hardware you build it on, but there are many other ways to really screw your configuration up and lose a lot of performance ;-) For some really nice advice see: http://open-zfs.org/wiki/Performance_tuning
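One example from that page that matters on small hosts: ZFS on Linux lets the ARC grow to half of the RAM by default, which competes with VM memory. Capping it is a one-liner (the 4 GiB value below is only an example, pick it for your RAM):

echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u    # rebuild the initramfs, then reboot to apply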

I hope I have neither confused you further nor given any wrong information - in either case, correct me or ask further.

Cheers, Johannes
 
