One target iSCSI - Two Proxmox VE 3.1 servers - HA ?

superwemba

New Member
Oct 14, 2013
Good afternoon everybody.

1-] Apologies for my bad English (I'm French) :p

2-] For approximately one week now I have been testing Proxmox VE 3.1. I want to determine whether Proxmox can do HA with only one iSCSI target and two Proxmox servers as initiators.
For this, I want to store my VMs on the iSCSI share (shared between the two Proxmox servers), and if one Proxmox server fails, the second takes over the VMs.

3-] I've seen a lot of solutions (like here -> http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster) using two nodes and DRBD, but that's not really what I want to do. I don't want to use DRBD, and the System Requirements section mentions that I can do what I want with a SAN network. iSCSI is a SAN network.

4-] I tested OCFS2 and GFS2 on standard Debian Wheezy and they work perfectly, but not on the Proxmox kernel. I know why OCFS2 doesn't work, but I tried GFS2 and it seems it doesn't work either. Why?

It would be nice if someone could tell me whether one iSCSI target with two Proxmox servers as initiators is a workable configuration for HA.

Thank you very much

superwemba
 
Hey...

1] No need to apologise!

2] I have a similar setup to yours (though I also have a third "dumb" node that just sits around to help with quorum). I followed this:
http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing

and it worked for me. I can enable HA for a VM, and if one of my nodes goes down the VM migrates.

If you're using a SAN you will need to configure multipath (see: http://pve.proxmox.com/wiki/ISCSI_Multipath ) and also have fencing configured (see http://pve.proxmox.com/wiki/Fencing ). For fencing I am using an APC 7921 PDU, which works well and is documented.
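(As a rough illustration of what the wiki walks you through: an /etc/multipath.conf on each node might look something like the sketch below. The WWID is a placeholder, and the exact settings should come from your SAN vendor's recommendations.)

```shell
# /etc/multipath.conf -- minimal sketch, values are illustrative only
defaults {
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    failback             immediate
    no_path_retry        queue
    user_friendly_names  yes
}

blacklist {
    # Blacklist everything by default...
    wwid .*
}

blacklist_exceptions {
    # ...then whitelist only your LUN(s); find the WWID with
    # /lib/udev/scsi_id -g -u -d /dev/sdX  (placeholder WWID below)
    wwid "3600a0b800012345600000000deadbeef"
}
```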

If you can get your hands on another box, it doesn't have to be powerful, just to assist with quorum and fencing, I'm sure it will save a lot of heartache.

4] These different filesystems surpass my knowledge, sorry!


Hope that helps?
 
Hey, thank you very much Extcee!

I have just one question: why do you need to use multipath, and where do you install it?

Thank you so much

superwemba
 
From the wiki, as it explains this better than I can: the main purpose of multipath connectivity is to provide redundant access to storage devices, i.e. to retain access to the storage device when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example of multipathing is an iSCSI SAN-connected storage device. You get redundancy and maximum performance.

Follow http://pve.proxmox.com/wiki/ISCSI_Multipath , and the config for your Dell SAN will be http://pve.proxmox.com/wiki/ISCSI_Multipath#Dell
http://www.dell.com/downloads/global/power/ps3q06-20060189-Michael.pdf
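(To answer the "where" part concretely: multipath lives on the Proxmox nodes themselves, the initiators, not on the target. A rough sketch of the node-side steps, with a placeholder portal IP, might look like this; the exact procedure is in the wiki pages above.)

```shell
# Run on each Proxmox node (the initiators). The portal IP is a placeholder.
apt-get install multipath-tools

# Discover the target and log in; with two NICs/paths configured you
# should see the same LUN appear twice (e.g. /dev/sdb and /dev/sdc).
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m node --login

# Verify that multipathd has grouped both paths into one device
# under /dev/mapper/, which is what you then use for the LVM group.
multipath -ll
```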
 
Hey Extcee, thank you, my HA cluster is running! (With manual fencing.)

So I have one last question: my VMs migrate fine, but they restart, and I want to know if it is possible to have no interruption?

superwemba
 
Congrats :)

Are you migrating online?

I just migrated one of my VMs from Node1 to Node2 online and the log shows:

<code>
Oct 19 21:47:32 starting migration of VM 401 to node 'proxnode2' (10.10.1.52)
Oct 19 21:47:32 copying disk images
Oct 19 21:47:32 starting VM 401 on remote node 'proxnode2'
Oct 19 21:47:34 starting migration tunnel
Oct 19 21:47:35 starting online/live migration on port 60000
Oct 19 21:47:35 migrate_set_speed: 8589934592
Oct 19 21:47:35 migrate_set_downtime: 0.1
Oct 19 21:47:37 migration status: active (transferred 91897711, remaining 3602280448), total 4303814656)
Oct 19 21:47:39 migration status: active (transferred 171642702, remaining 3518902272), total 4303814656)
Oct 19 21:47:41 migration status: active (transferred 314050024, remaining 3369996288), total 4303814656)
Oct 19 21:47:43 migration status: active (transferred 446878032, remaining 2847002624), total 4303814656)
Oct 19 21:47:45 migration status: active (transferred 544398863, remaining 2748956672), total 4303814656)
Oct 19 21:47:47 migration status: active (transferred 642870823, remaining 2648551424), total 4303814656)
Oct 19 21:47:49 migration status: active (transferred 657151644, remaining 2522980352), total 4303814656)
Oct 19 21:47:51 migration status: active (transferred 699390896, remaining 7380992), total 4303814656)
Oct 19 21:47:51 migration status: active (transferred 716385181, remaining 0), total 4303814656)
Oct 19 21:47:52 migration speed: 240.94 MB/s - downtime 65 ms
Oct 19 21:47:52 migration status: completed
</code>

The VM was "paused" for a few ms but resumed straight away.

Are yours shutting down? Have you got ACPI enabled? What is the guest OS? Does this happen on all VMs? Can you post a log?
 
Hi Extcee,

Online migration works and I have no interruption, but if my node A crashes the VM migrates to node B; in this case there is an interruption and the VM restarts on node B (which is already good).

I want to know if it's possible to have no interruption when node A crashes. What happens in that case in your configuration?

Thanks

superwemba
 
I don't think you can have that with hardware HA: if you are looking for continuity, you should set up two VMs, one on each node, clustered together (redundancy, load balancing, etc.)

Marco
 
Hi marco,

OK, I trust you, so it's perfectly fine like that ;)
I think my problem is solved.

Special thanks to Extcee and marco

superwemba
 
