Proxmox VE 4.0 beta1 released!

Is there a way, for now, to test a VE 4.x cluster and HA with 2 nodes, like the "Two-Node High Availability Cluster" setup for Proxmox 3.x?

If you want reliable HA - and if you need HA you always want reliability - you need at least 3 nodes. This is true for a 2.x, 3.x and also for a 4.x HA cluster: with only two nodes, a single failure leaves the survivor with one vote out of two, which is not a majority, so it loses quorum and cannot safely take over.

A two-node setup is NEVER optimal and not recommended. If you have just two boxes, just run a cluster without HA.
 
Thanks for the reply... I know that HA with two nodes is suboptimal. I only wanted to know if it is possible to configure it for testing, like in Proxmox 3.x. Is it?
 
There is no more quorum disk support in Proxmox 4 (corosync 2 doesn't implement it), so no, it's not possible.
 
Unicast corosync makes testing in virtual machines easier and allows for small networks without fancy switches.

Note to anyone else reading this: for testing on Linux you can "echo 1 > /sys/devices/virtual/net/br0/bridge/multicast_querier" to enable the multicast querier on the bridge.
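For anyone who wants to go the unicast route in 4.x, here is a minimal sketch of what the relevant parts of corosync.conf (the cluster-wide copy normally lives at /etc/pve/corosync.conf in 4.x) could look like; the cluster name, node addresses and IDs below are just placeholders, not something from this thread:

totem {
    version: 2
    cluster_name: testcluster
    # use unicast UDP instead of multicast
    transport: udpu
}

nodelist {
    node {
        ring0_addr: 10.0.0.11
        nodeid: 1
        quorum_votes: 1
    }
    node {
        ring0_addr: 10.0.0.12
        nodeid: 2
        quorum_votes: 1
    }
    node {
        ring0_addr: 10.0.0.13
        nodeid: 3
        quorum_votes: 1
    }
}

quorum {
    provider: corosync_votequorum
}

With transport set to udpu every node has to be listed explicitly in the nodelist, which is exactly what makes it handy for small test setups in VMs.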
 
Unicast corosync makes testing in virtual machines easier and allows for small networks without fancy switches.
Multicasting works with VMs, too. And since when are switches a problem there?
 
The problem I face with a 2-node cluster is that if one node is down (and I can physically see it is down), starting the VMs on the surviving one can't be done through the web interface (so the customer has to call for assistance and wait until it can be provided).
Has a way to solve this for the "adventurous" people :) been provided in 4.0? If the user is in front of the failing node and is told to unplug its power and then start the VMs through the web interface (no, 99% of users can't SSH into the box and run a script or cp a VM config), that's a fair situation, isn't it?
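For completeness, the "script" in question usually boils down to a handful of commands on the surviving node, to be run only once you are absolutely sure the other node is powered off (node names nodeA/nodeB and VMID 100 below are made up for illustration):

# tell the surviving node that a single vote is enough for quorum
pvecm expected 1

# take over the VM config from the dead node
mv /etc/pve/nodes/nodeB/qemu-server/100.conf /etc/pve/nodes/nodeA/qemu-server/

# start the VM on the surviving node
qm start 100

That obviously doesn't help a customer who can't SSH in, but it shows how little is actually involved.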
 
Hello, and congratulations on choosing LXC for containers!
At last, an old kernel won't be needed just to serve OpenVZ.

But there is one thing you could change or improve.

The LXC developers have written a new hypervisor for LXC: LXD.
For you, LXC is a new feature that you will be adding to Proxmox VE,
but LXC will be migrating to LXD.

In my opinion, adding LXD to Proxmox VE could be a good solution.

What do you think about it?

Regards

Grzesiek
 
LXD is only the daemon, and we have our own daemon.
 
When it is stable there will be an update path.
 
OK, thanks.
At the moment I have some problems with the installation and the screen resolution; I can't see the Accept or Next button. Strange problem.
 
Use Alt+g or Alt+n.
What do you use for viewing?
 
Hi,
what did you mean by active failover?
Did you mean that a VM is replicated to a hot standby?

Is such a thing even possible with KVM? And by such a thing I mean a feature comparable to VMware Fault Tolerance (where a shadow VM is kept running in memory sync on another host).

Kind regards,
Caspar
 

New Proxmox VE HA Manager for High Availability Clusters


The brand new HA manager dramatically simplifies HA setups – it works with a few clicks on the GUI, test it! And it eliminates the need for external fencing devices; watchdog fencing is configured automatically.

Great news, I would always find a hole to fall into when setting up fencing and end up getting things all whacked.
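For those who prefer the shell, the new HA stack also comes with a ha-manager command line tool; a minimal sketch (VMID 100 and the target node name are placeholders, and details may still change during the beta):

# put VM 100 under HA manager control
ha-manager add vm:100

# show the current HA resources and manager state
ha-manager status

# ask the HA manager to migrate the resource to another node
ha-manager migrate vm:100 node2

The clicks in the GUI end up creating the same HA resource entries, so either way works.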
 
AFAIK some guys are working on that, but it is not stable.
And for this feature you need an extremely high-performance network.
 
This is exciting news - I am intrigued to test out LXC support and simplified HA.

On the topic of HA, I see that software-based fencing (watchdog) is recommended. How does this address the concern of a node being disconnected from the network and going rogue (e.g. if the watchdog daemon dies during the 60-second window)? Is it possible to configure hardware-based fencing as well (e.g. with a network-accessible PDU or UPS) and have the cluster prefer using the softdog but fall back to using the hardware STONITH if necessary?
 
How does this address the concern of a node being disconnected from the network and going rogue (e.g. if the watchdog daemon dies during the 60-second window)?

There is no watchdog daemon (it is a watchdog timer ...)

Is it possible to configure hardware-based fencing also

Yes, the plan is to support both. But the watchdog feature works great so far, so we have not implemented it yet.
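One more pointer for testers: by default the HA manager uses the Linux softdog, but if the hardware has a proper watchdog (e.g. IPMI) it can be used instead. In current 4.x packaging that is selected in /etc/default/pve-ha-manager; whether beta1 already exposes it exactly like this is an assumption on my part:

# /etc/default/pve-ha-manager
# select the watchdog module to load (softdog is the default)
WATCHDOG_MODULE=ipmi_watchdog

A reboot of the node afterwards makes sure the right module gets loaded. Note this is still a watchdog, not the external STONITH fallback asked about above.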
 
Still, isn't there a state where the watchdog timer could be rendered inoperable, and thus the rogue node would continue running?

I would be happy to help beta test out a fall-back hardware STONITH device - please let me know when this feature is available!
 