DRBD and OpenVZ

zordio

New Member
Jun 22, 2009
I'd like to set up Proxmox servers with shared storage, to make migration faster and failover possible. If my understanding is correct, this is possible with KVM, but OpenVZ doesn't support it.

So, is it possible to try OpenVZ on DRBD anyway? I know that migrating from the web UI won't work, but would it be possible to just suspend a container on one server and resume it on the other?
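For reference, OpenVZ does have checkpoint/restore on the command line, which is roughly what a manual "suspend here, resume there" would look like. A hedged sketch only: the container ID (101), mount point and dump path are placeholders, and the DRBD-backed filesystem must already be promoted and mounted on the target node before restoring.

```shell
# On node1: checkpoint (suspend) container 101 and dump its state.
# The dump file lands on the DRBD-backed filesystem so node2 can read it.
vzctl chkpnt 101 --dumpfile /mnt/drbd-node1/dump.101

# After demoting DRBD on node1 and promoting + mounting it on node2,
# on node2: restore the container from the same dump file.
vzctl restore 101 --dumpfile /mnt/drbd-node1/dump.101
```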
 

OpenVZ uses a mounted ext3 filesystem. So you need to make sure that only one host has access to that filesystem at a time.
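In practice that single-host rule is enforced by never having the filesystem mounted on more than one node: unmount and demote on the old node before promoting and mounting on the new one. A sketch of the handover, assuming a resource named r0 on /dev/drbd0 (both names are placeholders):

```shell
# On the node giving up the filesystem:
umount /mnt/drbd-node1
drbdadm secondary r0     # demote; DRBD now refuses writes from this node

# On the node taking over:
drbdadm primary r0       # promote; fails if the peer is still Primary
mount /dev/drbd0 /mnt/drbd-node1
```

In Primary/Secondary mode DRBD itself blocks a second mount, since the Secondary's block device is not writable.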
 
I believe I see now. So my best option for now is to have one partition for OpenVZ and one for KVM.
 

pve-2.0 will support this, I think. Are we near a beta for 2.0?

I was thinking of this setup; does it make sense?

node 1 will have 2 ext3 filesystems over DRBD:

/mnt/drbd-node1
/mnt/drbd-node2


node 2 will have:

/mnt/drbd-node1
/mnt/drbd-node2

I will be running node 1's containers from the local dir /mnt/drbd-node1, and node 2's containers from the local dir /mnt/drbd-node2.
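That split (one DRBD resource per node's containers, each Primary on its "home" node, promoted on the survivor after a failure) is the usual way to do this without a cluster filesystem. A rough drbd.conf sketch in DRBD 8 syntax, where the resource names, hostnames, disks and addresses are all placeholders:

```
# /etc/drbd.conf (sketch)
resource r0 {                       # backs /mnt/drbd-node1, normally Primary on node1
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

resource r1 {                       # backs /mnt/drbd-node2, normally Primary on node2
  protocol C;
  on node1 {
    device    /dev/drbd1;
    disk      /dev/sda4;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd1;
    disk      /dev/sda4;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Each resource stays Primary on only one node at a time, so plain ext3 is safe; after a node failure you promote the dead node's resource on the survivor and mount it there.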

In case I lose node 1 or node 2, I will be able to start all containers from one node ...

But I'm not sure if I can mount both ext3 filesystems on both nodes, even if only one node will be writing to each FS at a time ... Probably not, right?

Which FS would allow this ?

My other option is to move all my containers to KVM, but we still have a lot of servers that don't have the Intel or AMD virtualization extensions that KVM requires ..... hmm

Any advice on having 2 nodes with manual failover capability would be very interesting ....

Maybe 2.0 is close to ready?

Thanks for any info on this matter.

Guillaume.
 
I have tried a dual-primary configuration with OpenVZ.

We used the OCFS2 filesystem to have both primaries be able to mount the DRBD resource at the same time.
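For completeness: DRBD refuses a second Primary unless dual-primary is enabled explicitly, so an OCFS2 setup like this also needs a net-section option. A fragment in DRBD 8 syntax (resource name is a placeholder):

```
resource r0 {
  net {
    allow-two-primaries;        # required before both nodes can be Primary
  }
  startup {
    become-primary-on both;     # promote both nodes automatically at startup
  }
}
```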

While the drives mounted correctly and there was no data corruption, we found that our I/O waits were constantly above 50%, which caused the load average to exceed 100 on an average day for 10 VZ containers.

I would stick to having DRBD in Primary/Secondary mode and a standard filesystem mounted on one of the nodes. Useful for Heartbeat style failovers, useless for a shared storage system.

Just my two pence worth.
 
