Storage Model on PVE 3.1

screenie

Hello,

I have a PVE 1.9 cluster in production and I'm thinking about setting up a new PVE 3.1 cluster and moving the VMs over to the new 3.1 version.
On PVE 1.9 I'm using LVM on DRBD with 2x 10 Gbit/s between the nodes for the replication, because I do not have external storage like NFS or iSCSI.
That means only my KVM-based VMs are replicated to the second node - the OpenVZ VMs I have to restore on the second node from my hourly backups if the primary node fails.
My first question is whether I should again set up LVM on DRBD like on my PVE 1.9 setup, or is there a better way to do that on PVE 3.1?
The second question would be how to migrate from PVE 1.9 to PVE 3.1.

Does anyone have some hints for me, or is there some documentation available that someone can point me to?

Thanks.
Alex
 
Hi Alex,
the storage model hasn't changed - LVM on DRBD is still a good choice with 3.1.
The only thing: it's recommended to use at least 3 nodes because of quorum.
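For reference, the LVM-on-DRBD part itself is set up the same way as before; a minimal sketch, assuming a primary/primary DRBD resource on /dev/drbd0 (device and VG names are just examples):

  pvcreate /dev/drbd0          # run once, on one node, on top of the DRBD device
  vgcreate drbdvg /dev/drbd0   # the volume group the VM disks will live in

Then add 'drbdvg' as an LVM storage in the PVE GUI and mark it as shared, so both nodes use the same volume group.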

Regarding the update: backup, fresh installation and restore is one possibility, or the upgrade script (that works with 2.x - I don't know if you can upgrade directly to 3.1).
Due to the many changes I would do a fresh install (you can leave your DRBD devices intact and do the migration by hand, with a short downtime - there are some changes in the VM config, but they are easy to apply).
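For example, the network line in the config changed between 1.x and 2.x/3.x; roughly like this (MAC and bridge are placeholders - compare with a freshly created VM on 3.1 to be sure):

  # PVE 1.x (/etc/qemu-server/<vmid>.conf):
  vlan0: virtio=AA:BB:CC:DD:EE:FF
  # PVE 3.x (/etc/pve/qemu-server/<vmid>.conf):
  net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0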

Udo
 
Hi Udo,
Thanks for your reply - 3 nodes could be an issue, since I do not have 10G switches; the 2x 10G interfaces on each node are connected directly to the other node.
I could try bridging 3 nodes together with 2x 10G interfaces each - not sure if that will work...
I assume the need for 3 nodes because of quorum comes from RHCS? I have no experience with RHCS yet...
I wonder what troubles I could run into if I stay with only 2 nodes.
The update would not be a problem - but I'm not going that way, I also prefer a fresh install as I like to have things clean from the beginning - the question is whether I can do a backup on 1.9 and a restore on 3.1 without big issues.
So: move everything to the master node of the 1.9 cluster, wipe the secondary node and do a fresh 3.1 setup, create the cluster config and do the backup/restore from 1.9 to 3.1 - this should work, or?


thx,
Alex
 
Just found 'Two nodes cluster and quorum issues' in the PVE 2.0 wiki pages - I understand that part now...
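If I read it right, with only 2 nodes the surviving node loses quorum when the other one dies and /etc/pve becomes read-only; the manual workaround seems to be to lower the expected votes on the survivor, something like:

  pvecm status       # shows nodes, votes and quorum state
  pvecm expected 1   # temporarily tell the cluster that one vote is enough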

So I have a question on that - instead of spending money on a 3rd server (which is not needed for the number of VMs and the workload), is it possible to run the dummy node as a VM on the master node?
If the master node dies, the dummy node should fail over to the remaining node and everything is fine, or?

Alex
 
Hi,
you can use a cheap node as the third one, something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16859107921
(you can find them cheaper on the web).
But this is not easy without a 10 Gbit backbone (and only a 1 Gbit connection for the third node).

Udo
 
...
The update would not be a problem - but I'm not going that way, I also prefer a fresh install as I like to have things clean from the beginning - the question is whether I can do a backup on 1.9 and a restore on 3.1 without big issues.
So: move everything to the master node of the 1.9 cluster, wipe the secondary node and do a fresh 3.1 setup, create the cluster config and do the backup/restore from 1.9 to 3.1 - this should work, or?
...
Yes - but in this case you don't need a backup/restore (although a good backup is always good). If the DRBD devices are in sync on the 3.1 host, you can copy the VM config to the 3.1 host (new destination: /etc/pve/qemu-server/), change the network settings (look at a freshly created VM config for the changes), stop the VM on the PVE 1.9 host and then start the VM on the new node. That's it.
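A rough sketch of those steps, assuming VMID 100 and 'pve31' as the new host (names are examples; double-check the config against a freshly created VM):

  cat /proc/drbd                                                   # on both nodes: resource should be Connected/UpToDate
  scp /etc/qemu-server/100.conf root@pve31:/etc/pve/qemu-server/   # old 1.9 location -> new location
  # on pve31: edit 100.conf, e.g. vlan0: virtio=<mac>  ->  net0: virtio=<mac>,bridge=vmbr0
  qm stop 100    # on the old 1.9 node (then move its old config aside so it can't be started there by mistake)
  qm start 100   # on the new 3.1 node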

Udo
 
Just to note, you cannot COPY a VM config inside /etc/pve/..., you need to MOVE it. There is a protection there to prevent identical VMIDs.
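So within a 3.1 cluster, reassigning a (stopped) VM to another node would look something like this (node names and VMID are examples; /etc/pve/qemu-server/ on each node is just a link to that node's directory):

  mv /etc/pve/nodes/nodeA/qemu-server/100.conf /etc/pve/nodes/nodeB/qemu-server/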
 
OK, thanks guys - I will add a third 1U node; maybe I have to switch to 1G interfaces, because buying 2x 10G switches is not in the budget.
 
The KVM-based VMs are on LVM on the DRBD device and replicated - the OpenVZ VMs I currently have on only one of the nodes.
To have the OpenVZ VMs also replicated and available for automatic fail-over if the node they are running on fails - which storage type do I need for that?
Is GlusterFS or Sheepdog implemented for that?

thx,
Alex
 
OK, Sheepdog is a storage solution for KVM/QEMU - so should GlusterFS be used to replicate the OpenVZ VMs?

thx,
Alex
 
