Shared OpenVZ on iSCSI

Nov 12, 2010
Hi there,

I have set up iSCSI with two Proxmox servers sharing the same storage, mounted the same iSCSI target on both servers as /var/lib/vz, and migrated the existing data from server 1 into it.

To share OpenVZ VMs between both server nodes for high availability or load balancing, I have done the following:

* Set up the same mount points on each server
* Set up 101.conf on server 1 and 1101.conf on server 2, which both point to root/101 and private/101
* Set up a script called migrate that stops the specified CTID on the source server and remotely starts the corresponding VM on the target server, e.g. 'migrate 101 prox1 prox2' or, from the other direction, 'migrate 1101 prox2 prox1' (a rough sketch follows below)
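
For reference, the migrate script is roughly along these lines (a simplified sketch rather than the exact script; it assumes passwordless root SSH between the nodes and the 1000-offset CTID convention described above):

#!/bin/bash
# migrate <CTID> <source-host> <target-host>
# Stops the container on the source node and starts its twin config on the target.
# Assumes paired .conf files (101 on prox1, 1101 on prox2) that both point at
# the same private/101 and root/101 directories on the shared storage.

CTID=$1
SRC=$2
DST=$3

# The twin config on the other node uses a 1000 offset.
if [ "$CTID" -ge 1000 ]; then
    DST_CTID=$((CTID - 1000))
else
    DST_CTID=$((CTID + 1000))
fi

ssh root@"$SRC" "vzctl stop $CTID"        # stop the container where it is running
ssh root@"$DST" "vzctl start $DST_CTID"   # start its counterpart on the other node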

This scenario appears to be working OK from what I have tested, but I would like to hear some thoughts from others about what they have done and whether this setup is basically the right way to do it for OpenVZ.

Thanks in advance!

Regards,
Peter
 
By way of an update: I tested this setup further, and having two separate Proxmox servers attached to the same LUN causes locking issues.

So I am now testing split LUNs, one for each server, to see if migration will work that way. There is no reason why it shouldn't, seeing as they are mounted as a local file system at /var/lib/vz.

Again, though, any feedback as to the correct approach would be welcomed :)

Cheers
 
OpenVZ does not support shared storage, so I do not see how this should work. What file system do you use on /var/lib/vz?
 
Hi Tom,

I have only attached the iSCSI target in Proxmox and have not created an LVM group on it.

I then dropped down to the CLI and used fdisk /dev/sdd to create a primary partition, then ran mkfs.ext3 /dev/sdd1 (/dev/??? obviously depends on how many targets you have connected).

Then in /etc/fstab I add the mount and re-point the local pve-data volume to /var/lib/vz1 so I can copy the files over:

/dev/sdd1 /var/lib/vz ext3 defaults,auto,_netdev 0 0
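
Put together, the preparation on each node looked roughly like this (a sketch rather than a verbatim transcript; /dev/sdd is just what my second LUN happened to appear as):

fdisk /dev/sdd               # create a single primary partition -> /dev/sdd1
mkfs.ext3 /dev/sdd1          # format it as ext3
umount /var/lib/vz           # detach the old local pve-data volume
mkdir -p /var/lib/vz1        # new mount point for pve-data
# edit /etc/fstab: mount /dev/pve/data on /var/lib/vz1 and /dev/sdd1 on /var/lib/vz
mount -a
cp -a /var/lib/vz1/. /var/lib/vz/   # copy the existing container data onto the LUN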

The problem with two servers accessing this LUN directly is that when both have it mounted, updates to the file system only appear on the server that made them unless you unmount and remount the LUN (ext3 is not a cluster-aware file system, so each node caches its own view of the metadata). That is something I want to avoid, as there will be multiple servers running. I have tried a number of steps, but I don't know of a way to refresh a mounted drive without remounting it.

I have split the LUN into two separate LUNs now and will be testing with one LUN per server to see how things run.
 
Hi Tom,

OK, so this is what happens now when I do a migrate (not a live migrate):

/usr/bin/ssh -t -t -n -o BatchMode=yes 10.5.0.13 /usr/sbin/vzmigrate 10.5.0.14 101
Starting migration of CT 101 to 10.5.0.14
Preparing remote node
Initializing remote quota
Syncing private
Stopping container
Syncing 2nd level quota
Starting container
Cleanup
Connection to 10.5.0.13 closed.
VM 101 migration done



So that is working nicely. I will say that it creates a lot of network overhead, as it is copying from iSCSI => home server => target server => iSCSI, so this really needs to be on a dedicated storage network to avoid impacting other live VMs.

I have also noticed that rsync deletes the private/CTID folder and all its contents once the migration is done. Wouldn't it be far better to leave the copy on the source server, so that if you want to migrate it regularly it only needs to sync changed files, reducing the network overhead and speeding up migration? That way sysadmins could also pre-create copies of their likely migration candidates beforehand and make life a bit easier (something like the rsync sketch below).
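
For example, something like this could pre-seed (and keep fresh) the private area on the likely target node, so a later migration only has to transfer the delta (a hypothetical helper run from cron or by hand, not part of vzmigrate itself; the IP and CTID are just examples):

# pre-copy the container's private area to the target node so a later
# migration only needs to transfer files that changed in the meantime
rsync -a --numeric-ids --delete /var/lib/vz/private/101/ root@10.5.0.14:/var/lib/vz/private/101/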

BTW, this is what happens when I do a LIVE migration:

/usr/sbin/vzmigrate --online 10.5.0.13 101
Starting online migration of CT 101 to 10.5.0.13
Preparing remote node
Initializing remote quota
Syncing private
Live migrating container...
Syncing 2nd level quota
Error: Failed to undump container
vzquota : (error) Quota is not running for id 101
VM 101 migration failed -
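
My next step is to check the quota state on both nodes before retrying, something along these lines (just diagnostics, not a fix for the undump failure itself):

vzquota stat 101                   # show the quota state for the container (errors if quota is not running)
grep DISK_QUOTA /etc/vz/vz.conf    # check whether disk quota is enabled globally
vzquota drop 101                   # drop stale quota data so it is recalculated on the next start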
 
OpenVZ does not support shared storage, so I do not see how this should work. What file system do you use on /var/lib/vz?

This is how I have it set up:

Server 1

prox:/var/lib/vz/private# fdisk -l

Disk /dev/sdc: 48.0 GB, 48049946624 bytes
64 heads, 32 sectors/track, 45824 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x22ea0421

Device Boot Start End Blocks Id System
/dev/sdc1 1 45824 46923760 83 Linux

Disk /dev/sdd: 47.2 GB, 47278194688 bytes
64 heads, 32 sectors/track, 45088 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x79266f52

Device Boot Start End Blocks Id System
/dev/sdd1 1 45088 46170096 83 Linux

******************************************************************
prox:/var/lib/vz/private# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 2.0G 709M 1.2G 38% /
tmpfs 248M 0 248M 0% /lib/init/rw
udev 10M 580K 9.5M 6% /dev
tmpfs 248M 0 248M 0% /dev/shm
/dev/mapper/pve-data 3.5G 730M 2.8G 21% /var/lib/vz1
/dev/sda1 504M 31M 448M 7% /boot
/dev/sdb1 45G 1.1G 41G 3% /var/lib/vz
10.5.0.6:/mnt/data_vg/images/ISO
12G 713M 11G 7% /mnt/pve/test_NFS
/dev/sdc1 45G 1.1G 41G 3% /var/lib/vz

***********************************************************************
prox:/var/lib/vz/private# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz1 ext3 defaults 0 1
UUID=ccf0a892-784b-4f07-9d51-0c081fa988b2 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sdc1 /var/lib/vz ext3 defaults,auto,_netdev 0 0

*****************************************************************

Server 2

prox2:/var/lib/vz/private# fdisk -l

Disk /dev/sdb: 48.0 GB, 48049946624 bytes
64 heads, 32 sectors/track, 45824 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x22ea0421

Device Boot Start End Blocks Id System
/dev/sdb1 1 45824 46923760 83 Linux

Disk /dev/sdc: 47.2 GB, 47278194688 bytes
64 heads, 32 sectors/track, 45088 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x79266f52

Device Boot Start End Blocks Id System
/dev/sdc1 1 45088 46170096 83 Linux


*****************************************************************

prox2:/var/lib/vz/private# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 2.0G 742M 1.2G 39% /
tmpfs 248M 0 248M 0% /lib/init/rw
udev 10M 576K 9.5M 6% /dev
tmpfs 248M 0 248M 0% /dev/shm
/dev/sda1 504M 31M 448M 7% /boot
10.5.0.6:/mnt/data_vg/images/ISO
12G 713M 11G 7% /mnt/pve/test_NFS
/dev/mapper/pve-data 3.5G 225M 3.3G 7% /var/lib/vz1
/dev/sdc1 44G 334M 41G 1% /var/lib/vz


******************************************************************

prox2:/var/lib/vz/private# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz1 ext3 defaults 0 1
UUID=a3a2eb86-9159-4210-b276-3b496dd6b485 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sdc1 /var/lib/vz ext3 defaults,auto,_netdev 0 0

***********************************************************************

In the output from fdisk -l the disks appear with different /dev names, but the disk identifiers are the same on both servers. So even though it looks like I am mapping the same device /dev/sdc1 on the second server, it is actually the second LUN I am mapping (sdd on the first server, which is sdc on the second).
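
To sidestep the shifting /dev/sdX names entirely, the LUNs could also be mounted by UUID instead of device name (a variation I have not tested here; blkid prints the UUID to use):

blkid /dev/sdc1     # prints the partition's UUID
# then in /etc/fstab, instead of the /dev/sdX entry:
UUID=<uuid-from-blkid> /var/lib/vz ext3 defaults,auto,_netdev 0 0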

As I said, offline migration works perfectly, and the server is only down for the time it takes to shut down the container and start it on the other server, so ping times out for approximately 5 seconds only.

I am not sure why everyone has so much trouble getting this working; it was pretty straightforward. Perhaps you would like me to do a wiki for setting up OpenVZ on iSCSI?

Cheers,
Peter
 
OK, I am writing up the wiki now - well, trying to - how do you create a new HOW TO?

I can edit existing HOW TOs but I cannot create a new one... am I missing something?
 

