Complex problem -- maybe good for a case study

rayk_sland

Active Member
Jul 30, 2009
I have an Intel MFSYS25 with a 'shared LUN' configuration. I have two server blades running clustered Proxmox 1.3 with a shared storage pool that both connect to using OCFS2. The OCFS2 pool is mounted on /var/lib/vz so that, if I need to, I can manually shut down a VM, move the config file to the other server, and start it up again without having to copy any data.
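For reference, the manual move looks roughly like this -- just a sketch, assuming PVE 1.x keeps the KVM configs in /etc/qemu-server and the other blade is reachable over ssh (the VM id 101 and the hostname node2 are made-up examples):

# shut the guest down cleanly on this blade (id 101 is an example)
qm shutdown 101
# move only the config -- the disk image is already visible to the other
# blade through the shared ocfs2 mount at /var/lib/vz
scp /etc/qemu-server/101.conf root@node2:/etc/qemu-server/
rm /etc/qemu-server/101.conf
# bring it back up on the other blade
ssh root@node2 qm start 101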

I recently purchased enough disk space to create a second shared pool and run Proxmox 1.4 on the two other blades, so I could try it without jeopardizing my live setup above. I'm fascinated that the new shared storage model does not use qcow images in the shared volume group, but rather actual logical volumes. What I was hoping to do was copy the VMs from the 1.3 cluster over to the 1.4 cluster while I update the 1.3 blades, but they are qcow files, not raw logical volumes, which has made this difficult for me.

Questions:
1) If I simply upgrade 1.3 to 1.4, will it screw up my slightly non-standard shared storage config?
2) Is there any way to set up a shared storage configuration to support booting guests off qcow files on a shared filesystem instead of LVM volumes from a shared volume group, and still be able to migrate VMs like that? I was thinking of using clvmd instead of OCFS2, so I can have the snapshotting feature...
3) Any other info I should know?
 
1) If I simply upgrade 1.3 to 1.4, will it screw up my slightly non-standard shared storage config?

No, an update should not screw up anything.

2) Is there any way to set up a shared storage configuration to support booting guests off qcow files on a shared filesystem instead of LVM volumes from a shared volume group, and still be able to migrate VMs like that? I was thinking of using clvmd instead of OCFS2, so I can have the snapshotting feature...

You can use either LVM or OCFS2 - but OCFS2 with clvmd seems a strange idea. I suggest you try LVM on a shared storage device.
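Roughly like this -- a sketch only; /dev/sdb and the VG name are examples, use whatever device the shared LUN appears as on your blades:

# put a volume group on the shared LUN (device name is an example)
pvcreate /dev/sdb
vgcreate vg_shared /dev/sdb
# check the group is visible from every node
vgs
# then add it in the web interface as an LVM group storage with the
# 'shared' flag set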
 
You can use either LVM or OCFS2 - but OCFS2 with clvmd seems a strange idea. I suggest you try LVM on a shared storage device.


You misunderstand me. OCFS2 is on the 1.3 cluster, and I want to migrate away from that after I have the 1.4 cluster up and live.
 
Trying to go with a shared directory using clvm and gfs.

Problem... on installing gfs-tools, apt wants to upgrade me away from pve-kernel-2.6.24-8-pve to linux-image-2.6.26-2.

Probably not going to work with Proxmox VE, I assume...
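A dry run shows what apt intends before anything is actually touched -- sketch:

# -s makes apt only simulate; this is where it listed pve-kernel-2.6.24-8-pve
# being replaced by linux-image-2.6.26-2
apt-get -s install gfs-tools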
 
Trying to go with a shared directory using clvm and gfs.

Problem... on installing gfs-tools, apt wants to upgrade me away from pve-kernel-2.6.24-8-pve to linux-image-2.6.26-2.

Probably not going to work with Proxmox VE, I assume...

Yes, that will not work and is not supported. What is the special benefit here, and what do you want to solve?
 
What is the special benefit here, and what do you want to solve?

I want shared storage of qcow2 files so that:
1) migrating guests from one physical server to another can be done quickly, without rsyncing images;
2) guests can be copied around as files (I also like that qcow2 files are only as big as they need to be -- raw volumes are static);
3) guests can be backed up using LVM snapshots.

I gather OCFS2 (shared directory) on PVE 1.4 will give me 1 and 2 but not 3, and LVM (shared storage), which creates raw volumes, will give me 1 and 3 but not 2.

I was hoping that GFS on clvmd (shared directory) would give me all three...
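What I mean by 3 is the usual snapshot-then-copy pattern -- a sketch, with made-up names and a guessed 4G of copy-on-write space:

# freeze a point-in-time view of the running guest's disk
lvcreate -s -L 4G -n vm-101-snap /dev/vg_shared/vm-101-disk-1
# copy the frozen view off while the guest keeps running
dd if=/dev/vg_shared/vm-101-snap bs=1M | gzip > /backup/vm-101.raw.gz
# drop the snapshot before its copy-on-write space fills up
lvremove -f /dev/vg_shared/vm-101-snap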
 
I want shared storage of qcow2 files so that:
1) migrating guests from one physical server to another can be done quickly, without rsyncing images;
2) guests can be copied around as files (I also like that qcow2 files are only as big as they need to be -- raw volumes are static);
3) guests can be backed up using LVM snapshots.

I gather OCFS2 (shared directory) on PVE 1.4 will give me 1 and 2 but not 3, and LVM (shared storage), which creates raw volumes, will give me 1 and 3 but not 2.

I was hoping that GFS on clvmd (shared directory) would give me all three...

What's wrong with storing qcow2 images on a simple NFS server? (No LVM snapshots, but 1 and 2.)
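The setup is minimal -- a sketch; addresses and paths are examples:

# on the NFS server: export a directory for the images
echo '/srv/vz 192.168.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra
# on each PVE node: add it through the web interface as NFS storage
# pointing at that server and export, content type 'images'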
 
What's wrong with storing qcow2 images on a simple NFS server?

Well, the hardware I have is an Intel midsize blade server (MFSYS25) with a built-in SAS SAN (it uses a shared LUN, if that means anything to you). It gives me very nice disk performance, probably far more than NFS would. I'm not in the habit of using NFS, but even at gigabit speeds I can't imagine it would compare to this.
 
Well, the hardware I have is an Intel midsize blade server (MFSYS25) with a built-in SAS SAN (it uses a shared LUN, if that means anything to you). It gives me very nice disk performance, probably far more than NFS would.

If you need performance, you should use a shared LVM group (not qcow2 on a cluster filesystem).
 
Well, performance is not really the issue. I only mention it because if I did move over to NFS, I'd probably take a fairly serious performance hit. I have some mission-critical qcow2 guests on the 1.3 PVE cluster that I want to copy over to the new shared storage on the 1.4 PVE cluster and bring right up while I upgrade 1.3. I thought that would be a good procedure: I could standardize the storage setup without jeopardizing my data. If I use a shared volume group, I can't do that as neatly or as quickly as I wanted to. Backing the guests up with vzdump would take way too much downtime: vzdump halts the guest, and the bigger guest is a 380GB file.
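The closest I can get is to skip vzdump entirely and stream each qcow2 straight into its new logical volume during one short maintenance window -- a sketch, with made-up ids and sizes:

# brief, single shutdown of the guest (id and sizes are examples)
qm shutdown 105
# carve a raw LV at least as large as the image's virtual size
lvcreate -L 400G -n vm-105-disk-1 vg_shared
# straight raw copy -- none of vzdump's tar/compress overhead
qemu-img convert -O raw /var/lib/vz/images/105/vm-105-disk-1.qcow2 /dev/vg_shared/vm-105-disk-1
# start it on the 1.4 cluster
qm start 105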