Using openfiler inside a VM to serve the other VMs

yatesco

Well-Known Member
Sep 25, 2009
This might be madness and technically impossible, but I am thinking about how best to replicate my machines for disaster recovery.

I know about DRBD, but I have been thinking about openfiler as well. The problem is that I don't want to have two dedicated boxes (for redundancy), so I was wondering whether the following would work, or whether it would just be plain madness.

I want to have N nodes; at least 2 of those nodes would run synchronised openfilers in a VM. All good so far. What I want to do, though, is also have the hosts running openfiler run their own VMs, stored on the openfiler-managed disk space.

The way it would work is that there would be two storage areas on the physical disk: one for the VM running openfiler, and another for the disk space openfiler will expose for other VMs. The host would then connect to the openfiler-managed LUNs (or whatever) and use them to store the disks of every other virtual machine.

A concrete example:

/dev/sda1 2GB
/dev/sda2 500GB

host1 has openfilerVM backed by /dev/sda1. openfilerVM uses /dev/sda2 to expose storage via iSCSI. host1 connects to this iSCSI storage and uses it to serve vm2 and vm3.
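
(On host1 I imagine the connection would look something like this - the address and IQN are just placeholders:)
Code:
# discover the targets exposed by openfilerVM and log in
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.vmstore -p 192.168.0.10 --login
# the LUN then appears as a new block device on host1 (e.g. /dev/sdc),
# which can hold the disks for vm2 and vm3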

Obviously there are timing issues - openfilerVM must be running before vm2 and vm3 can start.

host2 would have a similar disk partition but its openfilerVM would be a mirror of the openfilerVM in host1. host2 might also have vm3 and vm4.

host3 may or may not have its own (replicated) openfiler or it might just use the openfiler disk space from host1 or host2.

The main benefit is that it makes the most use of the hardware, i.e. I don't need two extra dedicated openfiler machines.

All the VMs are very low traffic, so a single 1 Gbps NIC in each machine would be sufficient, I think.

If any one node dies I can restore the config file (from a backup on openfiler) as the VM disk image will be on the SAN (openfiler). If one of the nodes with openfiler dies then openfiler is replicated on at least one other node, so no worries there.

I realise this is all madness for grown up organisations, but I am trying to do all this on the cheap :)

Thoughts and comments welcome!
 

Hi,
Why do you want to use openfiler? For these things you only need services from the host.

I can give you an example of what I have done:
Each Proxmox host has the system and a huge backup area on a RAID1 with two 2 TB disks. pve-data is on fast SAS drives in RAID10.
The backup filesystem is mounted on /backup (done in /etc/fstab). Below /backup there is a directory for each host (proxmox1, proxmox2).
On proxmox1 I make a symlink from /backup/proxmox1 to /bckup, and similarly on proxmox2.
/bckup is defined as the backup dir in the Proxmox web GUI.
With a cronjob I save the VM configs and rsync everything under /backup:
Code:
# 21:05 daily: copy the KVM and OpenVZ configs into the backup area
5 21 * * * /usr/bin/rsync -a --delete /etc/qemu-server /backup/proxmox1/
5 21 * * * /usr/bin/rsync -a --delete /etc/vz /backup/proxmox1/
# 17:00 daily: rsync everything under /backup
0 17 * * * /usr/local/scripte/rsync_backup.sh > /dev/null 2>&1
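
(rsync_backup.sh itself is not posted here; a minimal sketch of the idea - the hostnames and paths are only examples, not the real script:)
Code:
#!/bin/bash
# push this node's backup dir to the other node and pull theirs,
# so each node ends up holding both sets of backups
OTHER=proxmox2
/usr/bin/rsync -a --delete /backup/proxmox1/ ${OTHER}:/backup/proxmox1/
/usr/bin/rsync -a ${OTHER}:/backup/proxmox2/ /backup/proxmox2/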

If one node is dead, I can restore from the backup, or, if the VM uses external storage (most of them do), I just copy the VM config and start the VM.

Perhaps someone else knows a better solution, but this works for me (I hope so; up to now I haven't had any serious trouble with a Proxmox server).

Udo
 
Hi Udo,

Your solution works great for backups. I have a similar setup for backups (roughly the script sketched after this list); I:

* change the names of the backup files (so vzdump-openvz-1102-2010_01_28-22_01_37.tgz becomes vzdump-openvz-1102.tgz)
* use rdiff-backup so I can store historical versions
* use EncFS to create an encrypted disk
* rsync the rdiff-backup directory to the mounted encrypted disk
* umount EncFS
* rsync to an offsite (shared and therefore untrusted) provider
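
In script form it is roughly this - the paths, the offsite host and the EncFS password handling are placeholders, not my exact setup:
Code:
#!/bin/bash
set -e
cd /backup/dumps
# 1. strip the timestamp so rdiff-backup sees one file per VM
#    vzdump-openvz-1102-2010_01_28-22_01_37.tgz -> vzdump-openvz-1102.tgz
shopt -s nullglob
for f in vzdump-*-20??_*.tgz; do
    mv "$f" "${f%-*-*}.tgz"
done
# 2. keep historical versions with rdiff-backup
rdiff-backup /backup/dumps /backup/rdiff
# 3. mount the EncFS view (ciphertext in /backup/crypt, cleartext view in /backup/plain)
encfs --extpass="cat /root/.encfs-pass" /backup/crypt /backup/plain
# 4. copy the rdiff-backup tree into the encrypted disk
rsync -a --delete /backup/rdiff/ /backup/plain/
# 5. unmount, then push only the ciphertext offsite
fusermount -u /backup/plain
rsync -a --delete /backup/crypt/ backup@offsite.example.com:proxmox/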

(EDIT: by the way Proxmox devs - the ease of backups and the ease of adding new templates were among the main reasons I chose Proxmox over the free XenServer)

This works great for me, and if my machine does blow up I can just SSH-mount the offsite directory of encrypted files and use rdiff-backup to restore directly from there.

The problem with this is that any data since yesterday's backup is lost. This is going to get more severe when you consider that some of the users of our VMs are spread around the world.

The other point is that I want to spend some bucks on one machine with RAID10 and lots of storage, and then rent cheaper 'commodity' hardware without RAID or even large disks. All the VMs would be stored on the fast iSCSI disk.
 
Hey Yatesco,

I tried to send you a 'private' msg but the forum controls prevented it; so I hope you see my post to this thread.

I found this thread because I googled it - looking at ways to setup openfiler to serve my proxmox cluster & VMs.

I personally wouldn't bother using iSCSI - I'd just attach the HDDs (/dev/sdb, sdc, sdd, for example) to OpenFiler, then RAID them and export some OpenFiler network shares via Samba / NFS etc.

This way you can have an openfiler VM serve any/every-thing on your network.

Configure two or more OpenFiler KVMs on two or more Proxmox Cluster Nodes and you have a 'KVM' based virtualized OpenFiler Cluster serving ALL of your other VMs and/or Network Servers/Desktops/Laptops etc.
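
On each Proxmox host you would then simply mount whatever the OpenFiler VM exports and point the storage page at it - for example (the address and export path are made up):
Code:
# mount the NFS share exported by the OpenFiler VM on the Proxmox host
mkdir -p /mnt/ofnfs
mount -t nfs 192.168.0.10:/mnt/vg0/vmstore /mnt/ofnfs
# or permanently in /etc/fstab:
# 192.168.0.10:/mnt/vg0/vmstore  /mnt/ofnfs  nfs  defaults,_netdev  0  0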

If you see this post - let me know... I plan to build mine this month and I can show you what I come up with and/or would be interested in your 'opinion' (debate) any configuration models that apply to this kinda solution.

Of course 'my' little science project might be a complete disaster (e.g. very slow, IO might timeout/abort etc) - but I just have to try it.

It would be nice if Proxmox - PVE would add/build the same function/feature set into PVE natively - but I'm sure the developers are very busy and this kinda stuff isn't very high on their to-do list.
 
two or more OpenFiler KVMs on two or more Proxmox Cluster Nodes and you have a 'KVM' based virtualized OpenFiler Cluster serving ALL of your other VMs and/or Network Servers/Desktops/Laptops etc

I'm in the middle of replacing a FreeNAS KVM with openfiler to be able to use its HA feature.
So far I have a 6 TB Drobo connected by eSATA to a SheevaPlug running NFS; on that NFS storage sits one of the two virtual disks attached to the KVM, and OF makes a RAID1 out of it and a locally stored virtual disk.

If all goes well I'd like to try doing RAID10 with it; these eSATA Sheevas are great for NFS'ing external drives.

Currently I'm stuck on the details that (apparently) must be customized on each box to make OF see virtio block devices. I tried copying what was done in Openfiler bug 900, along with what was documented in this blog post about the list-disks script modifications, but I didn't understand what was actually being done, so I wasn't able to adapt it to work.

might be a complete disaster (e.g. very slow, IO might timeout/abort etc)
How'd that turn out?

would be nice if Proxmox - PVE would add/build the same
It seems to be coming along, I saw a little teaser post here.
 
Sheesh,

It's been a month and I haven't had any time to work on this...

BUT,

I do have something to 'say':

1) I found this thread useful - http://forum.proxmox.com/archive/index.php/t-1786.html

Basically, it's what I needed to figure out 'how' to attach REAL HDDs to the OpenFiler VM in Proxmox (should also work when running KVM-qemu natively under any distro) etc.
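
(If I'm reading that thread right, it boils down to pointing a disk entry in the VM's config at the raw device - the VMID and device names below are only examples:)
Code:
# /etc/qemu-server/101.conf  (101 is an example VMID)
# hand the whole physical disk to the OpenFiler guest:
ide1: /dev/sdb
# further disks the same way, e.g.
# ide2: /dev/sdc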

2) A friend of mine tried to run OpenFiler as a VMguest using M$ Virtual PC (gag, cough, yuk!!)

After almost dying from laughter - I managed to ask how things went...
He did do some important testing - he set up an SMB share and started to copy a dvd.iso from the laptop to the OpenFiler VM guest (server); then he pulled the plug on the entire 'server'.

Not sure if it was the Windows host OS, the M$ Virtual PC crap, or OpenFiler - but it barfed the OpenFiler disk (which was only a virtual disk) and he lost ALL of the data on the SMB share (including all the other files that were in the SMB folder before he did the copy/test). He claims the same test using FreeNAS (also running as a guest on M$ Virtual PC) didn't corrupt the SMB share/disk.

Aside from his terrible, misguided approach to virtualization - the test demonstrates 'exactly' why I think we need to set up OpenFiler to run as a VM guest and boot from a virtual disk - BUT - only use REAL HDDs for the NFS/SMB shares.

I also think this will vastly improve the stability and performance of the Block-Level Cluster File system used by Openfiler etc.

BFN, got to start building my box and find out if I am 'right' - or just a complete idiot...
 
It seems simple enough to define devices in the qemu configs.

So far I've rearranged path names to make sense for a direct-access scheme and added the local & NFS directories via the storage page.

I'm lost at the moment as to how to limit how much of the available space the OF VM gets to use, so it can do a RAID1 with them, now that there's no raw image to define the size.
I imagine there's a way to define a quota of 500GB on the dirs the storages are mounting, and while I too don't need one more thing on my to-do list, I'll have to look into it. I have a feeling quotas aren't going to allow the reporting of usable space properly for the software RAID to work with.

Interesting you should mention Virtual PC; I couldn't find any documentation saying whether or not it could pass VT extensions to a guest OS, so I just installed it this week to see.
I learned recently that whatever kind of VT VirtualBox is able to pass to the guest, it's not the kind PVE needs to run KVM, so I couldn't make a PVE node in Windows 7. Although I had a separate network just for the cluster side, and they could ping each other on it, it still wouldn't sync. I wonder if that's related to the lack of KVM, especially since both are on the 32-2 kernel & the vz side wasn't 'on'. So I have Virtual PC queued to try next.
 
Gotta say I know NOTHING about how my friend set up Virtual PC... good luck.

Not sure if this helps - but - couldn't you just create a 'partition' on the HDD you want to give direct access to OpenFiler? i.e. on a 1.5TB disk you create a 500GB partition like /dev/sdc1 and pass that to the OpenFiler KVM?

Ideally you could create and use other partitions on the same disk (sdc2/sdc3) for other KVMs/VZs too.
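
(Something along these lines, for example - the disk and size are made up:)
Code:
# carve a 500GB partition out of the 1.5TB disk for OpenFiler,
# leaving the rest of /dev/sdc for other partitions (sdc2, sdc3, ...)
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 0% 500GB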

Just sayin'
 
couldn't you just create a 'partition' on the HDD
Well, yes. I found creating a logical volume of a certain size is very easy, much like extending one.
It would seem I could make that volume available to the VM through the qemu config file, and it would appear to the machine as a disk the size of the logical volume.
I haven't been around to actually making it happen yet, and likely won't be able to until the weekend though.
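
A rough sketch of what I have in mind - the volume group name, size and config entry are examples, not something I've actually tested yet:
Code:
# carve a 500GB logical volume out of the existing volume group
lvcreate -L 500G -n openfiler-data pve
# then point a disk of the OF VM at it in /etc/qemu-server/<vmid>.conf,
# e.g.  ide1: /dev/pve/openfiler-data
# (or a scsi/virtio slot, depending on what the guest can actually see)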

I found a couple more helpful details overnight, too. One being that NexentaStor (3.0.3-RC5) works in KVM if you don't use the e1000 NIC, and with all the double-checking zfs does, it'll be even more likely to survive a power-pull scenario without corruption than the Openfiler & others, so that's queued to go, too.
Another relates to something I vaguely recall reading about in the forum, but couldn't find again until after a lot of looking:
the use of fully virtualized SCSI interfaces being a bad idea in PVE.
I found (in a post somewhere that I didn't bookmark, again) that it was the 2.6.18 kernel that didn't support it, which is fine because I use the 2.6.32-2 anyway for the shared memory pages.
This is good because even if the direct access doesn't happen for some reason, it's not a problem that I have all the IDE slots used up - I can still use SCSI disk images as a plan B and keep NexentaStor installed on its 3 IDE volumes regardless of how the data disks are configured.
Something new to figure out now: since nothing but virtio & e1000 can do >1500 MTU, how significant is it to have a small MTU, and would the added overhead of a team of fully virtualized rtl8139s be worthwhile? I'm inclined to believe it's not that important, and I'll likely forego sorting that out.
 
ok... let's have a race! LOL

don't worry - you'll most likely win even if you do nothing between now and xmas...

but a race is a race... ready, set... GO!
 
