Can I tell Proxmox that a storage directory is shared across nodes?

GomoX
Hey,

When setting up iSCSI or NFS storage devices on Proxmox, the system already "knows" they are shared and makes them available for the selected nodes.

When setting up local directories, Proxmox assumes they are only usable locally. I would like to know if there is a way to let PVE know that a local storage dir is actually shared (i.e., assume /opt/shared on each of the nodes is replicated across nodes by an external mechanism, and use it as if it were an external device).

I'm trying to avoid the overhead of mounting local dirs over NFS with a virtual IP to make them highly available, instead of just using "local dir" as the storage mechanism. This would in turn make supporting a variety of unsupported NAS protocols (Gluster, Ceph, etc.) much easier: just figure out how to mount them locally and then tell PVE that they are externally replicated.

Thanks!

Gonzalo
 
Oh, that's easy :) I take it I have to prepare the same local path on all nodes for this to work, right?

Thanks,

Gonzalo
 
Great! I just tested a KVM live migration on top of a locally mounted GlusterFS volume and it appears to work perfectly. I will do some more testing.

Gonzalo
 
Can you do some benchmarks with GlusterFS?
 
Yes, although bear in mind that I expect to run GlusterFS servers locally on the PVE nodes (so that all the nodes have the same setup and I don't need specific storage nodes).

Do you have a specific benchmark tool that I can use?
 
bonnie++, iozone and fio are excellent tools. Combine bonnie++ with either iozone or fio.

Test examples:
sudo bonnie++ -b -uroot -d/tmp 2>&1 | tee bonnie.txt
(bonnie++ will create a test file that is double the size of the installed RAM)
fio --filename=/tmp/test1 --rw=randwrite --bs=4k --iodepth=40 --size=10000M --group_reporting --name=file1 --ioengine=libaio 2>&1 | tee fio.txt
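If you prefer, that fio one-liner can also be expressed as a job file (a sketch using the same values as the command above; the file name randwrite.fio is just an example, run it with `fio randwrite.fio`):

```
; randwrite.fio - same workload as the command-line example
[file1]
filename=/tmp/test1
rw=randwrite
bs=4k
iodepth=40
size=10000M
ioengine=libaio
group_reporting
```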
 
OK, I will set up the new server in the next few days and post benchmark results afterwards. Let me know if there is any specific setup you would like me to try out.
 
Hi, is it explained anywhere how this "shared flag" for local directories works in "storage", and how it works in the cluster scenario discussed?
I don't really get how one is supposed to use it, I don't see any help, and it doesn't work for me.
If I follow the suggestion from GomoX ("I take it I have to prepare the same local path on all nodes for this to work")... I just get a local folder on /dev/mapper/pve-root ...
which is not the expectation...
 
Hi, is it explained anywhere how this "shared flag" for local directories works in "storage", and how it works in the cluster scenario discussed?
I don't really get how one is supposed to use it, I don't see any help, and it doesn't work for me.

I'm also interested in setting up a shared GlusterFS volume; could you please elaborate on the setup and give an example use case of your proposed scenario?
 
I'm using Gluster in my production environment for storage.


My Gluster cluster consists of 6 nodes with 4x 1TB bricks in each server, in distribute-replicate. The Gluster nodes replicate over a private 10Gb network (2-NIC bond, active-passive) and clients connect over a 1Gb network (2 NICs, XOR balance). Each Gluster node is set up as an iSCSI target server with boot target images, stored in the Gluster storage, for the Proxmox nodes.


My Proxmox cluster consists of 8 nodes; they boot from iSCSI off the Gluster servers, with a private network for backend storage/booting (2 NICs, XOR balance) and a connection to our infrastructure LAN (2 NICs, XOR balance). I've set up Proxmox with a local directory configured as a shared storage device for all nodes, pointing to a Gluster mount point. Load the fuse module at boot to get fstab entries to work properly on boot:
echo fuse >> /etc/modules


Install Gluster as per the regular instructions:
http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/readme.txt


Add _netdev to the fstab entry so it loads right after the network comes up, for example:
gluster-dns-rr:/proxmox /srv/gluster glusterfs defaults,_netdev,fetch-attempts=10,backupvolfile-server=<backup IP address of a Gluster server> 0 0


I use DNS round robin, but if DNS is down I fall back to a backup IP address, which is a different IP in each Proxmox node's fstab. I've also put in a high fetch-attempts value in case the switch ports take a while to come up if PortFast isn't enabled on our switches.

I've set our VZDump speed limit to 50MB/sec and I'm consistently hitting between 45 and 50MB/sec when all the servers do their snapshot dumps over the weekend. I've set each VM's HDD read/write limit to 45MB/sec, as none of them have any processes that need fast disk access - most are web servers accessing external Postgres/MySQL servers, with their htdocs sitting on a ramdisk set up at boot.

This setup has been working for us for about 4 months. The only issue I've had is that the directsync disk cache mode doesn't work (VMs won't boot), but switching to writethrough works great.


During testing I pulled the power on half the Gluster and half the Proxmox nodes, and everything worked great with Proxmox HA. When I powered the offline nodes back up, everything synced up fine.

Only thing I'd like to see is auto load-balancing of VMs across the Proxmox cluster. When the HA VMs came back online they were all started on one Proxmox host, which kind of sucks. It's by no means a deal breaker in our environment, it just means that if we ever had a node failure, or multiple failures, we'll have to balance the VMs manually.
 
Hi, is it explained anywhere how this "shared flag" for local directories works in "storage", and how it works in the cluster scenario discussed?
I don't really get how one is supposed to use it, I don't see any help, and it doesn't work for me.
If I follow the suggestion from GomoX ("I take it I have to prepare the same local path on all nodes for this to work")... I just get a local folder on /dev/mapper/pve-root ...
which is not the expectation...

Hey proxfm,

The way you set it up is: create a directory on all the Proxmox nodes and mount the shared filesystem there (same mount point on all the servers). Then add a new storage of type "directory" pointing to that location and tick the "shared" checkbox. This attaches the storage to all the hosts but lets Proxmox know it is shared, which means migrations don't actually copy the VM storage over - Proxmox knows the data is already available on all the hosts.
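For reference, the resulting entry in /etc/pve/storage.cfg should look roughly like this (a sketch; the storage ID "gluster" and the content types are just examples):

```
dir: gluster
	path /mnt/gluster
	content images,iso,vztmpl,backup
	shared 1
```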

This is what my config looks like for gluster:

[attachment: Screenshot-28.png - storage configuration for the Gluster directory]

The fstab on each node had to be edited manually to mount /mnt/gluster at boot time, and as Kyc pointed out, you have to add fuse to /etc/modules so that the mount comes up early when the servers start.
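For completeness, a minimal sketch of such an fstab line (the server name gluster-server and volume name pve are placeholders; the earlier post in this thread shows a fuller example with backupvolfile-server and fetch-attempts):

```
gluster-server:/pve  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```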

Gonzalo
 
Old thread, sorry, but this is the first one I've seen that discusses what I need - and this is relevant.

You have to manually edit the fstab on each container to mount the shared mountpoint? I actually don't want it shared; I wanted to split out an SSD as a separate mount on each container - which I've got now through a manual 100.mount script in the conf dir, but I'd like to enforce container-based quotas on it, e.g. to give 10GB to each container.

Is there a way to do that?
 
It's not clear how to set up a shared folder.
Suppose I have a cluster of three nodes. One node has an extra HDD, /dev/sdb1, that I want to share with all nodes.
I expect to be able to create a VM on any node and select /dev/sdb1 as its storage (like a shared NFS mount).
Is it possible to achieve that with Storage > Add Directory?
Or am I on the wrong track, and do I need to set up an NFS server on the node with /dev/sdb1 and share the drive that way?
 
Hi, is it explained anywhere how this "shared flag" for local directories works in "storage", and how it works in the cluster scenario discussed?
I don't really get how one is supposed to use it, I don't see any help, and it doesn't work for me.
If I follow the suggestion from GomoX ("I take it I have to prepare the same local path on all nodes for this to work")... I just get a local folder on /dev/mapper/pve-root ...
which is not the expectation...
Please explain this.
What does shared do? How does it work? Why, without sharing, does my SMB disk show up on pve1 and pve2? With sharing enabled it doesn't seem to be any different, and I don't see containers on the disk...
It must not be looking at the disk itself; it must be looking at some internal file.
 
Hi guys,

Sorry to reply to this old thread, but I am using Proxmox 4.1 and have a 3-node cluster.

I have created a volume group, and then a logical volume, on top of a 10TB iSCSI volume, and mounted the LV on /mnt/Backups on all nodes.
All nodes can see and write to this LV.
I have added a directory from the Proxmox web UI pointing to /mnt/Backups, allowed all nodes, and ticked the Enabled and Shared checkboxes.

But when I backed up one VM, the usage of the directory was not updated across all nodes, and I cannot see the backup from the other nodes.

Any help please , this is driving me nuts.

Thanks.
 
I formatted the LV on top of the iSCSI volume with XFS and mounted it on /mnt/Backups on all nodes.

Then I made the addition from the web UI.
 
