Good Morning/Evening,
I've been a bit of a lurker over the last three months or so, reading the various forum posts after discovering Proxmox (courtesy of Wikipedia) while searching for an alternative to Microsoft's Hyper-V and VMware's ESXi.
I must have first read about KVM virtualisation a few years ago in its early days, but was scared off by the complicated setup (just looking at the commands needed to start a session felt like reading a rocket-science equation), so it was nice to discover an open-source solution that felt mature, simplified the process with a web interface, and had a vibrant, active community behind it. For the past three months I have therefore pitted Proxmox against ESXi 5.1 and Hyper-V to see where its strengths and weaknesses lie, even to the point of yanking the SATA cable out of a drive while Proxmox was running to see how it would cope with a disk going completely offline.
While KVM is not the fastest out of the box (based on various benchmarking utilities), each release has improved performance and added features, so I am confident it will keep improving and offer a real alternative for users and businesses looking for a cost-effective, supported virtualisation platform. This of course leads me to the reason I've been performing these tests. At work our current servers are starting to show their age, so I proposed getting a rather juicy new server to offload as much as possible onto, refreshing the existing servers as additional nodes, and setting up a cluster so as to take over the world and enslave all humanity... Management wasn't too keen on taking over the world, but liked the idea of re-using the existing hardware.
Configuring the server is the easy part; storage, however, is another issue. The options available to us are: local storage (which, while appealing from a cost perspective, means no live migration and limited upgrade paths and capacity); a SAN (Dell MD3200 series), which, while great from a 'looks good in a rack, lots of lights' perspective, is pricey given my budget constraints (and the TCO of having to replace it every few years); or something a bit more open source such as GlusterFS, which offers high availability and scalability.
So for the past three weeks I have been trying to get Gluster and Proxmox to play nicely, to test how it would work and to demonstrate that it is a feasible alternative to spending thousands on a SAN array. Unfortunately, so far I have been unable to get the two working together. I can easily get a Gluster cluster running on Debian 7.0.2 following the administration guide, and on a fresh Debian installation I can install the Gluster client and mount a Gluster volume with no issues; attempting the same on Proxmox, however, results in a failed mount. I have probably rebuilt the Gluster array three or four times now and spent hours searching forums, yet I feel no closer to a solution, and since nobody else seems to be having the same issue I can only assume I am doing something really silly and obviously wrong.
My Gluster servers sit within a virtual environment, set up as a replicated volume per the GlusterFS administration guide (volume name test-volume), replicating between gluster1 and gluster2 (both resolvable to each other via DNS). As mentioned earlier, I can install the Gluster client and mount the volume successfully on a fresh VM running Debian; the same command line on Proxmox, however, generates an error.
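For reference, the volume was created broadly along these lines from the administration guide (the /data/brick path below is just illustrative rather than my exact layout):

# run on gluster1
gluster peer probe gluster2
gluster volume create test-volume replica 2 transport tcp gluster1:/data/brick gluster2:/data/brick
gluster volume start test-volume
gluster volume info test-volume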
Now for the details:
GlusterFS server version 3.2.7, built on Sep 28 2013 (I have installed GlusterFS on both the 32-bit and 64-bit flavours of Debian 7.0.2 with the same behaviour).
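For completeness, the version numbers above are simply what the standard version flags report on each box (as far as I know these are the usual commands):

glusterd --version     # on gluster1 / gluster2
glusterfs --version    # on the Proxmox node and the Debian test VM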
Output of pveversion -v:
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
Attempting to mount GlusterFS at the command line:
root@proxmox:/mnt# mount -t glusterfs gluster1:/test-volume /mnt/gluster/
Mount failed. Please check the log file for more details.
Log file reveals:
[2013-10-27 08:56:01.195545] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0 (/usr/sbin/glusterfs --volfile-id=/test-volume --volfile-server=gluster1 /mnt/gluster/)
[2013-10-27 08:56:01.226719] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2013-10-27 08:56:01.226749] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
[2013-10-27 08:56:01.228177] E [glusterfsd-mgmt.c:1655:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/test-volume)
[2013-10-27 08:56:01.228326] W [glusterfsd.c:1002:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0xcd) [0x7f720981351d] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0xa4) [0x7f7209813194] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3e9) [0x7f7209ece4f9]))) 0-: received signum (0), shutting down
[2013-10-27 08:56:01.228343] I [fuse-bridge.c:5217:fini] 0-fuse: Unmounting '/mnt/gluster/'.
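If it helps with diagnosis, I'm happy to run further checks from the Proxmox node; for example, re-confirming the volume from the server side and checking that the glusterd management port is reachable (24007 being, as far as I understand, the default):

gluster volume info test-volume    # on gluster1
telnet gluster1 24007              # from the Proxmox node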
Attempting to mount GlusterFS from within the Proxmox GUI does not work either: clicking the volume dropdown shows a loading icon for a brief moment and then no options are listed.
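For reference, what I'm ultimately hoping to end up with is a storage definition along these lines in /etc/pve/storage.cfg (this is my understanding of the GlusterFS storage syntax; the storage ID is just an example):

glusterfs: glusterstore
        server gluster1
        volume test-volume
        content images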
Hopefully someone here with more Gluster experience can offer some insight into what I am obviously doing wrong and point me in the right direction. My eventual goal is to set up the Gluster client to use more than one node for high availability, but since I haven't even been able to connect to a single node, that has rather stopped me in my tracks.
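(For that last part, my understanding is that it would be a mount along the lines below, using the backup volfile server option, though I obviously haven't got far enough to test it.)

mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/test-volume /mnt/gluster/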
Thank you in advance.
Regards,
Richard