@SamTzu - Yes, the download link was posted above, but here it is in case you missed it.
http://download.proxmox.com/debian/dists/wheezy/pve-no-subscription/binary-amd64/glusterfs-server_3.4.1-1_amd64.deb
Just do a wget on that and install...
# wget...
I have found the latest Debian packages, but they are 3.4.2-1, and I can't find the version Proxmox is using. Do you think it would be a problem to just dpkg -i install the latest common, client and server packages?
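For what it's worth, here is roughly the sequence I have in mind, written as a dry run so nothing actually installs (the filenames are my guesses based on Debian's 3.4.2-1 naming - drop the echo once you've confirmed them on a test node):

```shell
# Assumed .deb filenames for glusterfs 3.4.2-1; adjust to what you
# actually downloaded.  The echo makes this a dry run.
debs="glusterfs-common_3.4.2-1_amd64.deb \
glusterfs-client_3.4.2-1_amd64.deb \
glusterfs-server_3.4.2-1_amd64.deb"

# Install all three in one dpkg run so they can satisfy each other's
# dependencies, then let apt pull in anything still missing.
echo dpkg -i $debs
echo apt-get -f install
```

Installing them together in a single dpkg call matters, since common/client/server depend on one another.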
I've read that the latest version of Proxmox supports GlusterFS, and I'd like to set up my /backup partition (to start with, and possibly my /var/lib/vz partition if it works well) as a GlusterFS filesystem.
I have 10 nodes, each node has a 2TB drive mounted on /backup. I'd like to make them all...
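Roughly what I'm picturing: each node exports its /backup disk as a brick and they all join one volume. A sketch of the creation step (node names pve1..pve10, the volume name, and the replica 2 setting are all placeholders I made up; the loop just builds the brick list and prints the command rather than running it):

```shell
# Build the brick list for 10 hypothetical nodes (pve1..pve10), each
# exporting /backup/brick.
bricks=""
for i in $(seq 1 10); do
    bricks="$bricks pve$i:/backup/brick"
done

# Printed rather than executed; run the real command on one node after
# probing the peers (gluster peer probe pveN).
echo gluster volume create backupvol replica 2 $bricks
echo gluster volume start backupvol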
As I mentioned originally, I migrated them off to the other nodes and kept moving them around until I had everything reformatted and reinstalled. It worked just fine (you can migrate to nodes that aren't in your cluster) - it just took a very long time to complete and I was hoping not to have...
No, my old cluster is gone (I have physically reformatted all of the boxes).
So... After a little experimenting and reading through the PVE libraries, I found that the VPS configuration file *IS* stored in the backup archive (as I had expected), at /etc/vzdump/vps.conf
So I am going through each...
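To pull the config out without restoring the whole backup, tar can extract a single member to stdout. A sketch against a stand-in archive (the demo fixture is just so this runs anywhere; with a real vzdump tar you'd skip straight to the tar -xOf line, and the member path ./etc/vzdump/vps.conf is where I found it):

```shell
# Stand-in fixture so the sketch is self-contained.
mkdir -p demo/etc/vzdump
echo "ONBOOT=yes" > demo/etc/vzdump/vps.conf
tar -C demo -cf vzdump-demo.tar ./etc/vzdump/vps.conf

# Extract just the container config to stdout.
tar -xOf vzdump-demo.tar ./etc/vzdump/vps.conf
# prints: ONBOOT=yes
```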
Ok, first - before I get chewed out for doing this, I know what I did was wrong and I will not be doing it again... at least not by accident.
So what I did was rebuild all of my machines, destroying my old cluster, migrating machines to new hosts as needed to reformat and reinstall fresh on the other...
Ok, I think I was able to accomplish what I was looking to do; I had to patch /usr/share/perl5/PVE/DAB.pm as follows:
*** DAB.pm.orig 2013-07-22 00:43:07.000000000 -0400
--- DAB.pm 2013-12-27 07:30:41.000000000 -0500
***************
*** 377,383 ****
if $arch !~ m/^(i386|amd64)$/...
Is there some way to have DAB use the testing version of Debian while building? I understand this may come with some inherent bugs...
Right now I am building Wheezy and copying my own /etc/rc.local startup script that updates the system to testing on first boot and
then removes itself. For 1...
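The self-removing part looks roughly like this (written against a scratch file in /tmp so it's safe to run as-is; the actual upgrade-to-testing commands are my approximation and are commented out here):

```shell
# Demo of a run-once rc.local: do the work, then delete yourself.
cat > /tmp/rc.local.demo <<'EOF'
#!/bin/sh
# One-time upgrade to testing (commented out in this demo):
# sed -i 's/wheezy/testing/g' /etc/apt/sources.list
# apt-get update && apt-get -y dist-upgrade
# Remove this script so it only ever runs once.
rm -- "$0"
exit 0
EOF
chmod +x /tmp/rc.local.demo
/tmp/rc.local.demo
ls /tmp/rc.local.demo 2>/dev/null || echo "script removed itself"
# prints: script removed itself
```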
I've been having this same problem for years now without a solution; now I understand I need to increase my snapshot size. What is the default snapshot size? What is the largest I can make it with the default Proxmox installation, and/or is there some way of seeing how much space is left for...
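For reference, the snapshot size vzdump uses for LVM backups can be set in /etc/vzdump.conf; if I remember right the default is 1024 MB, but check `man vzdump` on your version. Something like this (4096 is just an example value):

```
# /etc/vzdump.conf
# LVM snapshot size in MB for backups (example value, not a recommendation)
size: 4096
```

As far as I know the practical upper bound is the free space left in the volume group - `vgs` shows a VFree column, which is effectively the room a snapshot has to grow into.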
Re: Proxmox 2.3 Networking Problem - OpenVZ Container loses networking after some pe
Turns out it was just as I had expected... There WAS another device on the network using my IP address; it just wasn't a server, so it was harder to find. It was my smart gigabit Ethernet switch....
Oi...
I was running Proxmox 2.1 and I have several containers running Minecraft servers, but one of them in particular loses connectivity to the outside world periodically (often enough to be troublesome, let's say within 1 hour of it booting).
It's driving me nuts.. My first thought was...
I'm trying to install Solaris 11 as a KVM guest on our Proxmox server. I've reached the point in the installation where it says to press F2 to continue; if you do not have an F2 key or can't press it, press ESC and use the arrow keys to select.
I've tried F2 and ESC and neither one of those seem to...
I had 3 nodes up and running just fine. I added two new nodes, which seemed
to work just fine. I migrated some servers to the 4th and 5th nodes, and
everything ran fine... For about 2 hours, then the 5th node started having
problems with its hard drive. I tried migrating the server, but it...
No, actually, I never got the speeds any higher on this, I just ran with it and chalked it up to purchasing the incorrect drive for this. The drives I purchased were the Crucial M4 512GB drive. I contacted Crucial about the problem and the consensus was that it's not designed for server...
Well, if e100 is doing it I must concede... However, I fail to understand how this could possibly yield good long-term results... I mean, sure - technically you COULD do RAID-6 on a single drive if you broke it up into 6 separate partitions and then treated each partition as a separate drive...