Hi,
has anyone tried Sheepdog so far?
http://www.osrg.net/sheepdog/
It sounds very interesting; it would be nice if it were usable with Proxmox.
What is your opinion about it?
I am looking forward to this release; apart from some small issues, Proxmox 1.5 is already very good.
Is there a chance that Proxmox 2.x will be released in the next few weeks, or do we have to wait months? An update about the progress would be nice :-)
Is this still true with the Proxmox 2.6.32 kernel? Do I have to compile a new driver for this hardware?
I am not sure, but I think this happened on my host: the network became flaky under load.
Many times so far I have gotten error messages in my cluster which are not critical.
These error messages then appear almost everywhere instead of the content I wanted.
Error messages should be displayed somewhere in addition to the content, not instead of it.
For example, I created a storage...
I did this before doing anything in the web interface. Later I noticed that I have to restart the pve daemon so that Proxmox recognizes that the open-iscsi package is installed.
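What I mean is roughly this order (just a sketch, package and service names as on Debian Lenny):

aptitude install open-iscsi
/etc/init.d/open-iscsi start
/etc/init.d/pvedaemon restart   # so the web interface notices that open-iscsi is there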
Now I have the problem that I get an error where people say that the iscsi packages are too old for my kernel.
But I stop now my...
With which network and disk settings did you get the best performance?
I could not find good results comparing the different possible choices, so I would like to ask about your experiences.
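To be clear, the choices I am comparing are roughly these (plain KVM syntax just to illustrate, not the Proxmox config):

kvm -m 1024 -drive file=vm.raw,if=virtio -net nic,model=virtio -net tap   # paravirtualized disk + nic
kvm -m 1024 -drive file=vm.raw,if=ide -net nic,model=e1000 -net tap       # emulated IDE disk + e1000 nic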
So far I would choose the virtio network and disk drivers and the e1000 NIC. But I want to clarify...
I tried the following:
node:~# /etc/init.d/pvedaemon restart
and now I get this error when clicking on the storage link in the web GUI:
[2788]ERR: 24: Error in Perl code: 500 read timeout
When I tried to scan an iSCSI target on the master, after installing the open-iscsi package and starting the open-iscsi service, an error appeared saying that I don't have open-iscsi installed.
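By "scan an iSCSI target" I mean something like this (the portal IP is just an example):

iscsiadm -m discovery -t sendtargets -p 192.168.0.10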
Then I tried to add the target myself and hit save.
After this, I always get the following internal error...
How do I remove the old master?
Do I have to execute the following command on every slave:
pveca -d MASTERID
and then execute
pveca -m
on the new master?
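So, if I understand it right, the whole sequence would be something like this (just my guess, MASTERID being the cluster ID of the old master):

pveca -d MASTERID    # on every slave: remove the old master from the cluster
pveca -m             # on the node that should become the new master
pveca -l             # check the cluster node list afterwards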
It would be nice to see this issue explained in the wiki on the cluster page.
This would be a nice feature, I think: being able to have multiple pairs in your cluster with DRBD.
If you could then create these DRBD volumes with the web GUI, it would be perfect!
Hopefully we will see something like this on the roadmap sometime.
What should I do when the cluster master fails?
I saw that you can turn a slave node into a master, but what happens when the old master comes back into service?
Is it possible to put the master into an automatic failover environment?
Thank you for your reply.
So I have to configure the DRBD volumes to be accessible by all nodes, especially the master, via iSCSI/NFS, and then add an LVM volume group?
That would add another level of complexity to the storage.
Right now I have 4 HDDs -> RAID10 -> LVM -> DRBD -> LVM -> PVEs.
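To make that layering concrete, the top LVM layer sits directly on the DRBD device, roughly like this (device and VG names are just examples):

pvcreate /dev/drbd0
vgcreate pve-drbd /dev/drbd0   # this VG is what the PVE guests end up on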
I have no...
It happened on three servers; I followed the howto step by step. Then I let the web GUI configure the bridge and pressed reboot. After the reboot the gateway line was missing.
Before the reboot, the "show changes" function showed the correct result, with the gateway line included.
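For reference, what I expect in /etc/network/interfaces after the change is something like this (addresses are just examples); it is exactly the gateway line that is gone after the reboot:

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0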
I could not find a place to report bugs, so I will try it here.
On three different nodes running Proxmox 1.5 with the 2.6.32 kernel, the following happened:
After installing with this howto:
http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
I enabled the bridge, and it showed me the right changes...
How do I add an LVM volume group on a slave cluster node?
I have a 3-node cluster setup:
- N1 master
- N2 slave
- N3 slave
N1 and N2 have a shared DRBD device running in primary/primary mode, used as an LVM volume group.
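For context, this is roughly how it looks on the two DRBD nodes (device and VG names are just examples):

cat /proc/drbd     # should show Primary/Primary on N1 and N2
vgs                # the VG on top of /dev/drbd0 is visible on N1 and N2 only; N3 has no access to it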
On the master (N1) I could add the LVM VG easily and it works fine, even KVM live migration...