Hi,
has anyone tried Sheepdog so far?
http://www.osrg.net/sheepdog/
It sounds very interesting; it would be nice if it were usable with Proxmox.
What are your opinions about it?
Many times so far I have gotten failure messages in my cluster which are not critical.
These failure messages then appear almost everywhere instead of the wanted content.
Failure messages should be displayed somewhere in addition to the content, not instead of it.
For example, I created a storage...
With which network and disk settings did you get the best performance?
I could not find good results comparing the different possible choices, so I would like to ask you about your experiences.
So far I would choose the virtio network and disk drivers, and the e1000 NIC. But I want to clarify...
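For reference, these driver choices map to KVM's device model options. A rough sketch of what the two variants look like on the KVM command line (the disk image path, VM id, and tap interface name are placeholders, not from my setup):

```shell
# virtio disk + virtio NIC (paravirtualized, usually fastest with guest drivers):
kvm -m 1024 \
    -drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=virtio \
    -net nic,model=virtio -net tap,ifname=tap101i0

# Same VM with an emulated e1000 NIC instead (works without virtio guest drivers):
#   -net nic,model=e1000 -net tap,ifname=tap101i0
```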
When I tried to scan an iSCSI target on the master after installing the open-iscsi package and starting the open-iscsi service, I got the error that open-iscsi is not installed.
Then I tried to add the target myself and hit save.
After this, the following internal error always appeared...
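In case it helps to narrow this down, the standard open-iscsi command-line workflow I was trying to reproduce looks roughly like this (the portal IP and target IQN are placeholders):

```shell
# Install and start the initiator (Debian Lenny style init script):
apt-get install open-iscsi
/etc/init.d/open-iscsi start

# Discover the targets offered by a portal (IP is a placeholder):
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to one of the discovered targets (IQN is a placeholder):
iscsiadm -m node -T iqn.2010-01.example:storage.target1 -p 192.168.1.50 --login
```

If `iscsiadm` works directly but the web interface still claims open-iscsi is missing, the problem is probably in how Proxmox detects the package rather than in the initiator itself.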
What to do when the cluster master fails?
I saw you can promote a node from slave to master, but what happens when the old master comes back into service?
Is it possible to put the master into an automatic failover environment?
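As far as I understand, Proxmox VE 1.x manages the cluster with the pveca tool. A sketch of the manual promotion (from memory; please verify the flags against `pveca` on your version):

```shell
# List the cluster nodes and their roles/state:
pveca -l

# On the slave you want to promote, force it to become the master:
pveca -m

# After the old master returns, sync its state against the new master:
pveca -s
```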
I could not find a place to report bugs, so I try it here.
On three different nodes running Proxmox 1.5 with the 2.6.32 kernel, the following happened:
After installing with this howto:
http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
I enabled the bridge, and it showed me the right changes...
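For context, the bridge setup from that howto boils down to an /etc/network/interfaces entry along these lines (addresses and interface names are placeholders for my actual values):

```
# /etc/network/interfaces (Debian Lenny / Proxmox VE)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
```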
How do I add an LVM volume group on a slave cluster node?
I have a 3-node cluster setup:
-N1 master
-N2 slave
-N3 slave
N1 and N2 have a shared DRBD device running in primary/primary mode as an LVM volume group.
On the master (N1) I could add the LVM VG easily, and it works fine, even KVM live migration...
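For anyone reproducing this, a minimal sketch of how the VG sits on top of DRBD (the device name /dev/drbd0 and the VG name are assumptions; adjust to your resource):

```shell
# On one node of the primary/primary pair, put LVM on the DRBD device:
pvcreate /dev/drbd0
vgcreate drbdvg /dev/drbd0

# On the other node, the shared VG should show up after a rescan:
vgscan
vgs
```

The question is how to make the web interface on the slave pick up such a VG the same way the master does.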