Hello,
We have a 6-node PowerEdge R730 cluster running Ceph with 8 OSDs per node and KVM VMs.
I started upgrading to v4.3 yesterday, and on any node reboot all nodes in the cluster go down.
I get notified by the iDRAC that the watchdog timer has expired.
I've tried increasing the timeout in...
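In case it helps, this is roughly where I've been poking at the watchdog side; the module name and the 30-second value are my own assumptions, not a confirmed fix:
# check which watchdog module the HA stack is set to load
cat /etc/default/pve-ha-manager
# assumed tweak: raise the IPMI watchdog timeout (seconds) via a module option
echo "options ipmi_watchdog timeout=30" > /etc/modprobe.d/ipmi_watchdog.conf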
Hello ScOut3R,
I've used SSD journals without CacheCade and it didn't help with recovery.
I think this is because the cluster's normal I/O load is already high even without recovery going on.
On the new nodes with CacheCade I see no performance increase from using an SSD for the journal.
Inktank does not recommend using RAID1...
Hello,
So the time has arrived to upgrade our ceph cluster because of degrading I/O performance.
I believe we've stretched our 6 OSDs quite enough :)
Huge problem!
When I added a new OSD into the mix, the cluster immediately started doing backfill and recovery on placement groups in order to...
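In the meantime these are the recovery throttles I've been looking at; the values are just examples, not tested recommendations:
# limit how aggressively OSDs backfill/recover so client I/O survives
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-recovery-op-priority 1'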
Hello again,
Just tried this and it worked:
root@pm02:~# aptitude purge multipath-tools
The following packages will be REMOVED:
kpartx{u} multipath-tools{p}
0 packages upgraded, 0 newly installed, 2 to remove and 9 not upgraded.
Need to get 0 B of archives. After unpacking 745 kB will be...
Hello everyone,
I'm trying to set up Ceph storage on my 4-node cluster.
The problem is I cannot use the disks.
In the GUI they are marked as in use for LVM.
In the CLI it just outputs this:
root@pm02:~# pveceph createosd /dev/sdk -journal_dev /dev/sdj
device '/dev/sdk' is in use
root@pm02:~#
I've...
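What I'm planning to try next, unless someone tells me otherwise; the device names match my example above and the wipe steps are destructive:
# see which LVM signatures claim the disk
pvs
lsblk /dev/sdk
# remove stale signatures and zap the disk before re-running pveceph createosd (destroys data!)
wipefs -a /dev/sdk
ceph-disk zap /dev/sdk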
Re: Proxmox 3.2 with 3-nodes cluster and ceph server: running openvz/containers possi
@Ruben
I ran across the same issue. My solution is simple:
4 client nodes with kernel 3.x (only supports KVM)
3 ceph nodes.
4 KVM instances on RBD storage which run Proxmox.
I've clustered the 4 KVM...
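For reference, creating one of those nested Proxmox guests looked roughly like this; the VMID, sizes and the storage name 'ceph-rbd' are placeholders:
qm create 201 --name pve-nested1 --memory 8192 --sockets 1 --cores 4 \
  --net0 virtio,bridge=vmbr0 --virtio0 ceph-rbd:64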
Hello dietmar,
Any new developments on this topic?
I tested thin provisioning on LVM shared storage on an iSCSI LUN and it seems to work.
Maybe I'm missing something.
Can you explain in a bit more detail why this is a blocker?
I wouldn't want to hit the same dead end.
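For reference, my test was roughly the following; the VG name and sizes are made up:
# thin pool on the shared VG, then a thin volume carved out of it
lvcreate --type thin-pool -L 200G -n thinpool iscsi-vg
lvcreate -V 32G --thinpool iscsi-vg/thinpool -n vm-101-disk-1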
Best regards,
Marius
Hello,
I'm not familiar with Hetzner, but it looks like you have 5.9.76.16/29 either directly connected or routed to 5.9.61.70.
On the VMs, try configuring the IP with netmask 255.255.255.248 and gateway 5.9.76.17.
Most definitely the subnet is routed and the gateway won't work, so you need to...
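If the block does turn out to be usable directly, the VM config from the first suggestion would look roughly like this (Debian-style /etc/network/interfaces; the .18 address is just an example host from the block):
auto eth0
iface eth0 inet static
    address 5.9.76.18
    netmask 255.255.255.248
    gateway 5.9.76.17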
Hi didraro,
Please execute "ip a" on the physical server and on two of the VMs.
Please replace the first three octets of your public IP addresses with something like 10.1.2.x for security reasons.
Sounds like a subnet mask misconfiguration.
Hello,
http://www.cacti.net/ or http://cactiez.cactiusers.org/ for easy setup
Cacti has a very active community and it offers many possibilities for monitoring. I personally use the Percona templates for all my servers and VMs: http://www.percona.com/doc/percona-monitoring-plugins/1.1/
These are a...
Hello guys,
I'm having some issues with a fresh cluster.
I've set up a 3-node cluster using Supermicro servers and iSCSI/LVM storage.
Fencing seems to be working fine.
KVM live migration works fine until I configure HA on the VMs. Something is wrong and I cannot seem to locate the problem...
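In case it helps, these are the commands I've been using to look at the HA side; the vmid in the cluster.conf snippet is just a placeholder:
pvecm status        # quorum / cluster membership
clustat             # rgmanager view of the HA-managed VMs
fence_tool ls       # fence domain membership
grep pvevm /etc/pve/cluster.conf    # HA entries look like <pvevm autostart="1" vmid="101"/>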