Sadly no one could help. :-(
Switched back to classic Linux VLANs and bridge-utils.
Now everything is working as expected.
I thought I'd go with something more comfortable and fresh, but just too many issues arose.
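For reference, the classic VLAN-plus-bridge setup mentioned above usually looks something like this in /etc/network/interfaces on a Debian-based host (a minimal sketch; the interface names, VLAN ID, and addresses below are assumptions, not the poster's actual config):

```
# Requires the vlan and bridge-utils packages.
# VLAN 100 tagged subinterface on eth0 (names/IDs assumed):
auto eth0.100
iface eth0.100 inet manual
        vlan-raw-device eth0

# Bridge carrying that VLAN to the guests:
auto vmbr0
iface vmbr0 inet static
        address 192.168.100.2
        netmask 255.255.255.0
        bridge_ports eth0.100
        bridge_stp off
        bridge_fd 0
```

With this layout each VLAN gets its own subinterface and bridge, which the kernel bridge code handles without any Open vSwitch involvement.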
Hello everyone,
for HA of some services I'm trying to set up two pfSense firewalls on two different hosts
which are connected via a vRack at OVH.
The network is configured on top of Open vSwitch with several VLANs, which are working great between the cluster nodes, except for the CARP of the...
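For context, the kind of Open vSwitch layout described (one OVS bridge with tagged internal ports per VLAN) typically looks like this in /etc/network/interfaces on Proxmox; the interface names, VLAN tag, and addresses here are assumptions for illustration:

```
# OVS bridge with the physical NIC and one tagged internal port (names assumed):
allow-ovs vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0 vlan100

allow-vmbr0 eth0
iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

# Internal port carrying VLAN 100 for the host:
allow-vmbr0 vlan100
iface vlan100 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=100
        address 192.168.100.2
        netmask 255.255.255.0
```

Guest NICs attached to vmbr0 then get their VLAN tag from the VM configuration.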
Re: Ceph - Bad performance with small IO
Thanks phildefer for the link and for clarifying the rbd-cache question.
Changing the cache-setting to writeback and tuning debug and sharding really helped a lot:
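The exact values are not quoted in the post; a hedged sketch of ceph.conf changes of that kind (the option values below are illustrative assumptions, not the poster's settings):

```
[client]
# rbd cache = true enables the librbd writeback cache
rbd cache = true
rbd cache writethrough until flush = true

[osd]
# Silence per-subsystem debug logging:
debug ms = 0
debug osd = 0
debug filestore = 0
# Sharding of the OSD op queue (availability/name may vary by release -- assumption):
osd op num shards = 5
```

Changes like these take effect after restarting the affected daemons (or reconnecting clients, for the [client] section).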
Thanks to everyone! :)
@spirit or maybe someone else can answer:
To further enhance...
Re: Ceph - Bad performance with small IO
Yeah well, that sure increases performance.
The hosts are connected to UPSs, but how safe is it?
Does this use the rbd cache or the host's RAM?
Probably the same thing, but just to clarify.
Hello onedread,
why don't you go with just a simple NAS server running openmediavault on an AMD AM1 platform?
For example:
http://geizhals.de/at/asus-am1i-a-90mb0ia0-m0eay0-a1080718.html?hloc=de
http://geizhals.de/at/?cat=cpuamdam1&xf=5_AES#xf_top
All the features except 6 are possible with omv and the...
Re: Ceph - Bad performance with small IO
There are some really good insights in this thread: http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-August/042498.html
But disabling cephx left me with an unusable cluster.
Had to revert the settings to get back to a working state.
Do I need to...
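For context, disabling cephx is normally done with settings like the following in ceph.conf (a sketch, not the poster's file); it has to be identical on all nodes and all daemons and clients restarted together, which may be why the cluster became unusable:

```
[global]
auth cluster required = none
auth service required = none
auth client required = none
```

Reverting simply means setting all three back to cephx and restarting again.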
Re: Ceph - Bad performance with small IO
I reverted the settings to default and there is no big difference.
The min/max sync intervals were an experiment. I read about this on the ceph-users mailing list,
but obviously it didn't help much; it was meant for a much bigger cluster.
The strange thing really is...
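The sync-interval experiment presumably refers to settings like these in ceph.conf (the values here are illustrative assumptions; the defaults are much lower):

```
[osd]
filestore min sync interval = 10
filestore max sync interval = 30
```

Larger intervals batch more writes between filestore syncs, which can help big clusters but also lengthens the burst of work each sync has to flush.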
Re: Ceph - Bad performance with small IO
Just finished defragmenting the OSDs and remounting them with the new mount options.
Now the fragmentation factor is at most 0.34% across all OSDs.
Also injected the new setting filestore_xfs_extsize.
But still no improvement :-(
Just to make sure...
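As a sketch, the remount and the persisted setting described above could look like this (the device path, mount point, and mount options are assumptions; the runtime injection would be done with `ceph tell osd.* injectargs`):

```
# /etc/fstab -- XFS mount options for one OSD (device and path assumed):
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime,inode64  0  0
```

```
# ceph.conf -- persist the injected setting across restarts:
[osd]
filestore xfs extsize = true
```

filestore_xfs_extsize asks XFS to preallocate extents for filestore objects, which is meant to reduce future fragmentation rather than fix existing slowness.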
Re: Ceph - Bad performance with small IO
The client is also on Giant and there was some improvement, but the performance of small IO is still really bad.
For example, in an ATTO benchmark inside a Windows guest it seems to hit a limit?
I've been reading Sébastien Han's blog extensively - lots of...
Re: Ceph - Bad performance with small IO
Hi Udo,
thanks for your fast reply!
I use Ceph Giant - version 0.87.1.
The journals are symlinked to LVM volumes on Crucial M500 SSDs in RAID1.
Is this then still file-based?
The read ahead cache is already set to a higher value. If you mean:
blockdev...
Hello everyone,
first of all I want to say thank you to each and every one in this community!
I've been a long-time reader (and user of PVE) and have gotten so much valuable information from this forum!
Right now the deployment of the Ceph Cluster gives me some trouble.
We were using DRBD but...