There are, afaik, multiple caches involved:
The cache in your VM's OS
The cache of the Ceph client (in this case librbd - see http://docs.ceph.com/docs/hammer/rbd/rbd-config-ref/)
The cache of your OSD daemon
The cache of your RAID controller
The cache of your physical drives
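If you want to tune the librbd layer, the relevant knobs sit in the [client] section of ceph.conf (see the rbd-config-ref link above). A minimal sketch; the values are illustrative, not recommendations:

[client]
rbd cache = true
# only switch to writeback after the guest sends its first flush
rbd cache writethrough until flush = true
rbd cache size = 33554432        # 32 MB cache per client
rbd cache max dirty = 25165824   # 24 MB of dirty data before writeback kicks in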
PS: there is a...
No idea if that works on Proxmox 4 or not. It's a guide for Proxmox 2 and I have never used the guide nor that version. It might work, it might not. Can't help you there.
If I were in your shoes, I'd do the following, based on the 7 nodes you have present:
Migrate your VMs away from 3 of those...
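For the migration part, a hypothetical sketch with the stock Proxmox CLI (VMID 100 and target node pve2 are made-up placeholders):

qm migrate 100 pve2 --online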
So, this is "working as intended" then.
It's a use case that never comes up in our production use.
It just came up on my private nodes, so I thought I'd mention it.
Thanks @spirit
Then cache tiering is probably not for you. And neither is erasure coding.
Cache tiering is what you use when you have lots of "slow" and a finite amount of "fast" storage media and you want to optimize the usage of said media.
Basically, data that is considered "hot" (has been used recently -...
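For reference, wiring up a basic writeback cache tier looks roughly like this; the pool names cold-pool and hot-pool are made up for illustration:

ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
ceph osd tier set-overlay cold-pool hot-pool
# the cache tier needs a hit set to track which objects are "hot"
ceph osd pool set hot-pool hit_set_type bloom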
Regarding your future plans:
Disclaimer: I have never worked with InfiniBand (only 10/40GBase-T).
Especially when using Ceph, consider defining both a public and a cluster network in your ceph.conf, and then using an openvswitch-based bond in balance-tcp mode over all your dedicated "Ceph network links". Why...
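A rough sketch of what that could look like; all subnets and NIC names below are made-up examples:

# ceph.conf
[global]
public network  = 10.10.10.0/24
cluster network = 10.10.20.0/24

# /etc/network/interfaces (Proxmox openvswitch style)
allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bridge vmbr1
    ovs_type OVSBond
    ovs_bonds eth2 eth3
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond1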
If you have battery backups for them, sure, go for it. If you have no battery backups, be sure to turn the cache off. That should be doable on every Areca RAID controller; at least I can do it on my old Areca ARC-1231ML 12-port I have at home, and on the 16-port ones we have at work.
If you do not have the...
On OVH, afaik, you create a virtual MAC in their backend and assign it to your failover IP.
Then in Proxmox you create a new vNIC on your VM that uses your host's vmbrX, and assign it the vMAC from the OVH control panel.
That's about it.
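On the Proxmox side that boils down to something like this (VMID 100 and the MAC are placeholders; use the vMAC that OVH generated for you):

qm set 100 --net0 virtio=02:00:00:AA:BB:CC,bridge=vmbr0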
For more, check the OVH wiki/FAQ on this. It used to be...
Well, kinda. I'm assuming e100 has less than 32 GB of RAM per node (as those are benchmark examples from my 3-node cluster with <= 32 GB). On top of that, the read values are what I'd expect with 7 HDD-based OSDs on 3 nodes, where the single-OSD benchmarks are what he posted.
Sure, you can probably...
Based on your rados bench results, your issue is not the Ceph subsystem (although adjusting primary-affinity as per post #11 will give you some more performance in that regard). It is what I'd expect to see based on your described config.
I'd look at anything that is not directly ceph related...
What type of pool are we talking about? Replicated with size = X, or erasure coded?
Have you tried a deep scrub, to possibly find more issues?
ceph osd deep-scrub osd.x
Which OSDs is said PG sitting on?
ceph pg map 1.6c
Have you tried marking the acting OSD as down and out?
ceph osd down osd.x...
That is a bad idea, btw. That way Ceph has no way to find errors on its own (they happen mostly during unscheduled restarts, or when your drives have a bad sector that sometimes does not get recognized as such by SMART, but also with RAM errors and/or controller - onboard, HBA or RAID - related issues...
I'd drop the primary-affinities of OSDs 12, 13 and 18 (those below 140 MB/s write/read) to 0.0.
Then run the benchmark again. You should see a slight improvement (but I doubt it's the major culprit).
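In case it helps, the commands for that would be along these lines (OSD ids taken from above):

ceph osd primary-affinity osd.12 0.0
ceph osd primary-affinity osd.13 0.0
ceph osd primary-affinity osd.18 0.0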
[global]
auth client required = none
auth cluster required = none
auth...
Totally overlooked this. Have you tried to do a synthetic benchmark of your pool? As in from outside your VM, to see if it's a Ceph issue or a VM issue?
Example:
rados bench -p P12__HDD_OSD_REPLICATED_R3 450 write --no-cleanup
rados bench -p P12__HDD_OSD_REPLICATED_R3 450 seq
rados -p...
Afaik (don't nail me on this, I have never tested it), the Proxmox Ceph GUI parses a mon sitting on a node inside your Proxmox cluster. So you'd have to install an Infernalis-based mon on at least one Proxmox node, connected to your external Ceph cluster.
I'd check whether a standard "Hetzner server" has the same ports listening as your "hundreds of working servers". Then verify that all those ports on your Hetzner server can actually be reached from the outside world.
That would tell you whether Hetzner does anything to block your requests.
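A quick way to do both checks (the IP and port list below are placeholders):

ss -tlnp                            # on the Hetzner server: which ports are listening
nmap -Pn -p 22,80,443 203.0.113.10  # from an outside machine: can they actually be reached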