After poking around with what look like bugs in Proxmox (or at least a workflow that differs from plain Ceph), such as creating OSDs that don't appear in the GUI ... I moved the whole Ceph setup to my 10G network ... which means the public part as well.
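For reference, where the public and cluster traffic run is controlled in ceph.conf; a minimal sketch, assuming the 10G subnet is 10.10.10.0/24 (the subnet here is a placeholder, not taken from the post):

```ini
[global]
    # Clients, monitors and OSDs all talk on the public network
    public_network = 10.10.10.0/24
    # OSD replication/heartbeat traffic; set to the same subnet (or omit)
    # to run everything over one network, as described above
    cluster_network = 10.10.10.0/24
```

Pointing both options at the same subnet is what "moving the whole Ceph to the 10G network" amounts to in configuration terms.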
The various Ceph books I read, and also the Proxmox...
I replaced my switch and got much higher bandwidth:
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_server-1_12493
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0...
Now I'm curious... I switched on all the caches of my RAID controller and plugged in a 1G switch for the public network... look what I got:
rados bench -p VM 10 rand --no-cleanup
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)...
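For anyone trying to reproduce these numbers: the output above comes from rados bench. A typical sequence, assuming a pool named VM as in the command above (a write pass must run with --no-cleanup first, so the read pass has objects to fetch):

```shell
# Write 4 MiB objects for 10 seconds with 16 concurrent ops; keep the objects
rados bench -p VM 10 write --no-cleanup

# Random-read benchmark against the objects written above
rados bench -p VM 10 rand

# Remove the benchmark objects when done
rados -p VM cleanup
```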
@Alvin
I know that diagram, but I'm not sure what you're trying to tell me.
@udo
Yes, I have enabled the performance mode
@all
Here is the output from a 3-tier Ceph cluster without a separate cluster network, only 1G NICs, consumer-grade HDDs (1 OSD per node), and each server is on a...
I have to use RAID-0, but I disabled the cache. The CPU info in my first post is misleading... actually each server has 2 CPUs, and each CPU has 8 cores; with HT that's 32 cores per server... should be more than enough. The kernel's CPU governor is set to performance.
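The governor setting mentioned above can be checked per core via sysfs; a minimal sketch, assuming a standard cpufreq sysfs layout:

```shell
# Show the current scaling governor of every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Switch all cores to the performance governor (run as root)
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```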
rados bench -p CePH-VM 10...
Hello
I have some performance issues with my Proxmox cluster.
I use 3× Dell R620 servers with 2× 2.6 GHz Xeons, and each server has its own dual-port 10G third-generation Mellanox NIC for Ceph with an MTU of 9000; the 3 servers are directly connected (no 10G switch for Ceph). The setup is simple (used as a...
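With three directly connected nodes and an MTU of 9000, it's worth confirming that jumbo frames actually pass end to end; a sketch, assuming the Ceph-facing interface is named ens1f0 and a peer sits at 10.10.10.2 (both names are placeholders, not taken from the post):

```shell
# Verify the MTU on the Ceph-facing interface
ip link show ens1f0 | grep mtu

# Ping with a full 9000-byte frame: 8972 payload = 9000 - 20 (IP) - 8 (ICMP),
# with the don't-fragment flag set so an MTU mismatch fails loudly
ping -M do -s 8972 -c 3 10.10.10.2
```

If the ping fails with "message too long", some hop (or one of the directly cabled peers) is not actually configured for jumbo frames.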