Ceph performance and latency

Hello Udo,

I want to purchase a SAS/SATA card to test instead of our older 3ware/LSI cards.

Can you give a suggestion or two for Areca controller series or models? I see quite a few different ones on the Areca homepage.

Also, is there a decent GUI for Areca?

Hi Rob,
sorry for the delay. I have tested an ARC-1680 with all disks in pass-through mode - the write cache is still used.

If I had to buy a new RAID controller I would go for the newer 18xx line.
But for Ceph you only need a SAS expander and no RAID controller (the price difference is enough to buy a better SSD for caching).

Areca also has a SAS expander card, but unfortunately there are driver problems with Linux (I tested one card without luck).

Udo
 
with the ARC-1680 (and ARC-1220) the cache is also used for pass-through drives.
Are you sure? In pass-through mode all commands are (almost) directly forwarded from the OS to the physical disk. How can they (or some subset of them) be cached? The controller is (almost) just a PHY. Maybe you mean single-drive volumes?
The controller's manual also does not show such a feature - just enabling/disabling the disk cache and the volume cache. But you can control the drive's own cache on any SAS/SATA controller, and a volume cache can be controlled on any controller that has one onboard.
Also, it's first-generation 3 Gbps SAS. Are you kidding? Use it now, when 12 Gbps controllers are available?
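
For reference, the drive's own write cache can be checked and toggled from Linux regardless of the controller - a sketch, assuming /dev/sda is a SATA disk and /dev/sdb a SAS disk:

Code:
# SATA/ATA disk: query the write cache setting, then enable it
hdparm -W /dev/sda
hdparm -W1 /dev/sda

# SAS/SCSI disk: query and set the WCE (write cache enable) bit
sdparm --get=WCE /dev/sdb
sdparm --set=WCE /dev/sdb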
 
I've tried to add the XFS mount options to my ceph.conf as mentioned above (latest PVE 3.3):

Code:
[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring
         osd crush update on start = false
         osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"
         osd_op_threads = 4
         osd_disk_threads = 4

After those changes I remounted all the OSDs, but nothing changed:

Code:
# cat /proc/mounts
...
/dev/sdc1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sdd1 /var/lib/ceph/osd/ceph-1 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sde1 /var/lib/ceph/osd/ceph-2 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sdf1 /var/lib/ceph/osd/ceph-3 xfs rw,noatime,attr2,delaylog,noquota 0 0

I have tried to restart Ceph on all the nodes - no luck.

Have I missed something, or has the mount behavior changed in the latest Ceph?
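
One way to see which mount-option string the running OSD daemon has actually picked up is the admin socket (a sketch, assuming osd.0 and the default admin socket path):

Code:
ceph daemon osd.0 config show | grep osd_mount_options_xfs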

Thanks in advance
 
Hi,
how did you remount? The mount options in ceph.conf should work when Ceph mounts the OSD volumes during startup (with some udev magic).
I don't know if the Proxmox Ceph integration changes anything there.
E.g. there are no fstab entries for the OSD volumes.

But as a workaround you can remount manually with
Code:
mount -o remount,rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M /dev/sdc1 /var/lib/ceph/osd/ceph-0
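
If all OSDs on the node need the same options, a small loop saves typing (a sketch, assuming the OSD IDs 0-3 shown in the output above; for a remount the mount point alone is enough):

Code:
for i in 0 1 2 3; do
    mount -o remount,rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M /var/lib/ceph/osd/ceph-$i
done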
Udo
 
Yes, I tried to remount with exactly the same command:

Code:
mount -o remount,rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M /dev/sdc1 /var/lib/ceph/osd/ceph-0

Code:
# cat /proc/mounts
....
/dev/sdc1 /var/lib/ceph/osd/ceph-4 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sdd1 /var/lib/ceph/osd/ceph-5 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sde1 /var/lib/ceph/osd/ceph-6 xfs rw,noatime,attr2,delaylog,noquota 0 0
/dev/sdf1 /var/lib/ceph/osd/ceph-7 xfs rw,noatime,attr2,delaylog,noquota 0 0

All the OSDs are SSD drives, 512 GB (486 GB) each. Could that be the issue?
 
If you restart the OSDs with /etc/init.d/ceph restart osd, it does not umount /var/lib/ceph/osd/ceph-0, so the new mount options are not applied.

You can do

Code:
/etc/init.d/ceph stop osd
umount /var/lib/ceph/osd/ceph-*
/etc/init.d/ceph start osd

or do a remount manually:

Code:
mount -o remount,....
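
Either way, it is worth checking afterwards that the new options are really active:

Code:
grep /var/lib/ceph/osd /proc/mounts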
 
Hi there, this is my first post. I would like to get an opinion on the performance of my setup. I have a Dell C6100 with three nodes. Every node has 4 x 1 TB SATA HDDs (OSDs), 1 x 1 TB SATA disk (Proxmox install), 1 Intel S3500 120 GB SSD (journal), 48 GB RAM, 2 x E5620 CPUs, an Intel X520-DA2 network card and no RAID controller.

For the network, Proxmox runs on a 1 Gbit network configured with a round-robin bond over 2 cards (iperf: 1.88 Gbit/s), while Ceph has a separate 10 Gbit fibre network using a Quanta LB6M switch (iperf: 9.41 Gbit/s).

I created a pool with 2 replicas and 512 placement groups. Using "rados bench -p test 10 write --no-cleanup" I get about 200 MB/s average bandwidth, and testing the read speed I get about 1600-1700 MB/s. Are these numbers reasonable? I don't have a precise reference, but comparing the same VM on local storage I can see that in some workloads Ceph takes about twice as long. Can tuning the setup (inode64 or other tricks) significantly change the performance of this hardware? Thanks in advance.
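
For reference, a typical benchmark sequence looks like this (a sketch - the pool name "test" comes from the post above, the runtime and thread count are just examples, and the rados cleanup subcommand may not exist in older releases):

Code:
# write test; keep the objects so they can be read back afterwards
rados bench -p test 60 write --no-cleanup -t 16

# sequential read test against the objects written above
rados bench -p test 60 seq -t 16

# remove the benchmark objects when done (newer rados versions)
rados -p test cleanup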
 
