Search results for query: bcache

  1. A

    Ceph BlueStore - Not always faster than FileStore

    Massive post! Should be sent to the ceph-users mailing list. Also, I really am aching to enable bcache to see those write improvements! PS. Mind telling us how you disabled the 4MB feature? DS. It didn't take too long for us to realise that bcache comes from a time when SSDs were fast at...
  2. A

    PVE Ceph- should I use bcache?

    Other than a Ceph cache tier, I haven't really seen any improvement to speeds. So I will definitely try out bcache! Thanks so much for this!
  3. D

    PVE Ceph- should I use bcache?

    A comparison of without bcache to with: [benchmark figures elided: no bcache w/100GB NVMe journal; 200GB bcache w/20GB NVMe journal] Higher block size on the VM drives will net much higher sequential write performance (I was seeing numbers over 400MB/s with 16K sectors).
  4. D

    PVE Ceph- should I use bcache?

    FYI, I got bcache working. Significant improvements in overall cluster write performance (more than double in my VM's CrystalDiskMark benchmarks). Reads were pretty much unaffected. And the load on the nodes is still minimal. The highest CPU usage I've seen is just over 20% on all nodes with...
  5. D

    Ceph BlueStore - Not always faster than FileStore

    @David Herselman, just curious whether you see a major performance increase using bcache versus not. I'm in the process right now of building a PoC for our company, just a small cluster of 3 nodes, but it will be 5 in production. I'm seeing 'slow requests blocked' errors every now and then when running...
  6. U

    PVE Ceph- should I use bcache?

    ...Ceph cluster). And 16GB for a Ceph OSD node is much too little. I haven't understood how many nodes/OSDs you have in your PoC. About your bcache question: I have no experience with bcache, but I would use Ceph as it is. Ceph is completely different from normal RAID storage, so every...
  7. D

    PVE Ceph- should I use bcache?

    ...that FileStore gives better overall performance (even with a 100GB NVMe journal on both config types). I've read a few articles stating that bcache can improve performance, namely fixing the 'slow requests' errors that can occasionally pop up. I currently have the PVE/Ceph cluster as a storage...
  8. Z

    New Installation on 2 HP-Proliant DL380G7 - MSAP2000G3

    ...kernel: [502468.998692] device-mapper: table: 253:6: multipath: error getting device
    May 15 11:15:50 PREF33-S-PMOX1 kernel: [502468.998801] device-mapper: ioctl: error adding target to table
    I found this in /var/log/kernel.log. I configured this in lvm.conf: types = [ "bcache", 253 ] - same problem.
  9. J

    New Installation on 2 HP-Proliant DL380G7 - MSAP2000G3

    Hello Zaqen, it's very strange. Are you sure the definitions of the hosts and host mappings in the MSA are correct? Maybe they are read-only mappings, or the volume's settings on the MSA do not allow writing to it? I would suspect the configuration of the SAN switches - zoning? I think that is not the...
  10. Z

    New Installation on 2 HP-Proliant DL380G7 - MSAP2000G3

    I put types = [ "bcache", 253 ] in the devices section of lvm.conf. Again and again, always the same error:
    root@PREF33-S-PMOX1:/dev# pvcreate /dev/mapper/3600c0ff0001ae601d4d8b05a01000000
    Device /dev/mapper/3600c0ff0001ae601d4d8b05a01000000 not found (or ignored by filtering)...
  11. LnxBil

    Proxmox + LVM cache

    Several years ago, I tried bcache and flashcache, and both worked fine in write-back and write-through mode. The performance increase was huge. A side effect was that after each reboot the cache needed to be synced, and performance was very, very bad until that finished. Besides that, it worked -...
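    For reference, the write-back/write-through choice mentioned above is exposed per device through sysfs in current bcache; a minimal sketch, assuming a device registered as bcache0:

      # list the available modes; the active one is shown in brackets
      cat /sys/block/bcache0/bcache/cache_mode
      # e.g. writethrough [writeback] writearound none

      # switch the caching policy (writeback stages writes on the SSD first)
      echo writeback > /sys/block/bcache0/bcache/cache_mode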
  12. F

    Optimizing proxmox

    ...different that breaks the mould. (i.e., in my case the 'mould' is stock Proxmox on machines with HW RAID, or Debian SW RAID, sometimes with a 'bcache' layer added for boosted performance, then adding Proxmox to Debian after the fact -- and this 'just works' so smoothly that -- the various...
  13. J

    New Installation on 2 HP-Proliant DL380G7 - MSAP2000G3

    ...pvcreate -vvvv /dev/mapper/3600c0ff0001ae601d4d8b05a01000000 If you find something like "Skipping: Unrecognised LVM device type 253", you should add the found magic number to /etc/lvm/lvm.conf (in the devices section): types = [ "bcache", 253 ]. Looking at your kernel.log, I suppose it is 253.
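    For context, the setting quoted in these posts lives in the devices section of /etc/lvm/lvm.conf. A minimal sketch of that stanza, using the exact values from the posts (per the lvm.conf man page, each pair is a device type name from /proc/devices plus a number):

      # /etc/lvm/lvm.conf (devices section)
      devices {
          # accept bcache block devices in addition to the built-in types;
          # "bcache" and 253 are the values quoted in the posts above
          types = [ "bcache", 253 ]
      }

    Rerunning pvcreate -vvvv as in the quoted post then shows whether the device is still being filtered out.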
  14. Alwin

    Calculating Journal Size - ceph

    @Jarek, bcache is a totally different hammer and is not really known to improve Ceph performance or to simplify the already complex setup. It should also be noted that data safety cannot be guaranteed with bcache. To spare yourself headaches and lessen complexity, I advise against the use of bcache. It is still...
  15. J

    Calculating Journal Size - ceph

    If you like the 'FileStore design with journal' performance, you need to set up bcache as mentioned somewhere on the forum. Moving the DB + WAL to SSD didn't improve write speed by a noticeable factor.
  16. D

    Ceph BlueStore - Not always faster than FileStore

    ...speed notes to: Fail OSDs 8, 9, 10 and 11 and ensure no placement groups are 'inactive', 'unfound' or 'unknown'. Destroy OSDs. Disassemble bcache block devices and destroy them. Repartition SSDs (3 x 60GB partitions, sized by 512-byte sector counts, e.g. (142606335-16777216+1)*512/1024/1024/1024 =...
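    The truncated arithmetic above is a partition's 512-byte sector count converted to GiB; evaluating the quoted expression in a shell confirms the 60GB figure:

      # (last sector - first sector + 1) sectors * 512 bytes, in GiB
      echo $(( (142606335 - 16777216 + 1) * 512 / 1024 / 1024 / 1024 ))
      # prints 60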
  17. D

    Ceph BlueStore - Not always faster than FileStore

    Herewith some of my speed notes and a nice reference: https://dshcherb.github.io/2017/08/12/ceph-bluestore-and-bcache.html Some additional notes: bcache in kernel 4.13 (PVE 5.1) requires a separate cache block device for each underlying block device. This appears to have been different in...
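    A minimal sketch of the one-cache-device-per-backing-device layout described above (the device names are placeholders, not taken from the post):

      # create a cache set and a backing device in one call; make-bcache
      # attaches them automatically when given both -C and -B
      make-bcache -C /dev/nvme0n1p1 -B /dev/sdb
      # kernel 4.13 needs a separate cache partition for each further HDD
      make-bcache -C /dev/nvme0n1p2 -B /dev/sdc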
  18. D

    Ceph BlueStore - Not always faster than FileStore

    ...we'll most probably convert HDD OSDs back to FileStore with SSD journalling, as the current setup is really complicated. The nice thing with bcache is that one can detach and reattach caching block devices, so we'll probably leave the HDDs 'formatted' as bcache block devices, so that we can...
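    The detach/reattach mechanism mentioned here works through sysfs; a minimal sketch, assuming a device registered as bcache0:

      # detach the caching device (dirty data is flushed back first)
      echo 1 > /sys/block/bcache0/bcache/detach

      # reattach later by cache-set UUID; cache sets show up as
      # UUID-named directories under /sys/fs/bcache/
      ls /sys/fs/bcache/
      echo "$CSET_UUID" > /sys/block/bcache0/bcache/attach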
  19. D

    Ceph BlueStore - Not always faster than FileStore

    Performance difference when we set 'sequential_cutoff' to zero: [charts elided: CPU breakdown on Monday, FileStore OSDs with SSD journal; CPU breakdown on Friday, BlueStore OSDs with RocksDB on SSD; CPU breakdown on Monday, BlueStore OSDs with RocksDB and bcache on SSD]
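    The 'sequential_cutoff' tunable makes bcache bypass the cache for sequential I/O above a threshold, 4MB by default (presumably the '4MB feature' asked about in result 1); setting it to zero caches everything. A minimal sketch, assuming bcache0:

      # show the current cutoff (4.0M by default)
      cat /sys/block/bcache0/bcache/sequential_cutoff
      # cache all writes, including large sequential ones
      echo 0 > /sys/block/bcache0/bcache/sequential_cutoff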
  20. D

    PVE 5.1 - Memory leak

    Herewith the promised post about implementing bcache with BlueStore OSDs to regain the performance of FileStore OSDs when using SSD journals: https://forum.proxmox.com/threads/ceph-bluestore-not-always-faster-than-filestore.38405/ bcache ultimately performs better than FileStore, as recently...