bluestore

  1. G

    Ceph OSD crash loop, RocksDB corruption

    Hi, yesterday one OSD went down and dropped out of my cluster; systemd stopped the service after it "crashed" 4 times. I tried restarting the OSD manually, but it continues to crash immediately, so the OSD looks effectively dead. Here's the first ceph crash info (the later ones look the same): {...
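
    A typical first triage for a crash-looping bluestore OSD, assuming a Nautilus-or-later cluster where the crash module is available (the post's use of ceph crash info suggests it is); the OSD id and crash id below are placeholders:

      # List archived crashes, then dump the full report for one of them:
      ceph crash ls
      ceph crash info <crash-id>
      # With the OSD stopped, check BlueStore/RocksDB consistency on that disk:
      systemctl stop ceph-osd@<id>
      ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<id>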
  2. A

    Ceph DB/WAL SSD Configuration Validation

    Hello, I need some guidance on validating my DB/WAL SSD configuration for RBD performance. I've followed instructions and gone through the documentation multiple times but can't get the disk performance as high as I expect it to be. My current setup is as follows: 3 similarly configured...
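
    If the question is whether the SSD-backed WAL/DB is actually helping, a quick client-side baseline can be taken with rbd bench; small writes are exactly the workload the WAL/DB devices absorb. Pool and image names here are placeholders:

      # Create a throwaway image and push 4K writes through librbd:
      rbd create rbd/benchtest --size 10G
      rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G rbd/benchtest
      rbd rm rbd/benchtest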
  3. jsterr

    OSDs are filestore at creation, although not specified

    I reinstalled one of the three Proxmox Ceph nodes with a new name and new IP. I removed all LVM data and wiped the filesystems of the old disks with: dmsetup remove_all; wipefs -af /dev/sda; ceph-volume lvm zap /dev/sda. Now when I create OSDs via the GUI or CLI they are always filestore, and I don't get it; the default should...
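
    Since Luminous, bluestore should be the default objectstore; if the GUI or CLI still produces filestore OSDs, requesting bluestore explicitly with ceph-volume at least confirms whether leftover disk state is the problem. A sketch, using /dev/sda as in the post:

      # Zap once more, destroying leftover LVM metadata from the old install:
      ceph-volume lvm zap --destroy /dev/sda
      # Create the OSD, requesting bluestore explicitly:
      ceph-volume lvm create --bluestore --data /dev/sda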
  4. S

    [SOLVED] Ceph bluestore WAL and DB size – WAL on external SSD

    Hi All, I'm setting up a Ceph cluster with 3 PVE 6.2 nodes. Each node has the following disks: 7x 6TB 7200rpm Enterprise SAS HDD, 2x 3TB Enterprise SAS SSD, 2x 400GB Enterprise SATA SSD. This setup was previously used for an old Ceph (filestore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
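
    On PVE 6, pveceph can place the block.db on an SSD at creation time; if no separate WAL device is given, the WAL lives inside the DB device, so a dedicated --wal_dev is usually unnecessary. Device names and the 50 GiB size below are illustrative, not a recommendation:

      # One HDD as the data device, its DB (and implicitly the WAL) on an SSD:
      pveceph osd create /dev/sdc --db_dev /dev/sdk --db_size 50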
  5. K

    Unable to create Ceph OSD

    Hello there, I recently upgraded from Proxmox 5 to 6, as well as Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk: pveceph...
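
    A common culprit after a Luminous-to-Nautilus upgrade is leftover LVM state from the old OSD that the tooling refuses to overwrite; a cleanup sketch, with /dev/sdX as a placeholder:

      # Show any ceph-volume logical volumes still claiming the disk:
      ceph-volume lvm list
      # Destroy leftover LVM and partition metadata, then retry the create:
      ceph-volume lvm zap --destroy /dev/sdX
      pveceph osd create /dev/sdX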
  6. R

    [SOLVED] Proxmox Ceph implementation seems to be broken (OSD Creation)

    Preface: I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can flip their /dev/{names}. This will cause a full cluster rebalance if the /dev/{names} flip. The...
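
    The usual way around unstable /dev/{names} is to reference disks by their persistent /dev/disk/by-id/ paths when creating OSDs; the concrete ids below are placeholders:

      # Stable names that survive reboots and enumeration changes:
      ls -l /dev/disk/by-id/ | grep nvme
      # Reference the stable paths at creation time:
      pveceph osd create /dev/disk/by-id/wwn-0x5000c500XXXXXXXX \
        --db_dev /dev/disk/by-id/nvme-INTEL_SSDPE21K375GA_XXXXXXXX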
  7. T

    [SOLVED] Upgrade Ceph SSDs

    I have a hyperconverged Proxmox/Ceph cluster of 5 nodes running the latest Proxmox/Ceph (bluestore) with 6x 480GB SSDs each (totalling 30 OSDs), and I'm starting to run low on storage. Is it possible, and would it be wise, to replace some (if not all) SSDs with bigger ones? Or if I added a node...
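
    Swapping in larger disks is routinely done one OSD at a time, letting the cluster rebalance in between; a sketch assuming current PVE tooling, with OSD 12 and /dev/sdX as placeholders:

      # Drain one OSD:
      ceph osd out 12
      # ...wait until all PGs are active+clean again...
      systemctl stop ceph-osd@12
      pveceph osd destroy 12 --cleanup
      # Physically swap in the larger SSD, then recreate the OSD:
      pveceph osd create /dev/sdX

    Each new, larger OSD gets a proportionally larger CRUSH weight, so usable capacity grows with every disk swapped.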
  8. S

    Backups stop whilst running

    When using bluestore OSDs, the backup data stops flowing; the task shows running, but no more data moves. I've swapped all the OSDs back to filestore and the backups work perfectly. Backups also stop if any single OSD uses bluestore. Syslog, only errors reported: Sep 24 22:58:35 proxmox3...
  9. elurex

    Ceph bluestore over RDMA performance gain

    I want to share the following testing with you: a 4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs in total. OSD: ST6000NM0034; block.db & block.wal device: Samsung SM961 512GB; NIC: Mellanox ConnectX-3 VPI dual-port 40 Gbps; Switch: Mellanox SX6036T; Network: IPoIB, separated public network &...
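
    For reference, the RDMA messenger is switched on in ceph.conf roughly like this; it was still experimental in this era, and the device name is an assumption for ConnectX-3 hardware (verify with ibv_devices):

      [global]
      ms_type = async+rdma
      ms_async_rdma_device_name = mlx4_0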
  10. D

    Ceph - Bluestore - Crash - Compressed Erasure Coded Pool

    We initially tried this with Ceph 12.2.4 and subsequently re-created the problem with 12.2.5. Using 'lz4' compression on a Ceph Luminous erasure-coded pool causes OSD processes to crash. Changing the compressor to snappy results in the OSDs being stable; when the crashed OSD starts thereafter...
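
    Moving an affected pool off lz4 is a single pool setting; <pool> is a placeholder:

      # Switch the compressor and verify the current settings:
      ceph osd pool set <pool> compression_algorithm snappy
      ceph osd pool get <pool> compression_algorithm
      ceph osd pool get <pool> compression_mode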
  11. F

    Ceph Luminous with Bluestore - slow VM read

    Hi everyone, recently we installed Proxmox with Ceph Luminous and Bluestore on our brand-new cluster, and we are experiencing a problem with slow reads inside VMs. We tried different settings on the Proxmox VM but the read speed is still the same - around 20-40 MB/s. Here is our hardware configuration...
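
    One way to tell whether the slow reads are a Ceph problem or a guest problem is to benchmark the pool directly from a host, bypassing the VM entirely; the pool name is a placeholder:

      # Write some objects first so there is data to read back:
      rados bench -p rbd 60 write -t 16 --no-cleanup
      # Sequential reads of those objects, straight from the cluster:
      rados bench -p rbd 60 seq -t 16
      rados -p rbd cleanup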
  12. M

    Proxmox 5.1 bluestore compression

    Hello community, a question about Ceph and Proxmox 5.1: how do I correctly enable "compression" in Ceph Bluestore? With the native Ceph commands, e.g. ceph osd pool set test compression_mode aggressive, or is there a Proxmox-specific option for pveceph createpool? Background: I...
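
    As far as the native tooling goes, compression is set per pool with the Ceph commands the post already mentions; a sketch on a pool named test, with the algorithm and mode chosen as examples:

      # Enable aggressive compression with the snappy algorithm:
      ceph osd pool set test compression_mode aggressive
      ceph osd pool set test compression_algorithm snappy
      # Verify:
      ceph osd pool get test compression_mode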
  13. N

    (Only) Windows bad disk performance on PVE 5/Ceph Bluestore

    I am running a PVE 5 cluster with Ceph bluestore OSDs. They are HDD-only OSDs connected over 2x1GBit bonds. Don't get me wrong here, it isn't in production yet and I don't expect any fancy performance out of this setup. I am highly impressed with its performance under Linux KVMs and when...
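
    A common first suggestion for poor Windows guest disk performance (an assumption here, not a confirmed fix for this thread) is the virtio-scsi controller plus writeback caching, which requires the virtio drivers inside the guest; VM id 100 and the volume name are placeholders:

      # Switch the controller and enable writeback cache on the system disk:
      qm set 100 --scsihw virtio-scsi-pci
      qm set 100 --scsi0 ceph_pool:vm-100-disk-1,cache=writeback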
  14. G

    Migrating to bluestore now? Or later?

    Hi Folks, shall I migrate from filestore to bluestore following this article: http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/ or wait for Ceph 12.2.x? Currently PVE has 12.1.2 Luminous RC... but how long to wait? Any release plans for 12.2? Regards
  15. elurex

    Proxmox VE 5.0 and ceph bluestore

    Is PVE 5.0 compatible with Ceph Bluestore? Can I manually create an OSD with ceph-disk prepare --bluestore /dev/sda --block.wal /dev/sdb --block.db /dev/sdb?
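
    For that era the two-step ceph-disk flow would look roughly like this (ceph-disk was later deprecated in favor of ceph-volume); the partition number in the activate step is an assumption:

      # Prepare a bluestore OSD with WAL and DB on the second device, then activate:
      ceph-disk prepare --bluestore /dev/sda --block.wal /dev/sdb --block.db /dev/sdb
      ceph-disk activate /dev/sda1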
  16. G

    ceph performance 4node all NVMe 56GBit Ethernet

    Hi Folks, which performance should I expect from this cluster? Are my settings OK? 4 nodes: system: Supermicro 2028U-TN24R4T+, 2-port Mellanox ConnectX-3 Pro 56Gbit, 4-port Intel 10GigE; memory: 768 GBytes; CPU: dual Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz; Ceph: 28 OSDs, 24 Intel NVMe 2000GB...
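
    For an all-NVMe cluster, small-block random I/O is usually the interesting number; a hedged fio sketch against an RBD image (requires fio built with the rbd engine; pool and image names are placeholders):

      # Create a test image first, then run 4k random writes through librbd:
      rbd create rbd/fio_test --size 10G
      fio --name=rbd-4k --ioengine=rbd --clientname=admin --pool=rbd \
          --rbdname=fio_test --rw=randwrite --bs=4k --iodepth=32 \
          --runtime=60 --time_based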
