ceph bluestore

  1. CEPH cluster planning

    Hi people! I'm planning a Ceph cluster which will go into production at some point but will first serve as a testing setup. We need 125 TB of usable storage initially, with a cap of about 2 PB. The cluster will serve 10 intensive users initially, up to 100 later on. The workloads are generally read-heavy...
  2. grin

    [pve6] ceph luminous to nautilus guide problem: Required devices (block and data) not present for bluestore

    # ceph-volume simple scan
    stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
    stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
    Running...
  3. Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It is always created with only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB. Using *ceph-disk prepare* and *ceph-disk activate* (see below), the OSD is created, but only with 10 GB, not 3.7 TB. Commands used: root@proxmox:~#...
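    A quick way to check where the space actually went: ceph-disk creates a small XFS metadata partition plus a separate "block" partition for Bluestore, so `df` on the mount point is misleading, and a 10 GiB OSD often points at `bluestore_block_size` (default 10 GiB) being applied to a file-backed block. A hedged sketch for verifying this (device names are examples):

    ```shell
    # List partitions ceph-disk created; the large "block" partition holds the data.
    ceph-disk list
    # Show each OSD's real size and utilization as Ceph sees it.
    ceph osd df tree
    # If the OSD really is 10 GiB, check whether the 10 GiB default was applied:
    ceph daemon osd.0 config get bluestore_block_size
    ```
    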
  4. PVE web GUI communication failure (0) when listing Ceph storage

    Hi there... I have 2 PVE nodes and 5 servers as Ceph storage, also built on PVE servers. So I have two clusters: 1 cluster with 2 PVE nodes, named PROXMOX01 and PROXMOX02. * PROXMOX01 runs proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-11 (running version...
  5. [SOLVED] Ceph missing more than 50% of storage capacity

    I have 3 nodes with 2 x 1 TB HDDs and 2 x 256 GB SSDs each. I have the following configuration: 1 SSD is used as the system drive (LVM-partitioned, so about a third is used for the system partition and the rest is used as 2 partitions for the 2 x HDDs' WALs). The 2 x HDDs are in a pool (the default...
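    In setups like this the "missing" capacity is usually just replication overhead: with the default replicated pool size of 3, usable space is roughly raw space divided by 3. A quick way to confirm (pool name is an example):

    ```shell
    # Replication factor; size=3 means usable ~= raw / 3.
    ceph osd pool get mypool size
    # Compare global raw capacity with per-pool MAX AVAIL.
    ceph df
    # Per-OSD sizes; catches partitions being smaller than expected.
    ceph osd df tree
    ```
    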
  6. Replace the drives in zfsraid1

    Hi guys, I have a weird situation, thought you could help me. I have a cluster of three nodes running Proxmox + Ceph. I've installed the OS (+ Ceph) on 2 x USB drives as ZFS RAID1; now I have high I/O wait on the CPU because the USB drives are slow. I added 2 x SAS 15K drives and I'm wondering if it's possible to...
  7. Question about Ceph Bluestore Inline compression setup

    Hello folks! I've been working on a Ceph cluster for a few months now and am finally getting it to a point where we can put it into production. We're looking at possibly using an all-flash storage system, and I'd like to play around with the inline compression feature in Bluestore. Now...
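    Bluestore's inline compression is usually enabled per pool via pool properties. A minimal sketch, assuming a pool named `flashpool` (the pool name is an example):

    ```shell
    # Compression algorithm and mode for one pool:
    ceph osd pool set flashpool compression_algorithm lz4
    ceph osd pool set flashpool compression_mode aggressive
    # Only store compressed chunks that shrink to <= 87.5% of the original
    # (0.875 is also the BlueStore default):
    ceph osd pool set flashpool compression_required_ratio .875
    ```

    `aggressive` compresses everything not explicitly marked incompressible; `passive` compresses only data hinted as compressible by the client.
    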
  8. Using partition for OSD and WAL/DB

    Hello! Trying to set up a brand new Proxmox/Ceph cluster. I have a few questions: 1. Would it make sense to use an SSD for WAL/DB? All OSDs are using HDDs, so I believe I'd benefit from using an SSD for that. 2. Is it possible to use a partition instead of the whole drive for an OSD while using a...
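    Placing the DB/WAL on an SSD partition can be done with ceph-volume directly. A hedged sketch, with hypothetical device names (`/dev/sdb` = HDD for data, `/dev/sdc1` = SSD partition for the DB):

    ```shell
    # Create a Bluestore OSD with data on the HDD and DB on an SSD partition.
    # When only --block.db is given, the WAL is placed on the DB device too,
    # so a separate --block.wal is unnecessary here.
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1
    ```
    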
  9. [Ceph] unable to run OSDs

    My apologies in advance for the length of this post! During a new hardware install, our Ceph node/server is: Dell PowerEdge R7415: 1x AMD EPYC 7251 8-Core Processor 128GB RAM HBA330 disk controller (LSI/Broadcom SAS3008, running FW 15.17.09.06 in IT mode) 4x Toshiba THNSF8200CCS 200GB...
  10. Ceph low performance (especially 4k)

    Hello, we have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know if the performance we get is normal or not; my thought was that performance could be much better with the hardware we are using. So is there any way we can improve with configuration changes...
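    For comparing 4k numbers across clusters, a repeatable baseline helps. A sketch using `rados bench` against a test pool (pool name `bench` is an assumption):

    ```shell
    # 60 s of 4 KiB writes with 16 concurrent ops; keep the objects for the read test.
    rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup
    # 60 s of random reads over the objects written above.
    rados bench -p bench 60 rand -t 16
    # Remove the benchmark objects afterwards.
    rados -p bench cleanup
    ```
    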
  11. small 3-node Ceph Bluestore: how to use NVMe? Config recommendations?

    Hi folks, given are 3 nodes: each node 10 GbE network; each node 8 enterprise spinners, 4 TB; each node 1 enterprise NVMe, 1 TB; each node 64 GB RAM; each node a 4-core CPU -> 8 threads, up to 3.2 GHz. pveperf of the CPU: CPU BOGOMIPS: 47999.28, REGEX/SECOND: 2721240. Each node runs the latest Proxmox, of course...
  12. ceph bluestore much slower than glusterfs

    Hello, I want to create a brand new Proxmox cluster. On an older cluster I used GlusterFS; now I have some time, and I'm trying to compare GlusterFS vs the new Ceph (PVE 5.2). In my lab I have 3 VMs (in a nested env) with SSD storage. iperf shows between 6 and 11 Gbps, latency is about 0.1 ms. I made one...
  13. Ceph: How to specify DB device for Bluestore OSD

    How do you specify a DB device (not a WAL device) for a Bluestore OSD? The Proxmox documentation for pveceph (pve.proxmox.com/pve-docs/chapter-pveceph.html) clearly shows how to specify a WAL device, but not a DB device. pveceph createosd /dev/sdn -wal_dev /dev/sdb Having used this method, within the...
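    When pveceph does not expose a DB option, the DB device can be given to ceph-volume directly; later Proxmox releases (PVE 6+) also added a `--db_dev` option to `pveceph osd create`. A hedged sketch, reusing the device names from the post:

    ```shell
    # ceph-volume: data on /dev/sdn, DB (RocksDB metadata) on /dev/sdb.
    ceph-volume lvm create --bluestore --data /dev/sdn --block.db /dev/sdb
    # Newer pveceph, if your version has it:
    pveceph osd create /dev/sdn --db_dev /dev/sdb
    ```
    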
  14. [SOLVED] Ceph OSD change DB Disk

    Hi, I had issues when I put in new journal disks and wanted to move existing OSDs from one journal disk to the new ones. The issue was: I set the OSD to Out, then stopped the OSD and destroyed it. Recreating the OSD with the new DB device made the OSD never show up! This is a...
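    As an alternative to destroying and recreating the OSD, Ceph Nautilus and later ship `ceph-bluestore-tool bluefs-bdev-migrate`, which moves the DB to a new device in place. A hedged sketch (OSD ID and target partition are examples; stop the OSD first):

    ```shell
    # Stop the OSD whose DB we want to move.
    systemctl stop ceph-osd@2
    # Migrate BlueFS data from the current DB device to the new partition.
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-2 \
        --devs-source /var/lib/ceph/osd/ceph-2/block.db \
        --dev-target /dev/sdd1
    systemctl start ceph-osd@2
    ```
    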
  15. Increase Ceph recovery speed

    I'm in the middle of migrating my current OSDs to Bluestore, but the recovery speed is quite low (5600 kB/s, ~10 objects/s). Is there a way to increase the speed? I currently have no virtual machines running on the cluster, so performance doesn't matter at the moment; only the recovery is running.
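    When no client I/O matters, recovery can be sped up by raising the backfill/recovery limits at runtime. A sketch using `injectargs` (values are examples; revert them once the VMs are back):

    ```shell
    # Allow more concurrent backfills and recovery ops per OSD.
    ceph tell 'osd.*' injectargs '--osd-max-backfills 8 --osd-recovery-max-active 8'
    # Remove the artificial recovery throttle on HDD-backed OSDs.
    ceph tell 'osd.*' injectargs '--osd-recovery-sleep-hdd 0'
    ```
    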
  16. Compression or deduplication in Ceph

    I am currently running a Proxmox 5.0 beta server with Ceph (Luminous) storage. I am trying to reduce the size of my Ceph pools as I am running low on space. Does Ceph have some kind of option, such as compression or deduplication, to reduce the size of a pool on disk?
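    Luminous Bluestore supports inline compression, but Ceph has no deduplication; compression is enabled per pool, and the savings show up in `ceph df detail`. A sketch (pool name is an assumption):

    ```shell
    # Turn on inline compression for one pool.
    ceph osd pool set mypool compression_mode aggressive
    ceph osd pool set mypool compression_algorithm snappy
    # Compare USED vs stored data to see the compression savings.
    ceph df detail
    ```

    Note that compression only applies to data written after it is enabled; existing objects are not rewritten.
    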
