Ceph BlueStore

  1. 4-node Proxmox cluster and CEPH EC 4+2 with custom CRUSH rule

    Dear Community. We are currently in the process of building a Proxmox cluster with Ceph as the underlying storage. We have 4 nodes, with 4 x 100TB OSDs attached to each node (16 OSDs total), and plan to scale this out by adding another 4 nodes with the same number of OSDs attached to each...
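
    For a 4+2 EC pool on only four hosts, the default host failure domain needs six hosts, so a custom rule that places two chunks per host is the usual workaround. A minimal sketch, assuming illustrative names (ec42, ec42_2per_host, ec42pool):

      # EC profile with 4 data + 2 coding chunks
      ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=osd

      # Export, edit, and re-import the CRUSH map to add a rule such as:
      #   rule ec42_2per_host {
      #       id 2
      #       type erasure
      #       step take default
      #       step choose indep 3 type host
      #       step chooseleaf indep 2 type osd
      #       step emit
      #   }
      ceph osd getcrushmap -o crush.bin
      crushtool -d crush.bin -o crush.txt     # edit crush.txt here
      crushtool -c crush.txt -o crush.new
      ceph osd setcrushmap -i crush.new

      # Pool using the profile and the custom rule
      ceph osd pool create ec42pool 128 128 erasure ec42 ec42_2per_host

    Picking 3 hosts with 2 chunks each means a single host failure costs at most two chunks, which m=2 can still recover.
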
  2. [SOLVED] Ceph cluster rebuild - Importing BlueStore OSDs from an old cluster (bad fsid): OSDs don't start and stay in the down state

    ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable) Hello everyone, we need your help importing BlueStore OSDs into the new cluster: after trying to import them, we noticed that the OSDs do not start. They only stay in a "down" state and are imported as...
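
    A first diagnostic step worth sketching (standard ceph-volume calls; only the OSD selection is an assumption): compare the cluster fsid recorded on the disks with the new cluster's fsid, since a mismatch keeps imported OSDs down.

      # Show the OSD id, OSD fsid, and cluster fsid stored in the LVM tags
      ceph-volume lvm list

      # fsid of the running cluster; must match the one on the disks
      ceph fsid

      # If they match, (re)activate everything ceph-volume found
      ceph-volume lvm activate --all
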
  3. VM not able to migrate in realtime

    Hello Team, We have a simple setup of 3 nodes running Proxmox 8.1.4 with Ceph Quincy 17.2.7 underneath as shared storage. We are trying to set up HA for our VMs, so that if any node goes down the VMs can be live-migrated to another node. To test this scenario we intentionally rebooted one...
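
    Worth noting: on an unclean node failure, Proxmox HA does not live-migrate; it fences the node and restarts the VM elsewhere, because live migration needs the source node alive. A minimal sketch of putting a VM under HA (VM ID 100 is illustrative):

      # Manage VM 100 via HA and keep it in the started state
      ha-manager add vm:100 --state started

      # Watch what the HA stack decides during the reboot test
      ha-manager status
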
  4. ceph tuning

    First a disclaimer: this is a lab, definitely not a reference design. The point was to do weird things, learn how Ceph reacts, and then learn how to get myself out of whatever weird scenario I ended up with. I've spent a few days on the forum; it seems many of the resolutions were people...
  5. Error adding a new OSD in Ceph

    I am trying to add a new OSD without success. Logs and information follow:

      # pveversion -v
      proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
      pve-manager: 6.4-15 (running version...
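
    When a previous OSD lived on the disk, leftover LVM metadata is a common cause of failed creation. A hedged cleanup sketch (/dev/sdX is illustrative; this destroys everything on the disk):

      # Wipe old partition tables and LVM volumes from the target disk
      ceph-volume lvm zap /dev/sdX --destroy

      # Then retry through the Proxmox tooling
      pveceph osd create /dev/sdX
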
  6. Problem with adding a new OSD in Ceph

    Hi, I'm having trouble adding a new OSD to an existing Ceph cluster.

      pveversion -v
      proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
      pve-manager: 6.4-15 (running version...
  7. ceph-osd consuming 100% of RAM

    My Ceph cluster lost one node and the rest of the cluster does not bring the OSDs up. They start, allocate 100% of the node's RAM, and get killed by the OS. We use Proxmox 7.2 and Ceph Octopus, ceph version 15.2.16 (a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable). We have 80G on the osd...
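
    OSDs replaying a large recovery workload can overshoot their cache budget; capping osd_memory_target is the usual first lever. A minimal sketch (the 2 GiB value is illustrative, and it is a target, not a hard limit):

      # Lower the per-OSD memory target to 2 GiB (value in bytes)
      ceph config set osd osd_memory_target 2147483648

      # Confirm what an individual OSD picked up
      ceph config get osd.0 osd_memory_target
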
  8. 3-node CEPH hyperconverged cluster fragmentation

    Proxmox = 6.4-8, CEPH = 15.2.13, Nodes = 3, Network = 2x100G per node, Disks = NVMe Samsung PM-1733 MZWLJ3T8HBLS 4TB + NVMe Samsung PM-1733 MZWLJ1T9HBJR 2TB, CPU = EPYC 7252, CEPH pools = 2 separate pools, one per disk type, with each disk split into 2 OSDs, Replica = 3. The VMs don't do many...
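
    BlueStore can report how fragmented an OSD's free space is, which is a useful starting number for a thread like this. A hedged one-liner (osd.0 is illustrative; run it on the node hosting that OSD):

      # Free-space fragmentation score for osd.0, from 0 (none) to 1 (severe)
      ceph daemon osd.0 bluestore allocator score block
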
  9. [SOLVED] Ceph issue after replacing OSD

    Hi, I have a 3 node cluster running Proxmox 6.4-8 with Ceph. 2 of the 3 nodes have 1.2TB for Ceph (each node has one 1.2TB disk for OSD, one 1.2TB disk for DB), the third node has the same configuration but with 900GB disks. I decided to stop, out and destroy the 900GB OSD to replace with 1.2TB...
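
    For reference, the usual drive-swap sequence on a hyperconverged node looks roughly like this (OSD ID 5 and the device names are illustrative):

      # Drain and remove the old OSD
      ceph osd out 5
      systemctl stop ceph-osd@5
      pveceph osd destroy 5 --cleanup

      # Create the replacement, DB on the dedicated disk
      pveceph osd create /dev/sdX -db_dev /dev/sdY
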
  10. [SOLVED] Create OSD using GPT partitions

    Hi, I'm trying to create BlueStore-backed OSDs using partitions rather than whole disks. Assuming /dev/sda4 will be the future OSD (on an HDD), I created a separate partition on an SSD for DB/WAL. First try:

      # pveceph osd create /dev/sda4 -db_dev /dev/sdc1 -db_size 54
      unable to get device info...
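
    pveceph generally expects whole disks; ceph-volume accepts partitions directly, so a hedged workaround is to create the OSD with ceph-volume and let the cluster pick it up:

      # Data on the HDD partition, RocksDB/WAL on the SSD partition
      ceph-volume lvm create --data /dev/sda4 --block.db /dev/sdc1
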
  11. CEPH cluster planning

    Hi people! I'm planning a CEPH cluster which will go into production at some point but will first serve as a testing setup. We need 125TB usable storage initially, with a cap of about 2PB. The cluster will feed 10 intensive users initially, up to 100 later on. The loads are generally read-heavy...
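
    Back-of-the-envelope raw capacity for that target, assuming an illustrative ~85% full-ratio ceiling:

      # replica 3 : 125 TB usable x 3   / 0.85  ~= 440 TB raw
      # EC 4+2    : 125 TB usable x 1.5 / 0.85  ~= 220 TB raw
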
  12. [pve6] Ceph Luminous to Nautilus guide problem: Required devices (block and data) not present for bluestore

    # ceph-volume simple scan
    stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
    stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
    Running...
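
    For ceph-disk-created OSDs, the documented migration flow is roughly the following (sdX1 is an illustrative data partition):

      # Scan all running OSDs, writing metadata to /etc/ceph/osd/*.json
      ceph-volume simple scan

      # Or scan a single OSD by its data partition instead of the mount point
      ceph-volume simple scan /dev/sdX1

      # Then switch startup over to the scanned metadata
      ceph-volume simple activate --all
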
  13. Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It always creates only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB. Using *ceph-disk prepare* and *ceph-disk activate* (see below), the OSD is created, but only with 10 GB, not 3.7 TB. Commands used: root@proxmox:~#...
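
    A 10 GB BlueStore device is the bluestore_block_size default, which shows up when the block store ends up as a file rather than the raw partition. ceph-disk is deprecated; a hedged modern equivalent that sizes the OSD to the device it is given:

      # Create a BlueStore OSD spanning the whole device (sdX is illustrative)
      ceph-volume lvm create --data /dev/sdX
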
  14. PVE web GUI communication failure (0) when listing CEPH storage

    Hi there... I have 2 PVE nodes and 5 servers as CEPH storage, also built on PVE servers. So I have two clusters: one cluster with 2 PVE nodes, named PROXMOX01 and PROXMOX02. * PROXMOX01 runs proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-11 (running version...
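
    For an external Ceph cluster, the storage definition and keyring placement are the usual suspects when the GUI times out. A hedged sketch of /etc/pve/storage.cfg (the storage ID ceph-ext, monitor IPs, and pool name are illustrative):

      rbd: ceph-ext
              content images
              monhost 10.0.0.1 10.0.0.2 10.0.0.3
              pool rbd
              username admin

    The matching keyring is expected at /etc/pve/priv/ceph/ceph-ext.keyring, named after the storage ID.
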
  15. [SOLVED] Ceph missing more than 50% of storage capacity

    I have 3 nodes with 2 x 1TB HDDs and 2 x 256G SSDs each. I have the following configuration: 1 SSD is used as the system drive (LVM-partitioned, so about a third is used for the system partition and the rest is used in 2 partitions for the 2 HDDs' WALs). The 2 HDDs are in a pool (the default...
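
    With replica 3, usable capacity is roughly one third of raw, which on its own accounts for a "missing" two thirds. Two hedged commands to confirm where the space went (the pool name is illustrative):

      # RAW USED vs per-pool MAX AVAIL (already divided by the replica count)
      ceph df

      # Replication factor of the pool in question
      ceph osd pool get mypool size
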
  16. Replace the drives in ZFS RAID1

    Hi guys, I have a weird situation and thought you could help me. I have a cluster of three nodes running Proxmox + Ceph. I've installed the OS (+Ceph) on 2 x USB drives as ZFS RAID1, and now I have high I/O wait on the CPU because the USB drives are slow. I added 2 x 15K SAS drives and I'm wondering if it's possible to...
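
    Migrating a ZFS mirror to new disks can be done one leg at a time. A hedged sketch (device names are illustrative; the new disks also need boot partitions and a bootloader reinstall, e.g. via proxmox-boot-tool):

      # Attach a SAS disk as a third mirror leg and let it resilver
      zpool attach rpool /dev/disk/by-id/usb-OLD1 /dev/disk/by-id/scsi-NEW1
      zpool status rpool        # wait until resilvering completes

      # Drop the old USB leg, then repeat for the second disk
      zpool detach rpool /dev/disk/by-id/usb-OLD1
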
  17. Question about Ceph BlueStore inline compression setup

    Hello folks! I've been working on a Ceph cluster for a few months now, and finally getting it to a point where we can put it into production. We're looking at possibly using an all flash storage system and I'd like to play around with using the inline compression feature with Bluestore. Now...
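
    Inline compression is set per pool (or globally via OSD config). A minimal sketch, assuming an illustrative pool name:

      # lz4 in aggressive mode: compress everything unless a client opts out
      ceph osd pool set mypool compression_algorithm lz4
      ceph osd pool set mypool compression_mode aggressive

    compression_mode passive compresses only data the client flags as compressible, which is the gentler starting point for mixed workloads.
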
  18. Using partitions for OSD and WAL/DB

    Hello! Trying to set up a brand new Proxmox/Ceph cluster. I have a few questions: 1. Would it make sense to use an SSD for WAL/DB? All OSDs are using HDDs, so I believe I'd benefit from that. 2. Is it possible to use a partition instead of the whole drive for the OSD while using a...
  19. [Ceph] unable to run OSDs

    My apologies in advance for the length of this post! During a new hardware install, our Ceph node/server is: Dell PowerEdge R7415: 1x AMD EPYC 7251 8-Core Processor 128GB RAM HBA330 disk controller (LSI/Broadcom SAS3008, running FW 15.17.09.06 in IT mode) 4x Toshiba THNSF8200CCS 200GB...
  20. Ceph low performance (especially 4k)

    Hello, We have separate Ceph and Proxmox clusters (separate server nodes). I want to know whether the performance we get is normal; my thought was that performance could be much better with the hardware we are using. Is there any way we can improve it with configuration changes...
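
    To separate cluster performance from VM-side effects, benchmarking the pool directly is a reasonable first step. A hedged sketch (testpool is illustrative; write tests put real load and data on the pool):

      # 60s of 4 KiB writes, 16 concurrent ops, keeping objects for the read test
      rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup

      # Random reads over the objects just written
      rados bench -p testpool 60 rand -t 16

      # Remove the benchmark objects afterwards
      rados -p testpool cleanup
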
