ceph bluestore

  1. A

    VM not able to migrate in realtime

    Hello Team, we have a simple setup of 3 nodes running Proxmox 8.1.4 with Ceph Quincy 17.2.7 as shared storage. We are trying to set up HA for our VM, so that if any node goes down the VM can be migrated to another node. To test this scenario we intentionally rebooted one...
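
    Worth noting up front: when a node fails, HA cannot live migrate the VM (there is no running source to copy memory from); it fences the node and restarts the guest elsewhere. A minimal sketch of registering a guest as an HA resource, assuming a hypothetical VMID 100 and group name:

      # ha-manager groupadd prefer-node1 --nodes "node1:2,node2:1,node3:1"
      # ha-manager add vm:100 --state started --group prefer-node1 --max_restart 1 --max_relocate 1
      # ha-manager status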
  2. M

    ceph tuning

    First, a disclaimer: this is a lab, definitely not a reference design. The point was to do weird things, learn how Ceph reacts, and then learn how to get myself out of whatever weird scenario I ended up with. I've spent a few days on the forum; it seems many of the resolutions were people...
  3. A

    Error when adding a new OSD in Ceph

    I am trying to add a new OSD without success. Logs and information follow: # pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve) pve-manager: 6.4-15 (running version...
  4. A

    Problem adding a new OSD in Ceph

    Hi, I'm having trouble adding a new OSD to an existing Ceph cluster. pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve) pve-manager: 6.4-15 (running version...
  5. J

    ceph-osd consuming 100% of ram

    My Ceph cluster lost one node and the rest of the cluster does not get the OSDs up. They start, allocate 100% of the node's RAM, and get killed by the OS. We use Proxmox 7.2 and Ceph Octopus, ceph version 15.2.16 (a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable). We have 80G on the osd...
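
    BlueStore sizes its caches from osd_memory_target (default 4 GiB), but recovery after losing a node can push actual usage well past that. A sketch of temporarily lowering the target while the cluster heals; the 2 GiB value is only an example:

      # ceph config set osd osd_memory_target 2147483648
      # ceph config get osd osd_memory_target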
  6. N

    3 node CEPH hyperconverged cluster fragmentation.

    Proxmox = 6.4-8, Ceph = 15.2.13, Nodes = 3, Network = 2x100G / node, Disks = NVMe Samsung PM-1733 MZWLJ3T8HBLS 4TB and NVMe Samsung PM-1733 MZWLJ1T9HBJR 2TB, CPU = EPYC 7252, Ceph pools = 2 separate pools (one per disk type), each disk split into 2 OSDs, Replica = 3. The VMs don't do many...
  7. F

    [SOLVED] Ceph issue after replacing OSD

    Hi, I have a 3 node cluster running Proxmox 6.4-8 with Ceph. 2 of the 3 nodes have 1.2TB for Ceph (each node has one 1.2TB disk for the OSD and one 1.2TB disk for the DB); the third node has the same configuration but with 900GB disks. I decided to stop, out, and destroy the 900GB OSD to replace it with a 1.2TB...
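
    For reference, a rough sketch of the replace cycle described above, with the OSD id and device names as placeholders; let backfill settle between the out and the destroy:

      # ceph osd out osd.3
      # systemctl stop ceph-osd@3
      # pveceph osd destroy 3 --cleanup
      # pveceph osd create /dev/sdX --db_dev /dev/sdY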
  8. F

    [SOLVED] Create OSD using GPT partitions

    Hi, I'm trying to create BlueStore-backed OSDs using partitions rather than whole disks. Assuming that /dev/sda4 will be the future OSD (placed on an HDD), I created a separate partition on an SSD for the DB/WAL. First try: # pveceph osd create /dev/sda4 -db_dev /dev/sdc1 -db_size 54 unable to get device info...
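
    In case it helps: pveceph osd create generally expects whole disks, while ceph-volume accepts partitions directly. A sketch using the device names from the excerpt, bypassing the pveceph wrapper:

      # ceph-volume lvm create --bluestore --data /dev/sda4 --block.db /dev/sdc1
      # ceph-volume lvm list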
  9. B

    CEPH cluster planning

    Hi people! I'm planning a Ceph cluster which will go into production at some point but will first serve as a testing setup. We need 125TB of usable storage initially, with a cap of about 2PB. The cluster will feed 10 intensive users initially, up to 100 later on. The loads are generally read heavy...
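
    Rough sizing, assuming 3-way replication and keeping the cluster under roughly 80% full: 125 TB usable x 3 / 0.8 ≈ 470 TB raw to start, and the 2 PB target scales to about 7.5 PB raw. Erasure coding would lower the multiplier at the cost of CPU and small-I/O latency.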
  10. grin

    [pve6] Ceph Luminous to Nautilus guide problem: Required devices (block and data) not present for bluestore

    # ceph-volume simple scan stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument Running...
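
    For the ceph-disk to ceph-volume conversion in that guide, simple scan wants the OSD's data partition (or no argument at all, to scan whatever is mounted), not the osd directory. A hedged sketch with a placeholder partition:

      # ceph-volume simple scan /dev/sdb1
      # ceph-volume simple activate --all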
  11. S

    Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It is always created with only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB, using *ceph-disk prepare* and *ceph-disk activate* (see below). The OSD is created, but with only 10 GB, not 3.7 TB. Commands used: root@proxmox:~#...
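
    That 10 GB figure matches the bluestore_block_size default (10737418240 bytes), which is used when the OSD's block is created as a file instead of pointing at the real partition, so it is worth checking what block actually is. A sketch, with the OSD id as a placeholder:

      # ls -l /var/lib/ceph/osd/ceph-0/block
      # ceph daemon osd.0 config get bluestore_block_size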
  12. G

    PVE web GUI communication failure (0) when listing Ceph storage

    Hi there... I have 2 PVE nodes and 5 servers as Ceph storage, also built on PVE servers. So I have two clusters: 1 cluster with 2 PVE nodes, named PROXMOX01 and PROXMOX02. * PROXMOX01 runs proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-11 (running version...
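
    With an external Ceph cluster, a communication failure (0) on the storage view often means the PVE node cannot reach the monitors or is missing the keyring (expected at /etc/pve/priv/ceph/<storage-id>.keyring). A hedged example of the PVE-side definition, with placeholder addresses:

      /etc/pve/storage.cfg:
      rbd: ceph-external
          monhost 192.168.1.11 192.168.1.12 192.168.1.13
          pool rbd
          content images
          username admin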
  13. L

    [SOLVED] Ceph missing more than 50% of storage capacity

    I have 3 nodes, each with 2 x 1TB HDDs and 2 x 256G SSDs, in the following configuration: 1 SSD is used as the system drive (LVM partitioned, so about a third is used for the system partition and the rest is split into 2 partitions for the 2 HDDs' WALs). The 2 HDDs are in a pool (the default...
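
    The numbers line up with 3-way replication: 3 nodes x 2 x 1 TB = 6 TB raw in the HDD pool, and with size=3 that leaves roughly 2 TB usable, i.e. about a third of raw, which is usually where the "missing" capacity went.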
  14. P

    Replace the drives in ZFS RAID1

    Hi guys, I have a weird situation and thought you could help me. I have a cluster of three nodes running Proxmox + Ceph. I've installed the OS (+ Ceph) on 2 x USB drives as ZFS RAID1, and now I have high I/O wait on the CPU because the USB drives are slow. I added 2 x 15K SAS drives and I'm wondering whether it's possible to...
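
    For the ZFS side, this is roughly the documented rpool disk swap, assuming a standard PVE 6.4+ boot layout; device names are placeholders and the bootloader step matters as much as the resilver:

      # sgdisk /dev/sdHEALTHY -R /dev/sdNEW        (copy the partition table)
      # sgdisk -G /dev/sdNEW                       (randomize GUIDs)
      # zpool replace -f rpool /dev/sdOLD-part3 /dev/sdNEW-part3
      # proxmox-boot-tool format /dev/sdNEW-part2
      # proxmox-boot-tool init /dev/sdNEW-part2
      # zpool status rpool                         (wait for the resilver to finish)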
  15. D

    Question about Ceph Bluestore Inline compression setup

    Hello folks! I've been working on a Ceph cluster for a few months now, and I'm finally getting it to a point where we can put it into production. We're looking at possibly using an all-flash storage system, and I'd like to play around with the inline compression feature in BlueStore. Now...
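
    For reference, BlueStore compression is enabled per pool; a sketch with a placeholder pool name (aggressive compresses everything unless a client hints the data is incompressible, passive only compresses on a compressible hint):

      # ceph osd pool set vm-pool compression_algorithm snappy
      # ceph osd pool set vm-pool compression_mode aggressive
      # ceph osd pool get vm-pool compression_mode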
  16. I

    Using partition for OSD and WAL/DB

    Hello! Trying to set up a brand new Proxmox/Ceph cluster. Got a few problems: 1. Would it make sense to use an SSD for the WAL/DB? All OSDs are using HDDs, so I believe I'd benefit from using an SSD for that. 2. Is it possible to use a partition instead of the whole drive for an OSD while using a...
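
    On point 1: the WAL lives on the DB device unless told otherwise, so a separate --block.wal only pays off when that device is faster than the DB one. A sketch covering point 2 as well, all device names hypothetical:

      # ceph-volume lvm create --data /dev/sdb1 --block.db /dev/nvme0n1p2
      # ceph-volume lvm create --data /dev/sdc1 --block.db /dev/nvme0n1p3 --block.wal /dev/nvme0n1p4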
  17. N

    [Ceph] unable to run OSDs

    My apologies in advance for the length of this post! During a new hardware install, our Ceph node/server is: Dell PowerEdge R7415: 1x AMD EPYC 7251 8-Core Processor 128GB RAM HBA330 disk controller (LSI/Broadcom SAS3008, running FW 15.17.09.06 in IT mode) 4x Toshiba THNSF8200CCS 200GB...
  18. S

    Ceph low performance (especially 4k)

    Hello, we have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know whether the performance we are getting is normal or not; my thought was that the performance could be much better with the hardware we are using. So is there any way we can improve it with configuration changes...
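
    For a baseline that bypasses the VM stack, rados bench at 4k directly against a test pool is a reasonable first measurement; the pool name and run times are placeholders:

      # rados bench -p bench-pool 60 write -b 4096 -t 16 --no-cleanup
      # rados bench -p bench-pool 60 rand -t 16
      # rados -p bench-pool cleanup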
  19. H

    small 3 node Ceph BlueStore: how to use NVMe? config recommendations?

    Hi folks, given are 3 nodes, each with: 10 Gbit network, 8 x 4TB enterprise spinners, 1 x 1TB enterprise NVMe, 64 GB RAM, and a 4-core CPU -> 8 threads up to 3.2 GHz (pveperf of the CPU: CPU BOGOMIPS: 47999.28, REGEX/SECOND: 2721240), each node on the latest Proxmox of course...
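
    On carving up the NVMe: 1 TB split across 8 spinner OSDs leaves roughly 120 GB of DB/WAL per OSD, about 3% of a 4 TB spinner and within the commonly quoted 1-4% guideline; just keep in mind that losing that single NVMe takes all 8 OSDs on the node down with it.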
  20. S

    ceph bluestore much slower than glusterfs

    Hello, I just want to create a brand new Proxmox cluster. On an older cluster I used GlusterFS; now I have some time and I'm trying to compare GlusterFS against the new Ceph (PVE 5.2). In my lab I have 3 VMs (in a nested environment) with SSD storage. iperf shows between 6 and 11 Gbps and latency is about 0.1 ms. I made one...
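
    When comparing the two backends, running an identical fio job inside a guest on each storage type keeps the numbers comparable; the file path and sizes below are placeholders:

      # fio --name=randwrite-4k --filename=/root/fio.test --size=4G \
            --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
            --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting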
