ceph

  1. M

    Replace 3TB drives with 1TB drives (Ceph)

    My ceph cluster has three 3TB and three 1TB drives with SSD WAL and DB. The write speeds on my VMs are rather mediocre; from what I understand, the 3TB drives will get 3x the write requests of the 1TB drives. Is my understanding correct? And would it be better if I swapped my 3TB with 1TB drives making it 2...
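    That expectation matches how CRUSH works: data is placed in proportion to OSD weight, which by default equals capacity, so a 3TB OSD ends up with roughly three times the PGs, and hence roughly three times the writes, of a 1TB OSD. A quick sanity check, assuming the default capacity-based weights:

      # Compare CRUSH weight, PG count and %USE per OSD; a 3TB OSD defaults to
      # a weight of ~2.73 vs ~0.91 for a 1TB OSD, i.e. roughly a 3:1 data ratio.
      ceph osd df tree
      # Worked example: 3x3TB + 3x1TB = 12TB raw, so each 3TB OSD holds about
      # 3/12 = 25% of the data, while each 1TB OSD holds about 1/12 = ~8%.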
  2. S

    Upgrading Proxmox VE without Ceph-cluster

    Hi. I have a question regarding the upgrade process for Proxmox VE in combination with Ceph. Currently, my Proxmox VE setup is running version 7, and I also have Ceph installed with version 15.2.17 (Octopus). I am planning to upgrade Proxmox VE to version 8, as per the official upgrade...
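    Note that Proxmox VE 8 no longer ships Octopus, so the usual path is to step Ceph up release by release (Octopus to Pacific to Quincy) while still on PVE 7 and only then upgrade to PVE 8. A quick check before and after each step:

      # Confirm the running Ceph release on every daemon and the PVE package state.
      ceph versions
      pveversion -v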
  3. F

    PVE can't delete images in CEPH Pool

    Hello, I have a ceph pool "SSD_POOL" and I can't delete unused images inside it. Has anyone gone through something similar? I'm trying to remove, for example, the vm-103-disk-0 image
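    Deletion usually fails because a client still has the image open or because snapshots/clones remain; a sketch of the usual checks, using the image name from the post:

      # Check for open watchers (a running or hung VM still using the disk).
      rbd status SSD_POOL/vm-103-disk-0
      # List and purge snapshots (purge refuses if a snapshot is protected, e.g. by a linked clone).
      rbd snap ls SSD_POOL/vm-103-disk-0
      rbd snap purge SSD_POOL/vm-103-disk-0
      rbd rm SSD_POOL/vm-103-disk-0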
  4. N

    cloudinit disks not cleaned up

    Hi, we are deploying cloudinit images from terraform (telmate/proxmox) by cloning a template in proxmox that is already configured with cloud-init. The new machine gets created with the next available VMID, and a small 4MB disk is created to feed the cloudinit settings. The disk created for cloudinit is...
  5. V

    Ceph: hot refitting a disk

    Hi all, we needed to replace a drive caddy (long story) for a running drive on a proxmox cluster running ceph (15.2.17). The drives themselves are hot-swappable. First I stopped the OSD, pulled out the drive, changed the caddy, and refitted the (same) drive. The drive quickly showed up in proxmox...
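    For comparison, a common sequence for briefly pulling a healthy OSD without triggering a rebalance looks roughly like this (the OSD ID is a placeholder):

      ceph osd set noout              # keep CRUSH from rebalancing while the disk is out
      systemctl stop ceph-osd@12      # stop the affected OSD (12 is a placeholder ID)
      # ...swap the caddy, refit the drive...
      systemctl start ceph-osd@12
      ceph osd unset noout            # allow normal recovery handling again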
  6. R

    OSD reweight

    Hello, this has probably been discussed often, but here is my question too: ever since we set up our ceph cluster we have seen uneven usage across all OSDs. 4 nodes with 7x1TB SSDs (1U, no space left), 3 nodes with 8x1TB SSDs (2U, some space left) = 52 SSDs, pve 7.2-11. All ceph nodes show us the same, like...
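    Uneven fill across equally sized OSDs is usually a PG-distribution issue rather than a weight problem; checking utilization and letting the upmap balancer even things out is the common first step (a sketch, assuming a reasonably recent Ceph release):

      ceph osd df tree                 # compare %USE and PG count per OSD
      ceph balancer status
      ceph balancer mode upmap
      ceph balancer on                 # let the balancer even out PG placement
      # Manual fallback if the balancer cannot be used: ceph osd reweight-by-utilization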
  7. P

    VM migration speed question

    Hi colleagues, I would like to ask you about migration speed between PVE cluster nodes. I have a 3-node PVE 8 cluster with 2x40G network links: one for the CEPH cluster network (1) and another one for the PVE cluster/CEPH public network (2). The CEPH OSDs are all NVMe. In the cluster options I've also set one of these...
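    Live-migration traffic follows the migration network configured in /etc/pve/datacenter.cfg, so pinning it to the 40G link (and optionally using insecure mode to skip SSH encryption overhead on a trusted network) is usually what moves the needle; a sketch with a placeholder subnet:

      # /etc/pve/datacenter.cfg -- 10.10.10.0/24 stands in for the 40G subnet
      migration: insecure,network=10.10.10.0/24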
  8. W

    Virtual Machines and Container extremely slow

    Dear Proxmox experts, for some days now the performance of every machine and container in my cluster has been extremely slow. Here is some general info about my setup: I am running a 3-node proxmox cluster with up-to-date packages. All three cluster nodes are almost identical in their hardware specs...
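    Before digging into individual guests, a quick cluster-wide health and latency survey usually narrows this kind of problem down; a sketch of standard checks:

      ceph -s            # overall health, ongoing recovery/backfill activity
      ceph osd perf      # per-OSD commit/apply latency outliers
      ceph df            # pool fullness (performance drops sharply near-full)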
  9. t.lamprecht

    Ceph 18.2 Reef Available and Ceph 16.2 Pacific soon to be EOL

    Hi Community! The recently released Ceph 18.2 Reef is now available on all Proxmox Ceph repositories to install or upgrade. Upgrades from Quincy to Reef: you can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef New installation of Reef: use the updated ceph...
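    A condensed sketch of the repository switch that the linked how-to walks through (assuming PVE 8 on Debian bookworm; follow the wiki for the noout flag and the mon, mgr, OSD restart order):

      # /etc/apt/sources.list.d/ceph.list: switch from ceph-quincy to ceph-reef
      deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription
      apt update && apt full-upgrade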
  10. herzkerl

    Ceph OSD block.db on NVMe / Sizing recommendations and usage

    Dear community, the HDD pool on our 3-node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (enterprise, Samsung PM983/PM9A3). The Ceph documentation recommends sizing block.db at 4% to 6% of the 'block' size: block.db is either 3.43% or around 6%...
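    A worked example of the 4% to 6% rule, plus a way to see how much of the DB an OSD actually uses (the OSD ID is a placeholder):

      # 4% of a 4TB HDD  = 160GB block.db;  6% = 240GB
      # 4% of an 8TB HDD = 320GB block.db;  6% = 480GB
      # Check real bluefs/db usage on a running OSD:
      ceph daemon osd.1 perf dump bluefs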
  11. T

    HA fencing during an update of another node

    Good morning everyone, we had a very strange case last week. We have been running a 10-node PVE cluster (incl. CEPH) for several years and have never had any notable problems until now. The cluster runs extremely stably and we are very satisfied. But: last week we...
  12. B

    Advice on increasing ceph replicas

    Hi, I am after some advice on the best way to expand our ceph pool. Some steps have already been undertaken, but I need to pause until I understand what to do next. Initially we had a proxmox ceph cluster with 4 nodes, each with 4 x 1TB SSD OSDs. I have since added a 5th node with 6 x 1TB SSD...
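    Raising the replica count is a per-pool setting and Ceph backfills the extra copies afterwards, so raw capacity and recovery load are the main things to plan for; a sketch with a placeholder pool name:

      ceph df                                # confirm enough raw capacity for the extra copies
      ceph osd pool set mypool size 3        # target replica count (mypool is a placeholder)
      ceph osd pool set mypool min_size 2    # still accept writes with one copy missing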
  13. R

    Ceph RBD image encryption

    Hi there! Has anyone used or had experience with activating Ceph's RBD image encryption? What I want is to have encrypted disks for some VMs. OSD encryption doesn't solve this case, as it doesn't protect against an attacker gaining access to the host. I also had a look...
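    For reference, librbd can format an image with LUKS-style encryption since the Pacific release; a minimal sketch, where the pool, image and passphrase file names are placeholders and key handling is left entirely to you:

      # Format an existing image with LUKS2 (all names below are placeholders).
      rbd encryption format pool/vm-100-disk-0 luks2 passphrase.bin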
  14. I

    [SOLVED] Issue with ceph monitoring after upgrade to latest 7.4 (before upgrading to 8)

    I am trying to upgrade to Proxmox 8. After finishing the update of all nodes to 7.4-16 (and rebooting each node after install) and updating Ceph from Pacific to Quincy, I just noticed that in the Ceph performance tab I don't see traffic (I usually have around 300-6000 MB/s) with 1000+ IOPS. Systems are...
  15. B

    I need a new cluster strategy - cloud computing

    Hello, I am a long-time proxmox user. We have purchased the following hardware for a new project and are about to launch a cloud computing platform at the entry stage. But I still haven't settled on the installation strategy and scenario. Hardware: 5 x Dell PowerEdge R630 1 x Dell Unity 600F...
  16. E

    rbd map fails

    Hi there, we are trying to use the Ceph cluster for persistent storage in our local OKD installation with Rook. The operator creates the block storage correctly in the ceph pool, but pods and also local clients are not able to map the storage. rbd ls --id=admin -m...
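    With the kernel RBD client, map failures are most often caused by missing auth arguments or image features the kernel does not support; a sketch with placeholder pool and image names:

      # Map with explicit credentials (pool/pvc-image are placeholders).
      rbd map pool/pvc-image --id admin --keyring /etc/ceph/ceph.client.admin.keyring
      # If the kernel rejects unsupported features, check dmesg and disable them:
      dmesg | tail
      rbd feature disable pool/pvc-image object-map fast-diff deep-flatten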
  17. B

    [SOLVED] qm remote-migrate with shared storage

    Hi, I have a use case where we have a fairly large PVE 7 cluster connected to an external ceph cluster. We would like to set up a second PVE cluster in the same physical location because the current cluster is now pushing 36 hosts. The new cluster will be connected to the same external ceph...
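    For reference, qm remote-migrate (still marked experimental) takes an API-token endpoint plus a target storage and bridge; roughly the following shape, where every host, storage, bridge and token value is a placeholder -- check qm help remote-migrate for the exact endpoint syntax:

      qm remote-migrate 100 100 \
        'host=203.0.113.10,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=AA:BB:...' \
        --target-storage ceph-rbd --target-bridge vmbr0 --online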
  18. R

    CEPH in PVE 7.3 can not work with RDMA/RoCE?

    Hi, PVE geeks: I built a pve cluster on three servers (with ceph), with pve & ceph package versions as follows: root@node01:~# pveversion pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve) root@node01:~# ceph --version ceph version 16.2.13 (b81a1d7f978c8d41cf452da7af14e190542d2ee2)...
  19. P

    What happens to the data from a failed volume move to ceph?

    Hi there, I am playing with Ceph and a three node cluster for learning. I have a 4TB turnkey filestore container using ZFS storage on one node. I have been moving its volume into a new Ceph pool. This move failed the first couple of times, partly due to me not providing enough space and partly...
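    An aborted volume move usually cleans up the partial target image, but leftovers do happen, so it is worth comparing the pool contents against the guest configs; a sketch with placeholder pool and image names:

      rbd ls mypool                      # list images; compare against pct/qm configs
      rbd rm mypool/vm-104-disk-0        # remove a confirmed leftover (names are placeholders)
      ceph df                            # space is only released once the image is gone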
  20. L

    [SOLVED] Ceph snaptrim causing performance impact on whole cluster since update

    Hi, I upgraded a cluster all the way from Proxmox 6.2/Ceph 14.x to Proxmox 8.0/Ceph 17.x (latest). Hardware is Epyc servers, all flash / NVMe. I can rule out hardware issues, and I can reproduce the problem. Everything is running fine so far, except that my whole system gets slowed down when I...
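    Snaptrim load can be throttled per OSD, and after a multi-release upgrade an online RocksDB compaction often helps as well; a sketch where the values are examples, not tuned recommendations:

      # Slow down snapshot trimming on flash OSDs (default sleep is 0).
      ceph config set osd osd_snap_trim_sleep_ssd 0.1
      # Limit how many PGs trim concurrently per OSD.
      ceph config set osd osd_max_trimming_pgs 1
      # Compact RocksDB on all OSDs (can also be done one OSD at a time).
      ceph tell 'osd.*' compact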
