ceph

  1. M

    IS IT POSSIBLE TO MODIFY vmbr0 BRIDGED ON eth0 TO wlan0?

    Hey folks, hope you are all doing great. After enormous trial-and-error attempts (see my previous post), I have managed to get my 3 Raspberry Pi CM4 boards orchestrating together in a cluster via Proxmox 7. The issue I am trying to tackle now is how to modify the default bridge network (vmbr0)...
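
    The bridge port is set in /etc/network/interfaces; a minimal sketch of what the asker is after might look like the following (addresses and interface names are assumptions). Note that most Wi-Fi drivers refuse to be enslaved to a Linux bridge while in client mode, so a routed or NAT setup is often the practical answer instead.

    ```
    # /etc/network/interfaces (sketch, values assumed)
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports wlan0   # swapped from eth0; many Wi-Fi adapters will not accept this
        bridge-stp off
        bridge-fd 0
    ```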
  2. powersupport

    For Ceph, may I know what's the latency requirement between nodes?

    We have a 5-node cluster with at least 4 disks in each node. May I know what the latency requirement is between these nodes?
  3. P

    HA migration not working

    I'm having trouble migrating VMs when shutting down a node (failover doesn't work). It is a 3-node cluster (Dell: 2x R710 and R510) with Proxmox 8.0.3 and Ceph version 17.2.6 Quincy. VM test to migrate: /etc/pve/101.conf: No such file or directory root@svr1:/etc/pve# cat...
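
    For anyone hitting the same error: VM configs are not stored directly in /etc/pve but under the owning node's qemu-server directory, so a quick way to locate the config and check the HA state is sketched below (VMID taken from the post, paths are the standard ones).

    ```
    ls /etc/pve/nodes/*/qemu-server/101.conf   # find which node currently owns VM 101
    cat /etc/pve/qemu-server/101.conf          # works on the owning node only
    ha-manager status                          # state of HA-managed resources
    ```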
  4. P

    How can I keep all PVE nodes from restarting when HA is enabled and the Ceph switch goes down?

    Dear experts and professors, do you have any good solutions to this problem? This is my PVE setup: HA is enabled, and some virtual PCs are included in HA. One day my Ceph switch went down, and then all of the PVE nodes restarted afterwards. This is a production environment; we want, in this case, all of the PVE...
  5. P

    pveceph install kept switching to enterprise repo

    On a Proxmox 8.0.4 node, I have the following in /etc/apt/sources.list.d/ceph.list: deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription I executed apt update, then ran pveceph install, but I always received the following message: WARN: Enterprise repository selected...
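
    On PVE 8 the repository can also be selected explicitly when invoking the installer, which avoids the enterprise default; a hedged one-liner, assuming the no-subscription repo configured above:

    ```
    pveceph install --repository no-subscription
    ```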
  6. L

    CephFS doesn't mount on CentOS 8

    Hi! Please help me understand what is happening. I have PVE 7 and Ceph version 17.2.5. I created a CephFS and tried to mount it on my freshly installed CentOS 8 server, but there is an error in the CLI: mount -t ceph 10.20.0.120:/cephfs /mnt/cephfs -o name=user_test,secret=%ANY-SECRET% mount: /mnt/cephfs...
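
    A sketch of a mount that usually works, assuming the default monitor port, the default filesystem name, and a key stored in a secretfile (client name and IP taken from the post, everything else assumed); the client also needs a keyring created with CephFS caps:

    ```
    # on the Ceph/PVE side (filesystem name "cephfs" is an assumption)
    ceph fs authorize cephfs client.user_test / rw

    # on the CentOS 8 client: put only the base64 key into the secretfile, then mount
    mount -t ceph 10.20.0.120:6789:/ /mnt/cephfs \
        -o name=user_test,secretfile=/etc/ceph/user_test.secret
    ```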
  7. M

    Unable to start Ceph

    Hi, I just did a hard disk swap and all the OSDs on one node are unable to start with `systemctl start ceph-osd@0`. The output of systemctl status ceph-osd@0 is: ceph-osd@0.service - Ceph object storage daemon osd.0 Loaded: loaded (/lib/systemd/system/ceph-osd@.service...
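
    A short diagnostic sequence for this situation, using standard tooling only (OSD ID taken from the post):

    ```
    journalctl -u ceph-osd@0 -b --no-pager | tail -n 50   # why the daemon exits
    ceph-volume lvm list                                   # OSD-to-device mapping after the swap
    ceph-volume lvm activate --all                         # re-activate OSDs whose devices reappeared
    ```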
  8. S

    Ceph storage use grows rapidly despite space allocation

    I have configured a 3-node Proxmox cluster with an equal number of OSDs and equal storage per node. The storage use keeps increasing despite the allocated space; the used percentage has grown from 99 percent to 100 percent. Why is this happening?
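
    Two commands that usually explain this kind of growth, assuming an RBD pool with the default 3x replication (so every byte written consumes three bytes of raw capacity):

    ```
    ceph df detail        # STORED vs USED per pool; USED includes the replication overhead
    rbd du -p <pool>      # per-image provisioned vs actually used space (pool name is a placeholder)
    ```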
  9. M

    Replace 3 TB drives with 1 TB drives (Ceph)

    My Ceph cluster has 3x 3 TB and 3x 1 TB drives with SSD WAL and DB. The write speeds are kinda meh on my VMs; from what I understand, the 3 TB drives will get 3x the write requests of the 1 TB drives. Is my understanding correct? And would it be better if I swapped my 3 TB with 1 TB drives, making it 2...
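
    The understanding is correct in the sense that CRUSH weight defaults to the device size, so a 3 TB OSD receives roughly three times the PGs, and therefore writes, of a 1 TB OSD. A hedged way to check and, if desired, cap the larger drives (the OSD ID is an assumption):

    ```
    ceph osd df tree                    # CRUSH weight (~ size in TiB) and PG count per OSD
    ceph osd crush reweight osd.3 1.0   # example: weight a 3 TB OSD like a 1 TB one, at the cost of unused capacity
    ```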
  10. S

    Upgrading Proxmox VE without Ceph-cluster

    Hi. I have a question regarding the upgrade process for Proxmox VE in combination with Ceph. Currently, my Proxmox VE setup is running version 7, and I also have Ceph installed with version 15.2.17 (Octopus). I am planning to upgrade Proxmox VE to version 8, as per the official upgrade...
  11. F

    PVE can't delete images in CEPH Pool

    Hello, I have a ceph pool "SSD_POOL" and I can't delete unused images inside it. Has anyone gone through something similar? I'm trying to remove, for example, the vm-103-disk-0 image
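
    Snapshots or a lingering watcher (a client that still has the image mapped) are the usual blockers; a sketch using only standard rbd commands and the names from the post:

    ```
    rbd -p SSD_POOL ls -l                    # images plus snapshot/clone info
    rbd status SSD_POOL/vm-103-disk-0        # any watchers still attached?
    rbd snap purge SSD_POOL/vm-103-disk-0    # snapshots must go before the image can
    rbd rm SSD_POOL/vm-103-disk-0
    ```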
  12. N

    cloudinit disks not cleaned up

    Hi, we are deploying cloud-init images from Terraform (telmate/proxmox) by cloning a template in Proxmox that is already configured with cloud-init. The new machine gets created with the next available VMID, and a small 4 MB disk is created to feed the cloud-init settings. The disk created for cloud-init is...
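
    Orphaned cloud-init volumes can be listed and removed with the storage tooling; a sketch, with the storage and volume names as placeholders:

    ```
    pvesm list <storage> | grep cloudinit     # leftover vm-<id>-cloudinit volumes
    pvesm free <storage>:vm-123-cloudinit     # remove one that no longer belongs to a VM
    ```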
  13. V

    Ceph: hot refitting a disk

    Hi all, we needed to replace a drive caddy (long story) for a running drive on a Proxmox cluster running Ceph (15.2.17). The drives themselves are hot-swappable. First I stopped the OSD, pulled out the drive, changed the caddy, and refitted the (same) drive. The drive quickly showed up in Proxmox...
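
    For reference, the usual sequence for briefly pulling a healthy drive looks roughly like this (OSD ID assumed); setting noout keeps Ceph from rebalancing while the disk is out:

    ```
    ceph osd set noout
    systemctl stop ceph-osd@12        # pull the drive, swap the caddy, reseat it
    ceph-volume lvm activate --all    # or systemctl start ceph-osd@12 once the device is back
    ceph osd unset noout
    ceph osd tree                     # confirm the OSD is up and in again
    ```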
  14. R

    OSD reweight

    Hello, maybe this has been discussed often, but here is my question too: since we set up our Ceph cluster we have seen uneven usage across all OSDs. 4 nodes with 7x 1 TB SSDs (1U, no space left) + 3 nodes with 8x 1 TB SSDs (2U, some space left) = 52 SSDs, PVE 7.2-11. All Ceph nodes show us the same, like...
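
    A hedged starting point for uneven OSD utilization, using only stock commands:

    ```
    ceph osd df tree                      # per-OSD utilization and PG count
    ceph balancer status                  # the upmap balancer usually evens this out over time
    ceph osd reweight-by-utilization 110  # older-style fix: reweight OSDs above 110% of the average
    ```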
  15. P

    VM migration speed question

    Hi colleagues, I would like to ask you about migration speed between PVE cluster nodes. I have a 3-node PVE 8 cluster with 2x 40G network links: one for the Ceph cluster network (1) and another for the PVE cluster/Ceph public network (2). The Ceph OSDs are all NVMe. In the cluster options I've also set one of these...
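
    The knob usually involved here is the migration network/type in /etc/pve/datacenter.cfg; a sketch, with the CIDR as an assumption (insecure skips TLS and is noticeably faster on a trusted, dedicated link):

    ```
    # /etc/pve/datacenter.cfg
    migration: type=insecure,network=10.10.10.0/24
    ```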
  16. W

    Virtual Machines and Containers extremely slow

    Dear Proxmox experts, for some days now the performance of every machine and container in my cluster has been extremely slow. Here is some general info on my setup: I am running a 3-node Proxmox cluster with up-to-date packages. All three cluster nodes are almost identical in their hardware specs...
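
    A quick first-pass check for cluster-wide slowness on a Ceph-backed setup:

    ```
    ceph -s         # recovery, backfill or slow ops dragging everything down?
    ceph osd perf   # per-OSD latency; one failing disk can stall the whole pool
    ```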
  17. t.lamprecht

    Ceph 18.2 Reef Available and Ceph 16.2 Pacific soon to be EOL

    Hi Community! The recently released Ceph 18.2 Reef is now available on all Proxmox Ceph repositories to install or upgrade. Upgrades from Quincy to Reef: you can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef New Installation of Reef: use the updated ceph...
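
    Condensed from the linked wiki article, the upgrade on each node follows roughly this pattern (the wiki has the exact restart order and health checks):

    ```
    ceph osd set noout
    sed -i 's/quincy/reef/' /etc/apt/sources.list.d/ceph.list
    apt update && apt full-upgrade
    systemctl restart ceph-mon.target   # then ceph-mgr.target, then the OSDs, node by node
    ceph osd unset noout
    ceph versions                       # confirm everything reports 18.2.x
    ```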
  18. herzkerl

    Ceph OSD block.db on NVMe / Sizing recommendations and usage

    Dear community, the HDD pool on our 3-node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (enterprise Samsung PM983/PM9A3). The sizing recommendations in the Ceph documentation suggest 4% to 6% of the 'block' size: block.db is either 3.43% or around 6%...
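
    A quick worked example of those percentages, with the HDD and NVMe sizes as assumptions:

    ```
    # 4 TB HDD OSD:
    #   4% of 4 TB ≈ 160 GB block.db      6% of 4 TB ≈ 240 GB block.db
    # A 1.92 TB NVMe therefore hosts block.db for ~12 such OSDs at 4%, or ~8 at 6%.
    ```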
  19. T

    HA fencing during an update of another node

    Good morning everyone, we had a very strange case last week. We have been running a 10-node PVE cluster (incl. Ceph) for several years and have never had any notable problems until now. The cluster runs extremely stably and we are very satisfied. But: last week we...
  20. B

    Advice on increasing ceph replicas

    Hi, I am after some advice on the best way to expand our Ceph pool. Some steps have already been undertaken, but I need to pause until I understand what to do next. Initially we had a Proxmox Ceph cluster with 4 nodes, each with 4x 1 TB SSD OSDs. I have since added a 5th node with 6x 1 TB SSD...
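
    For completeness, the replica count is changed per pool and triggers backfill across the new node; a sketch with the pool name as a placeholder:

    ```
    ceph osd pool get <pool> size        # current replica count
    ceph osd pool set <pool> size 3      # target replica count (assumed)
    ceph osd pool set <pool> min_size 2  # writes still allowed with one replica missing
    ```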
