ceph

  1. misplaced objects after removing OSD

    Hi! We have 3 identical servers running Proxmox+Ceph with 2 HDDs per server as OSDs: - OS: Debian Buster - Proxmox version 6.4-1 - Ceph version 14.2.22-pve1 (Nautilus) One OSD went down, so we decided to remove it following the Ceph documentation here. Now we have 5 OSDs left: $ sudo ceph osd...
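
    For reference, the upstream removal procedure on Nautilus boils down to roughly the following sketch (osd.3 is a placeholder id, not taken from the thread):

      # Mark the OSD out and wait for rebalancing to finish
      ceph osd out osd.3
      ceph -s                          # wait until all PGs are active+clean

      # Stop the daemon on the node hosting it
      systemctl stop ceph-osd@3

      # Remove it from the CRUSH map, auth keys and OSD map in one step
      ceph osd purge 3 --yes-i-really-mean-it

    Objects show up as "misplaced" while CRUSH re-homes their placement groups; the count should drain to zero as backfill proceeds.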
  2. Proxmox three node cluster - ceph - got timeout

    Hello, I have a three-node Proxmox cluster: Optiplex 7020, Xeon E3-1265Lv3, 16GB RAM, 120GB SSD for the OS, 512GB NVMe for Ceph, 1GbE network for "external" access, dual 10GbE network (for the cluster). The network is connected as stated here: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server as routed...
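
    From memory, the routed variant on that wiki page amounts to something like the following /etc/network/interfaces fragment per node (NIC names and addresses below are placeholders, not the poster's; check the wiki page for the authoritative version):

      auto ens19
      iface ens19 inet static
          address 10.15.15.50/24
          # direct link to node2
          up   ip route add 10.15.15.51/32 dev ens19
          down ip route del 10.15.15.51/32

      auto ens20
      iface ens20 inet static
          address 10.15.15.50/24
          # direct link to node3
          up   ip route add 10.15.15.52/32 dev ens20
          down ip route del 10.15.15.52/32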
  3. CephFS vs VirtIO SCSI Write IOPS

    Hi, I've been testing our Proxmox Ceph cluster and have noticed something interesting. I've been running fio benchmarks against a CephFS mount and within a VM using VirtIO SCSI. CephFS on /mnt/pve/cephfs - root@pve03:/mnt/pve/cephfs# fio --name=random-write --ioengine=posixaio --rw=randwrite...
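
    The command is cut off above; a comparable 4k random-write run would look something like this (block size, file size and runtime here are assumptions, not the poster's values):

      fio --name=random-write --ioengine=posixaio --rw=randwrite \
          --bs=4k --size=1g --numjobs=1 --iodepth=1 \
          --runtime=60 --time_based --end_fsync=1

    Note that posixaio with iodepth=1 measures single-threaded latency more than aggregate throughput, which matters when comparing a CephFS mount against a VirtIO SCSI disk inside a VM.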
  4. Add new OSD to existing CEPH POOL

    Hi all, I have a 4-node Proxmox cluster with Ceph; only three nodes are monitors. Each node has 3 SSDs and 2 (normal) HDDs, and there are two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node to extend the existing HDD pool, but it's taking more time than I expected. This is the...
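
    For context, extending a class-based pool is usually just creating the OSD and letting its device class route it into the right pool (device path and id below are placeholders):

      # Create the OSD on the new disk (Proxmox wrapper around ceph-volume)
      pveceph osd create /dev/sdf

      # Nautilus detects the hdd/ssd device class automatically; verify:
      ceph osd tree
      # Correct it by hand if needed:
      ceph osd crush rm-device-class osd.12
      ceph osd crush set-device-class hdd osd.12

    The backfill that follows is expected to take a while, since existing data rebalances onto the new OSDs.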
  5. Mixed ZFS and CEPH Cluster

    Hello, I have a 21-node cluster with separate ZFS filesystems. Can I add 2 or 3 more nodes configured with CephFS? Could there be any drawbacks? Thanks!
  6. Advice for new Hyper converged platform

    Hello All, We have been using XEN and then XCP-NG for many years, and recently I stumbled on a few articles and recommendations for Proxmox. Looking at the product itself, it appears very comprehensive in terms of features and manageability, especially the hyper-converged options and the fact that...
  7. Ceph pool shrinking fast after expansion with OSDs (cluster outage likely tomorrow)

    Hello everyone, after adding an SSD pool to my existing HDD pool, the HDD pool is shrinking extremely fast, so a production outage is probably imminent tomorrow. Original environment: 3-node hyper-converged cluster (PVE vers. 6.3-6) with distributed Ceph (vers...
  8. optimal ceph configuration for a 4 server 2 location HA setup

    Hi All, We have 2 datacenters, one in each of our two buildings. We are setting up a new Proxmox VE HA cluster of 4 machines, the idea being that if one building goes down for an extended time, the other 2 servers will be able to keep everything up. In this setup each server has 8 SSDs. One SSD...
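
    One way to express the two-building layout in CRUSH, sketched with placeholder bucket and node names:

      # Model the buildings in the CRUSH hierarchy
      ceph osd crush add-bucket dc1 datacenter
      ceph osd crush add-bucket dc2 datacenter
      ceph osd crush move dc1 root=default
      ceph osd crush move dc2 root=default
      ceph osd crush move node1 datacenter=dc1
      ceph osd crush move node2 datacenter=dc1
      ceph osd crush move node3 datacenter=dc2
      ceph osd crush move node4 datacenter=dc2

      # Replicate across datacenters first, then hosts
      ceph osd crush rule create-replicated dc_rule default datacenter

    Keep in mind that Ceph monitors and Proxmox corosync still need a quorum majority, so surviving a whole-building loss also needs a tie-breaker (e.g. a small third site or a QDevice).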
  9. OVH and Proxmox

    We are looking to add some services in Europe for clients and have been scouting for suitable hosting space. OVH seems to be one of the only options that offer Proxmox hosting. However, among their many options, we can't quite figure out which is the most suitable one. We would...
  10. Problem with mds

    Hi all, I have a cluster with 3 nodes (pve, pve1, pve2) Here the version information: root@pve:~# pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve) pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) pve-kernel-5.4: 6.4-11 pve-kernel-helper: 6.4-11 pve-kernel-5.3: 6.1-6...
  11. Ceph Pool Reports Usage But is Empty

    I have recently migrated all VMs on a PVE cluster from an HDD pool to an SSD pool. Now that the HDD pool is empty (no VMs), Ceph still reports 31% usage on the pool. This Ceph installation has been in use for a while now, upgraded from Ceph 12 to 14 to 15. This is the second cluster where I noticed large...
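
    A few commands that can show where the phantom usage lives (the pool name hdd_pool below is a placeholder):

      # Per-pool usage and object counts
      ceph df detail
      rados df

      # Any objects, RBD images or trashed images still in the pool?
      rados -p hdd_pool ls | head
      rbd ls -p hdd_pool
      rbd trash ls -p hdd_pool

    Leftover usage on an "empty" pool often turns out to be trashed images or snapshot data that has not been reclaimed yet.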
  12. Accidentally deleted /var/log, cluster having massive issues

    Hi there, I was doing a routine cleanup of /var/log files, typo'd my rm command, and deleted everything inside /var/log. I have since recreated the directory structure and files, as well as fixed permissions. However, my servers are still turning to the grey question...
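
    The grey question marks usually mean pvestatd/pveproxy cannot write their logs; a minimal recovery sketch (the directory list may be incomplete for a given setup):

      mkdir -p /var/log/pve/tasks /var/log/pveproxy /var/log/ceph /var/log/corosync
      touch /var/log/pveproxy/access.log
      chown -R www-data:www-data /var/log/pveproxy   # pveproxy runs as www-data
      systemctl restart pveproxy pvedaemon pvestatd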
  13. Not able to read from Ceph

    I have just converted a 2-node cluster to use the Ceph file store; the migration seemed to go well, with all the data written onto Ceph. When it came time to fire up the VMs, I noticed that they just sit there trying to start. I created some backups on the CephFS (which worked fine), then I tried...
  14. Ceph new rule for HDD storage.

    Hi guys, In my free time I've been thinking about how to extend and add new storage resources to our cloud. At the moment I have storage based on Ceph with SSD-only OSDs. I've been reading the Ceph docs and I can say it's possible; I even have a plan. The problem is I have no idea if the actions I...
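
    The usual approach here is device-class rules, sketched below with placeholder pool and rule names:

      # One replicated rule per device class
      ceph osd crush rule create-replicated replicated_ssd default host ssd
      ceph osd crush rule create-replicated replicated_hdd default host hdd

      # Pin the existing pool to the ssd rule before adding HDD OSDs,
      # then point a new pool at the hdd rule
      ceph osd pool set vm-pool crush_rule replicated_ssd
      ceph osd pool set hdd-pool crush_rule replicated_hdd

    Pinning the existing pool first matters: the default rule spans all device classes, so newly added HDD OSDs would otherwise start receiving SSD-pool data.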
  15. ceph pool compression lz4

    Hi, to save some space on our SSD pool I've enabled compression on the pool: ceph osd pool set VMS compression_algorithm lz4 ceph osd pool set VMS compression_mode aggressive With ceph df detail I can get some details but cannot verify whether it works. Any hints? Is "ratio" needed as well...
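
    A few ways to check, assuming the pool name VMS from the post; the "ratio" in question is presumably compression_required_ratio, which defaults to 0.875 and rarely needs changing:

      # Confirm the settings took effect
      ceph osd pool get VMS compression_algorithm
      ceph osd pool get VMS compression_mode
      ceph osd pool get VMS compression_required_ratio

      # USED COMPR / UNDER COMPR columns appear here once compressed
      # data has been written
      ceph df detail

    Only data written after enabling compression gets compressed; existing objects stay uncompressed until they are rewritten.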
  16. Ceph vs ZFS - Which is "best"?

    TLDR: Ceph vs ZFS: advantages and disadvantages? Looking for thoughts on implementing a shared filesystem in a cluster with 3 nodes. They all serve a mix of websites for clients that should be served with minimal downtime, and some infrastructure nodes such as Ansible and pfSense, as well as...
  17. Ceph CRUSH map reset after reboot

    Hello everyone, I have been using PVE at home for over 5 years for quite a few VMs, so far with ZFS. The host is an HP DL380P Gen8 with 56GB RAM, and that works flawlessly so far. Now to my plan: I have read up a bit on Ceph and wanted to test whether it works as a storage pool for...
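
    If the symptom is hand-edited CRUSH locations snapping back after a reboot, the usual culprit is OSDs re-registering their location on daemon start; that behaviour can be disabled in ceph.conf (sketch, worth verifying against the docs for your version):

      # /etc/ceph/ceph.conf
      [osd]
      osd crush update on start = false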
  18. Help with Ceph

    Hi everyone, I currently run a 3-node Ceph cluster that I want to upgrade with enterprise SSDs, possibly to park the journal (db/wal) on them. First, the specs of the 3 nodes. node1: Xeon E5-2630L v3, 3x6TB HDD (another 6TB and a 3TB HDD to follow), 1x M.2 256GB (Ceph pool for VMs), 1x120GB...
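
    For newly created OSDs, placing the RocksDB/WAL on the SSD is a flag at creation time (device paths and size below are placeholders; existing OSDs would need to be recreated or migrated with ceph-volume/ceph-bluestore-tool):

      # Create an HDD OSD with its DB (and implicitly WAL) on the SSD;
      # db_size is in GiB (assumed here, check pveceph's docs)
      pveceph osd create /dev/sdb --db_dev /dev/sdf --db_size 60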
  19. [SOLVED] Adding New node Cluster - Ceph got timeout(500)

    Hello guys, After adding a new node to the cluster (all successful), I ran into a problem when installing and managing Ceph on the new node from the cluster. This is a production cluster. The new node is not a monitor or manager, and none of its drives were used as OSDs. Because when I try to reach the interface of...
  20. Ceph latency remediation?

    Hi all, I'm having what seems to be a network bottleneck. Context: one of my clients wants to revamp their infrastructure and was already happy with PVE servers despite having only local ZFS-backed images, missing out on the broad possibilities offered by Ceph... I wanted to push them to go...
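
    Two quick checks that help separate a network bottleneck from slow disks (the address below is a placeholder):

      # Raw bandwidth between two nodes (iperf3 installed on both sides)
      iperf3 -s                      # on node A
      iperf3 -c 10.15.15.51 -t 30    # on node B

      # Per-OSD commit/apply latency as seen by Ceph itself
      ceph osd perf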
