ceph

  1. migration path from ZFS via iSCSI to ceph?

    When it comes to storage, we have been using ZFS over iSCSI in our clusters for years. Now for a couple of new projects, we require S3 compatible storage and I am unsure about the best way to handle this situation. I am tempted to use MinIO, but I've read mixed reviews about it and Ceph seems...
  2. Proxmox backup hangs pruning older backups

    Hello, I have a problem when backing up to a Ceph cluster of spinning disks. I have a cluster of 27 server-class nodes with 60 OSDs on a 10-gig network. If I back up ~10 VM/CTs it works fine. Upping that number to ~20, the backup grinds to a halt (write bandwidth in the KB/s range) but...
  3. Proxmox ceph monitoring via zabbix agent

    Hi guys! At the moment we are trying to configure monitoring following the manual, but now we get this error: Cannot fetch data: Post "https://admin:***@172.16.133.200:6856/request?wait=1": tls: first record does not look like a TLS handshake. We are trying to get information from the ceph-manager via ports 6856 and 6857, but...
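    A hedged starting point, not taken from the thread: first check whether that port speaks TLS at all, and whether the ceph-mgr restful module (which listens on port 8003 by default) is enabled and has a certificate. The commands below are standard Ceph CLI; adjust names and addresses to your setup.

        # does the endpoint answer with a TLS handshake at all?
        openssl s_client -connect 172.16.133.200:6856 </dev/null

        # state of the restful module and the endpoint it really listens on
        ceph mgr module enable restful
        ceph restful create-self-signed-cert   # the module will not serve without a certificate
        ceph restful create-key admin          # prints the API key used in the URL
        ceph mgr services                      # shows the actual https endpoint and port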
  4. lvm after multipathing

    Hi there, PVE 7.2-11, 3-node cluster, Ceph. We're hitting the problem that LVM starts before multipathing, so logical volumes are inactive after rebooting. Any idea?
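    A common workaround, sketched under the assumption that the affected LVs sit on multipathed LUNs: restrict LVM scanning to the multipath devices (plus the local boot disk) so LVM cannot activate on the raw /dev/sd* paths before multipathd has assembled them, then rebuild the initramfs.

        # /etc/lvm/lvm.conf, devices { } section; the filter below is illustrative only,
        # keep an accept entry for whatever disk holds your local (non-multipath) volumes
        global_filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda.*|", "r|.*|" ]

        # apply the filter at early boot as well
        update-initramfs -u -k all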
  5. Ceph SATA SSD recommendations?

    I've been looking for some decent SATA SSDs for Ceph. I don't even need a lot of space; a pile of a few 120GB or 250GB drives would be more than enough to move my IO-intensive loads onto. However, I really have no idea what to buy. I've seen a few people say this Micron SSD or that one, but I can...
  6. Disk Move issue from NFS

    Hi, recently I have migrated a disk from Azure to Proxmox. I used an NFS share to easily download the disk directly to Proxmox and then attached it to a newly created machine, which works fine! After testing everything I wanted to move the disk to Ceph, but this does not seem to work. Even migrating...
  7. How to fence off ceph monitor processes?

    In the continuous process of learning about running a PVE environment with Ceph, I came across a note regarding Ceph performance: "... if running in shared environments, fence off monitor processes." Can someone explain what is meant by this and how one achieves it? Thanks!
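    One plausible reading of that note (an assumption, not an official definition): keep the monitor daemons from competing with OSDs and guest workloads for CPU and RAM, for example with a systemd drop-in that pins ceph-mon to dedicated cores and caps its memory. A sketch with placeholder values:

        # systemctl edit ceph-mon@<nodename>    (<nodename> is your host's mon instance)
        [Service]
        CPUAffinity=0 1
        MemoryMax=4G

        # then restart the monitor so the limits apply
        systemctl restart ceph-mon@<nodename>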
  8. ceph filesystem stuck in read only

    Hi, I'm looking for some help/ideas/advice to solve a problem that occurs on my metadata server after a server reboot. "ceph status" warns that my MDS is "read only", but the filesystem and the data seem healthy. It is still possible to access the content of my CephFS volumes...
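    A hedged set of first diagnostic steps (standard Ceph commands, not a fix for this specific cluster): an MDS typically drops to read-only after a failed write to its metadata pool, so check the health detail and the MDS log, then fail the active daemon so a standby replays the journal.

        ceph health detail              # usually states why the MDS went read-only
        ceph fs status                  # which MDS is active, which are standby
        ceph mds fail <mds-name>        # <mds-name> is a placeholder; a standby takes over
        journalctl -u ceph-mds@<host>   # MDS log on the node running the daemon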
  9. All NVME Cluster with Ceph

    I am working with a prospect to build out a new Proxmox cluster and looking at using Ceph with all-NVMe drives. Is there guidance/best practice for this sort of setup, in terms of how much RAM to plan for Ceph overhead per node, Ceph setup with all NVMe, Ceph setup for EC vs. replicas, etc...
  10. Linked clone support with Ceph?

    Are there any issues with creating a linked clone to a Ceph RBD share, or is there a faster way to do this? I’m trying to find the best way to share “live” data between multiple PVE hosts, with a high transfer rate. The VM images that I want to share are in qcow2 format.
  11. Ceph uses false osd_mclock_max_capacity_iops_ssd value

    Hello everyone, I recently installed Ceph into my 3-node cluster, which worked out great at first. But after a while I noticed that the Ceph pool would sometimes hang and stutter. That's when I looked into the configuration and saw this: I use 3 identical SSDs and checked whether every node uses...
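    For reference, a sketch of how the measured value can be inspected and overridden with standard Ceph config commands (osd.0 and the value 20000 below are placeholders, not recommendations):

        # what each OSD is currently using
        ceph config dump | grep osd_mclock_max_capacity_iops
        ceph config show osd.0 osd_mclock_max_capacity_iops_ssd

        # either drop the bad auto-measured value so it is re-benchmarked on restart ...
        ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd

        # ... or set an explicit value per OSD
        ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 20000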
  12. ceph 3 node cluster disaster mode?

    I am trying to find out whether it is possible to run a 3-node cluster with Ceph storage down to a single server. Yes, I know it is not ideal. I do have a UPS (2 hours) and a generator. But in the event I need to trim the cluster down to a single server by migrating all VM/CTs to a single...
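    The two knobs usually involved, as a hedged sketch (the pool name "rbd" is a placeholder, and with a single replica you run without any redundancy): PVE quorum and Ceph pool replication. Note that the Ceph monitors also need a majority, so with three MONs a single surviving node will not have MON quorum on its own.

        # on the last remaining node, let the PVE cluster become quorate with one vote
        pvecm expected 1

        # allow a Ceph pool to keep serving I/O with a single replica left
        ceph osd pool set rbd min_size 1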
  13. ceph high latency after Proxmox7.2 update

    Hi everybody, we have a 5-node Proxmox cluster with 800+ containers. We were finally able to upgrade from Proxmox 6 to Proxmox 7.2 this week. After the upgrade, all 40 Ceph OSDs went from 1-2 ms latency to 30 ms and up. The latency went up when we booted the servers into Proxmox 7.2 and Ceph 15...
  14. VM storage traffic on Ceph

    Hello, I think I have misunderstood how some of the different networks function within Proxmox. I have a cluster of 9 nodes. Each node has two network cards: a 40 Gbit/s card dedicated to Ceph storage, and a 10 Gbit/s card for all other networking (management/corosync, user traffic). I had assumed...
  15. [SOLVED] cephadm: No module named 'remoto' after PVE update

    Hello, I recently updated all three of our PVE hosts, which are also in a Ceph cluster. During the update Ceph was upgraded to 16.2.9, but now I'm getting 3 warnings for the Ceph cluster: Module 'cephadm' has failed dependency: loading remoto library: No module named 'remoto', 3 stray host(s)...
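    Since this thread is marked solved: on PVE, where cephadm is not used to orchestrate the cluster, the usual resolution is simply to disable the failing mgr module. A short sketch:

        # PVE manages Ceph itself, so the cephadm orchestrator module can be switched off
        ceph mgr module disable cephadm

        # the "stray host(s)" warning comes from the same module; verify health afterwards
        ceph health detail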
  16. Ceph Quincy "Daemons have recently crashed" after node reboots

    Wondering if anyone else has observed this, or if I missed a memo on how to fix it (or maybe I'm doing something wrong!) Since updating my homelab and office production server clusters to Ceph Quincy earlier this year, we get "Daemons have recently crashed" errors after doing routine cluster...
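    For the health warning itself (separate from whatever causes the crashes), the standard crash-module commands apply; a brief sketch:

        ceph crash ls                 # list recorded crashes
        ceph crash info <crash-id>    # inspect one (<crash-id> taken from the list)
        ceph crash archive-all        # acknowledge them so the warning clears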
  17. Proxmox & Ceph on multi-node chassis

    Hi all, I'm currently running a 3-node Proxmox Ceph cluster. Everything is hyper-converged, meaning all nodes act as both storage and compute nodes. This works really well and I've come to really like and appreciate Ceph (as well as Proxmox!) While 3 nodes are the bare minimum, I'd like to...
  18. Ceph: pg 1.0 inactive for days, slow ops

    There seems to be a problem with pg 1.0 and with my understanding of placement groups, pools and OSDs. Yesterday, I removed osd.0 in an attempt to get the contents of pg 1.0 moved to another OSD. But today it was stuck inactive for 24 hours, so my attempt resulted in resetting the inactive state...
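    A hedged set of diagnostic commands for locating where such a PG is stuck (pg 1.0 belongs to the pool with id 1, often an internal pool such as .mgr or device_health_metrics on recent releases):

        ceph health detail              # names the stuck PGs and the reason
        ceph pg dump_stuck inactive     # stuck PGs and their acting OSD sets
        ceph pg 1.0 query               # detailed state, answered by the PG's primary OSD
        ceph osd pool ls detail         # which pool has id 1, plus size/min_size/crush rule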
  19. [SOLVED] Mount CephFS as directory?

    Hello everyone, I am in a weird situation with one of the clusters I administer regarding storage configuration. I used to have NFS storage with files shared among all the PVE nodes in the cluster. As I decided to migrate to Ceph, I just installed it, configured everything and created a CephFS...
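    Since this is marked solved, a minimal sketch for others in the same spot (monitor address, paths, storage ID and secret file are placeholders): mount CephFS with the kernel client, then point a PVE directory storage at the mountpoint. PVE also ships a native "cephfs" storage type, which is usually the simpler route.

        # kernel-client mount of the CephFS root
        mkdir -p /mnt/cephfs
        mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

        # expose the mountpoint to PVE as a directory storage
        pvesm add dir cephfs-dir --path /mnt/cephfs --is_mountpoint yes --content backup,iso,vztmpl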
  20. Ceph disk planning OSD journal

    I have 3 nodes (planning a 3-node PVE cluster with Ceph), each with 2x 12-core CPUs / 128 GB RAM / 2x 240 GB SATA SSD / 6x 1.2 TB 10k SAS / HPE RAID controller in HBA mode / 2x 10G + 4x 1G Ethernet, plus an NFS storage node with 2x 12-core CPUs / 64 GB RAM / 2x 240 GB SATA SSD / 10x 6 TB 7200 rpm enterprise SATA / HPE RAID controller in HBA mode / 2x 10G + 4x 1G Ethernet. I plan to deploy...
