ceph

  1. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago, but I could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2, with a Ceph pool of 24 SAS3 OSDs (4 or 8 TB each; more will be added soon), split across 3 nodes (one more node will be added this week). We plan to add more...
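
    The usual way to keep different media types in separate pools is CRUSH device classes. A minimal sketch, assuming hypothetical rule and pool names:

      # Ceph assigns each OSD a device class (hdd/ssd/nvme); check the CLASS column:
      ceph osd tree
      # Create per-class replicated rules (names here are examples):
      ceph osd crush rule create-replicated replicated-nvme default host nvme
      ceph osd crush rule create-replicated replicated-ssd  default host ssd
      # Pin a pool to one class of devices:
      ceph osd pool set my-nvme-pool crush_rule replicated-nvme
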
  2. Rados Gateway and Octopus+Proxmox

    I used to use the procedure listed here: https://pve.proxmox.com/wiki/User:Grin/Ceph_Object_Gateway but this doesn't appear to work with Octopus. @grin or devs - can you point me at what's missing? :)
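
    For reference, a rough sketch of a manual RADOS Gateway setup on a PVE node running Octopus; this is not a confirmed procedure, and 'pve1', the port and the keyring path are placeholders:

      apt install radosgw
      ceph auth get-or-create client.rgw.pve1 mon 'allow rw' osd 'allow rwx' \
          -o /etc/ceph/ceph.client.rgw.pve1.keyring
      # Add to the ceph.conf used by the node:
      #   [client.rgw.pve1]
      #   host = pve1
      #   keyring = /etc/ceph/ceph.client.rgw.pve1.keyring
      #   rgw_frontends = beast port=7480
      systemctl enable --now ceph-radosgw@rgw.pve1
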
  3. New to Ceph; looking for suggestions for a 3-node cluster

    Hey all; I've made use of Proxmox for a while and I'm looking to take my first few steps into Ceph -- I have 3x QCT QuantaGrid SD1Q-1ULH that I will be making use of. Each of these nodes has a Xeon D-1541 with an OCP Mezz 10Gb SFP+ (QCT Intel® 82599ES dual-port 10G SFP+ OCP mezzanine) and...
  4. Ceph Erasure Coding support

    I've read posts about Proxmox not supporting Ceph EC. The last one was dated May of last year. I'd like to ask whether, almost a year later, this is still Proxmox's stance, or whether we can expect EC support in the near future (2021). I'm considering setting up a separate Ceph cluster (storage only...
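
    Even without GUI integration, erasure-coded pools can be created from the CLI and used for RBD as long as overwrites are enabled and image metadata lives in a replicated pool. A sketch with illustrative names and k/m values:

      ceph osd erasure-code-profile set ec-42 k=4 m=2 crush-failure-domain=host
      ceph osd pool create ec-data 128 128 erasure ec-42
      ceph osd pool set ec-data allow_ec_overwrites true
      ceph osd pool application enable ec-data rbd
      ceph osd pool create rbd-meta 32
      # Image metadata goes to the replicated pool, data to the EC pool:
      rbd create --size 10G --data-pool ec-data rbd-meta/vm-disk-0
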
  5. new 4 node proxmox ve 6.3 ceph cluster config

    I have 4 nodes of identical spec: HP DL20, Xeon E-2278G, 64 GB RAM, 6x PM883 480 GB SSD (Ceph OSDs), 1x Intel DC3700 800 GB PCIe NVMe card (Proxmox itself), 1x Mellanox ConnectX-3 dual-port 40G card (FW 2_36_5000, HP OEM, Ethernet mode). The internal network config is also identical. iLO: 192.168.88.201~204...
  6. 1 Node Proxmox trigger alarm Health Warning Status "clock skew detected on mon.host"

    Hi Proxmox. FYI, one (1) of our Proxmox nodes (hosts) triggered the health warning "clock skew detected on mon.host" on Ceph today, starting at 6.27 AM on 26 March 2021. There were no configuration changes, but it suddenly started triggering the "clock skew detected on mon.host" alarm. Kindly advise me...
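
    Clock skew warnings almost always come down to time synchronisation drifting on one node. A few checks worth running (PVE 6.x ships systemd-timesyncd by default; adjust if you use chrony or ntpd):

      ceph time-sync-status       # how far each monitor's clock is off
      ceph health detail
      timedatectl status          # is NTP active and synchronised on the node?
      systemctl restart systemd-timesyncd
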
  7. Ceph Cluster performance

    Hi all, I have a Ceph cluster with 3 HPE nodes, each with 10x 1 TB SAS and 2x 1 TB NVMe; the config is below. The replication and Ceph network is 10 Gb, but performance is very low... In a VM I get (in sequential mode) read: 230 MB/s, write: 65 MB/s. What can I do/check to tune my storage environment? # begin...
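
    Before tuning, it helps to benchmark the cluster itself rather than through a VM, so you can tell whether the bottleneck is Ceph or the guest stack. A sketch, with 'testpool' as a placeholder pool name:

      rados bench -p testpool 60 write --no-cleanup   # raw write throughput
      rados bench -p testpool 60 seq                  # sequential reads of the benchmark objects
      rados -p testpool cleanup
      ceph osd perf                                   # per-OSD commit/apply latency
      ceph osd df tree
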
  8. Live migration with ceph sometimes fails

    Hi everyone, I'm using the latest Proxmox (6.3) and experiencing a strange issue during live migration of KVM machines running on Ceph block storage (the Ceph cluster was created through Proxmox). The cluster has been running fine for several years (it was previously on Proxmox 5). This issue only started recently, I...
  9. [SOLVED] OSD Crash after upgrade from 5.4 to 6.3

    Hi all, we have recently upgraded our cluster from version 5.4 to version 6.3. Our cluster is composed of 6 Ceph nodes and 5 hypervisors. All servers have the same package versions. Here are the details: pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve) pve-manager: 6.3-4...
  10. Why mon low disk space ?

    Hi, I'm again hitting a "mon low disk space" warning on my small Proxmox cluster. I have seen many threads about this warning, but I'm unable to resolve the problem. Average OSD use is only 5%; the / partition (where /var/lib/ceph is located) is 72% full (19 GB available), and this seems to be a problem for Ceph. # df -h...
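
    The warning fires when the monitor's data partition has less than roughly 30% free space (mon_data_avail_warn), regardless of how empty the OSDs are. Things worth checking, with 'pve1' as a placeholder monitor name:

      df -h /var/lib/ceph
      du -sh /var/lib/ceph/mon/*
      ceph tell mon.pve1 compact    # compacting the mon store often frees space
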
  11. ZFS and Ceph on same cluster

    Hi all, just wondering: is it possible to have a Proxmox cluster with both a local ZFS datastore per host and Ceph on the same cluster? For example: 5 or 6 hosts, 10 bays per host, 4 bays for a ZFS mirror, 6 bays for Ceph. Is it possible to have this mixed storage design in a single cluster? From a...
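
    Mixing both is possible, since each storage is just a separate entry in /etc/pve/storage.cfg and ZFS entries can be restricted to the nodes that have the local pool. Illustrative entries only; pool and storage names are examples:

      zfspool: local-zfs
              pool rpool/data
              content images,rootdir
              sparse 1

      rbd: ceph-vm
              pool vm-pool
              content images
              krbd 0
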
  12. [SOLVED] CEPH separate public/cluster network

    Hi, we have set up a 3-node Proxmox VE/Ceph cluster with one network/VLAN/IP range for both the public and the cluster network. Now we have decided to separate the public and cluster networks into 2 VLANs/IP ranges. 1. Is there an easy way to do this? (We don't have workload yet, so service...
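
    In outline, the split is done in ceph.conf and the daemons are then restarted one node at a time; the subnets and names below are examples only. If the monitor IPs themselves change, the monmap also has to be updated, which is more involved.

      # /etc/pve/ceph.conf
      #   [global]
      #   public_network  = 10.10.10.0/24
      #   cluster_network = 10.10.20.0/24
      systemctl restart ceph-mon@pve1.service   # 'pve1' is a placeholder
      systemctl restart ceph-osd@0.service      # repeat per OSD, node by node
      ceph osd dump | grep -i addr              # verify the new back-side addresses
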
  13. [SOLVED] PGs not being deep scrubbed in time; After replacing disks

    This week we have been rebalancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph: "pgs not being deep-scrubbed in time". This only began happening AFTER we made changes to the disks on one of our nodes; Ceph is still healing properly...
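
    Deep scrubs are held back while recovery traffic is running, so this warning is common after disk changes and usually clears once backfill finishes. To see which PGs are behind and nudge them along (the PG id below is an example):

      ceph health detail | grep 'not deep-scrubbed'
      ceph pg deep-scrub 2.1a
      ceph config get osd osd_deep_scrub_interval   # default is one week
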
  14. [SOLVED] Questions about Proxmox Ceph Cluster

    Hello, I'm looking for a solution for live migration with local storage. Since Proxmox doesn't seem to support "XCP-like" local storage migrations, it appears that I need to build a Ceph cluster. Let's consider a basic 3-machine cluster (PX1, PX2 and PX3) with 1 Gbps connectivity between them via a...
  15. [SOLVED] How to re-sync Ceph after HW failure ?

    Hi, after a hardware failure in a 3-node Proxmox VE 6.3 cluster, I replaced the hardware and re-joined the new node. The replaced node is called hystou1; the 2 other nodes are hystou2 and 3. I had a couple of minor issues when re-joining the new node since it has the same name, and I had to remove...
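
    Once the node is back in the Proxmox cluster, re-syncing Ceph is mostly a matter of recreating its daemons and letting backfill run. A sketch; the device path and OSD id are examples:

      pveceph install                           # matching Ceph release on the new node
      pveceph mon create                        # only if this node previously ran a monitor
      ceph osd purge 3 --yes-i-really-mean-it   # remove ids left over from the dead disks
      pveceph osd create /dev/sdb
      ceph -s                                   # watch recovery/backfill progress
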
  16. Native CEPH access from non-Linux OSes

    I wanted to know the status of non-Linux OS support for direct access to Ceph's native I/O paths via RADOS. I am trying to evaluate which existing programs would allow direct Ceph cluster access, in order to avoid using iSCSI or CIFS gateways. The idea is to use such programs to...
  17. Proxmox Ceph Pools

    Hello, while I understand that this is not really a Proxmox question, I've found no better place to ask it, so here we go: I've installed Proxmox on 3 nodes, each with additional disks installed in order to form a Ceph cluster. I did so with 3 OSDs (1 on each host) and created two...
  18. Backup ceph-fs?

    Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster. Is there a proper way to do it? If not, is there any plan for upcoming Proxmox or PBS releases to support this feature? Thanks!
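
    There is no dedicated CephFS backup job in the thread's setup, but since CephFS is already mounted on the PVE nodes, one workaround is to push it to PBS as a file-level archive with proxmox-backup-client. A sketch; the repository, datastore and mount path are placeholders:

      export PBS_REPOSITORY=root@pam@pbs.example.com:store1
      proxmox-backup-client backup cephfs.pxar:/mnt/pve/cephfs
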
  19. [SOLVED] Ceph: HEALTH_WARN never ends after osd out

    Hello, I'm trying to replace HDDs with SSDs. As I understand it, I mark the target OSD out, wait for the cluster to become HEALTH_OK, and then destroy it so I can physically remove the current HDD. But after the osd out operation, HEALTH_WARN never ends. How can I fix it? My version is Virtual Environment 5.4-15. Satoshi
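
    On a very small cluster (for example 3 OSDs with pool size 3), the data has nowhere to rebalance after an OSD is marked out, so the warning can persist indefinitely. Checks worth running before destroying the disk; the OSD id is an example:

      ceph -s                          # what exactly is Ceph still warning about?
      ceph osd df tree                 # is there spare capacity to rebalance onto?
      ceph osd safe-to-destroy osd.2   # reports whether removing it would lose data
      pveceph osd destroy 2            # only once it is reported safe
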
  20. [SOLVED] rbd error: rbd: listing images failed: (2) No such file or directory (500)

    Hi, I'm seeing this error randomly when attempting live migration (it usually works for the same VM after 1-2 attempts) and consistently when I try to show the content of the second Ceph pool. I have 2 Ceph pools within Proxmox; one has a replication rule where it replicates to SSDs only...
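
    When "rbd: listing images failed" only affects one pool, it is often a missing rbd application tag on that pool or a keyring/permissions issue for the storage entry. Some hedged checks, with 'pool2' as a placeholder:

      rbd ls -p pool2                               # does listing work from the CLI?
      ceph osd pool application get pool2
      ceph osd pool application enable pool2 rbd    # if the tag is missing
      ls /etc/pve/priv/ceph/                        # keyrings for external Ceph storages
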
