ceph cluster

  1. jesusjimenez

    CEPH - IP address change

    Hi everyone, I would like to change the IP addresses used for Ceph communication on my 3-node Proxmox cluster. Has anyone done anything like this before? Is it enough to change the monitor addresses and the public network range in /etc/ceph/ceph.conf and the monhost entry in /etc/pve/storage.cfg? Thanks for any answer
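    A minimal sketch of the entries such a change would touch, assuming a hypothetical new subnet 10.10.20.0/24 and a storage named ceph-vm; note that the monitors' own addresses also live in the monmap, so editing these files alone is not the complete procedure:

        # /etc/ceph/ceph.conf
        [global]
            public network = 10.10.20.0/24
            mon host = 10.10.20.1 10.10.20.2 10.10.20.3

        # /etc/pve/storage.cfg
        rbd: ceph-vm
            monhost 10.10.20.1 10.10.20.2 10.10.20.3
            pool rbd
            content images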
  2. D

    Ceph Node failure

    Hi, I have a 3-node Proxmox cluster with Ceph. I'm happy with the current setup and performance. Now we are planning disaster recovery. We are using separate NFS storage for VM backups. I have a few questions and I need expert advice. Our Ceph pools are set up with 2 replicas. We have 4 OSDs in each...
  3. C

    Possible to detect guest hangs in HA using Qemu Agent?

    Hi, in a 3-node Ceph/Proxmox 4 HA cluster, I recently had a Windows 7 guest VM hang (BSOD). As expected, HA never kicked in because from Proxmox's point of view the VM is up and running. I thought maybe the QEMU Guest Agent would help with checking for hung VMs, but when I checked the wiki page it only...
  4. Y

    Ceph - Monitor clock skew

    Hello, on my test Ceph cluster (4 nodes) I got this warning this morning: "Monitor clock skew detected" root@n1:~# ceph health detail HEALTH_WARN clock skew detected on mon.1, mon.2; Monitor clock skew detected mon.1 addr 10.10.10.2:6789/0 clock skew 0.085488s > max 0.05s (latency...
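    The warning itself points at the fix: the monitor clocks differ by more than the 0.05 s threshold, so they need to be brought back into sync. A minimal sketch of the usual checks, assuming systemd-based nodes named n1-n4 (placeholders) with ssh access between them:

        # compare wall clocks across the monitor nodes
        for h in n1 n2 n3 n4; do ssh $h date +%s.%N; done

        # confirm an NTP client is running and synchronised on each node
        timedatectl status

        # once the clocks agree again, the warning should clear
        ceph health detail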
  5. Y

    Ceph pool Pg num

    Hello, is 256 a correct pg_num for an 8-OSD 3/2 cluster? I have calculated: (8x100)/3 = 266 ~ 256 PGs. Question: if I add more OSDs and nodes in the next months, will I have to migrate to a new pool with a different pg_num? Thanks
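    The arithmetic in the post, written out; the result is rounded to the nearest power of two, and on this generation of Ceph the pg_num of an existing pool can later be raised (though not lowered), so adding OSDs does not force a new pool. The pool name below is a placeholder:

        # target PGs = (OSDs x 100) / replica size
        echo $(( (8 * 100) / 3 ))          # -> 266, nearest power of two: 256

        # raising pg_num on an existing pool later (pgp_num must follow)
        ceph osd pool set rbd pg_num 512
        ceph osd pool set rbd pgp_num 512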
  6. Y

    Replace node in Ceph Cluster

    Hello, I'm testing the replacement of a node in a Ceph cluster. I have this problem: pveceph createmon reports monitor address '10.10.10.3:6789' already in use by 'mon.2' (mon.2 was the failed node). I have tried: pveceph destroymon 2, which reports monitor filesystem '/var/lib/ceph/mon/ceph-2' does not exist on...
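    A hedged sketch of how a stale monitor entry is usually cleared when pveceph destroymon cannot run because the old node's data is gone (assuming the failed node really is out of the cluster; adjust the monitor id to your setup):

        # drop the dead monitor from the monmap at the Ceph level
        ceph mon remove 2

        # then delete the matching [mon.2] section from /etc/pve/ceph.conf
        # and recreate the monitor on the replacement node
        pveceph createmon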
  7. Y

    4 Nodes Ceph

    Hello, I'm testing a 4-node Ceph cluster: each node has two SATA HDs and two SSDs for the journal. ----------------------------------------------------------------------------------- ceph -w cluster 1126f843-c89b-4a28-84cd-e89515b10ea2 health HEALTH_OK monmap e4: 4 mons at...
  8. M

    Storage Model, Ceph vs GlusterFS

    Hello all, quick question. So, I know that Proxmox VE includes both Ceph and GlusterFS support... however, I get the impression (and correct me if I am wrong on this) that Ceph is being pushed as the de facto choice for HA/clusters needing shared storage. Red Hat however seems to favor...
  9. G

    Ceph OSD Balancing

    I have a 3-node Ceph cluster configured as per the Proxmox wiki. Each node has 3 SSDs, as shown in the attached screenshot of the OSDs (2x 1TB and 1x 240 / 250GB). I'm seeing quite a difference in usage between the drives; for example, osd.3 is 44.50% consumed, whereas osd.7 is just 32.07%...
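    Some spread is expected with mixed drive sizes, since CRUSH weights OSDs by capacity and fills them statistically rather than to equal percentages; a minimal sketch of the usual way to inspect and nudge the imbalance (the osd id and weight are only placeholders):

        # per-OSD utilisation and crush weights
        ceph osd df tree

        # lower the override weight of one over-full OSD a little
        ceph osd reweight 3 0.95

        # or let Ceph choose the adjustments
        ceph osd reweight-by-utilization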
  10. A

    Strange percentages on pool ceph jewel

    Hello! Need help. Test System: proxmox 4.4, ceph + cephFS jewel. proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve) pve-manager: 4.4-13 (running version: 4.4-13/7ea56165) pve-kernel-4.4.35-1-pve: 4.4.35-77 pve-kernel-4.4.44-1-pve: 4.4.44-84 lvm2: 2.02.116-pve3 corosync-pve: 2.4.2-2~pve4+1...
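    When pool percentages look odd, the usual starting point is to compare the per-pool view with the raw cluster view (a minimal sketch, assuming nothing beyond a running cluster):

        ceph df detail     # per-pool usage and %USED
        rados df           # per-pool object and byte counts
        ceph osd df        # raw utilisation per OSD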
  11. R

    re: crushmap oops

    A bit of a Ceph beginner here. I followed the directions from Sébastien Han and built out a Ceph crushmap with HDD and SSD in the same box. There are 8 nodes, each contributing an SSD and an HDD. I only noticed after putting some data on there that I goofed and put a single HDD in the SSD group...
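    A hedged sketch of the usual decompile/edit/recompile cycle for moving the misplaced HDD out of the SSD bucket (file names are placeholders, and data will move once the corrected map is injected):

        ceph osd getcrushmap -o crush.bin          # export the current map
        crushtool -d crush.bin -o crush.txt        # decompile to text
        # edit crush.txt: move the stray osd entry into the HDD bucket
        crushtool -c crush.txt -o crush.new        # recompile
        ceph osd setcrushmap -i crush.new          # inject the corrected map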
  12. Y

    Pool config on a 4 node Ceph

    Hello, I'm testing a 4-node Ceph cluster (8 OSDs). Each node has 2 SATA HDs of 4 TB and 2 SSDs of 150 GB (dedicated to the Ceph journal). Is this pool config correct? Size 4, Min Size 3, Crush ruleset 0, pg_num: 200 (I have used http://ceph.com/pgcalc/). Probably I haven't understood the Size parameter...
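    For context, Size is the number of replicas kept and Min Size is how many of them must be available before client I/O is allowed; a sketch of the more common 3/2 setting applied to an existing pool (the pool name is a placeholder):

        ceph osd pool set rbd size 3
        ceph osd pool set rbd min_size 2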
  13. S

    H.A. frozen situation

    Hi, I've set up a 3-node cluster with Ceph (1 disk/node) and high availability. I'm in the testing phase now (not in prod) and I spent some time breaking the config and checking how Proxmox recovers. I ended up in a frozen situation where: - I have a separate network for Ceph - my Linux VM is running...
  14. S

    Unable to migrate a VM

    I have a cluster of 3 nodes on Proxmox VE 4.4-12/e71b7a74 with Ceph storage set up (one pool). I've created a Linux CentOS 7 VM in the Ceph storage pool. I've added a disk to this VM, located in the same Ceph storage pool, to store data, with the "No Backup" flag because I do not want to back up these...
  15. R

    pveceph createosd makes "partitions" instead of "osd.x"

    Just setting up a new 8-node cluster. Each node offers two OSDs. Looking at this, what I am experiencing is that I seem to be capped at 14 OSDs for the whole cluster. I was curious if this is just a change to Ceph.pm, because I found this line: pg_bits => { description =>...
  16. R

    Is there a reason to limit the number of monitors? ( unable to find usable monitor id )

    I was trying to get some OSDs to appear and was running into a problem getting a monitor running on each node in a hyper-converged Ceph environment with 8 nodes. I searched for the error "unable to find usable monitor id" quite a bit and hit some walls with my results, so I looked at...
  17. D

    Proxmox CEPH Cluster's Performance

    Hi, I need your help. I'm getting very poor performance. I have a 3-node Proxmox cluster built with HP DL580 G7 servers. Each server has a dual-port 10 Gbps NIC. Each node has 4 x 15K 600 GB 2.5" SAS and 4 x 1 TB 7.2K SATA drives. Each node has the following partitions (I'm using a logical volume as OSD): Node 1...
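    Before tuning, it usually helps to measure the raw Ceph layer on its own, separate from the VM stack; a minimal sketch, assuming a throwaway pool named "test" exists (names are placeholders):

        # 60 s write benchmark, objects kept so a read test can follow
        rados bench -p test 60 write --no-cleanup

        # sequential read of the objects written above
        rados bench -p test 60 seq

        # rough raw throughput of a single OSD
        ceph tell osd.0 bench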
  18. G

    Understanding Ceph Failure Behavior

    We have a cluster of 3 Proxmox servers and use Ceph as the underlying storage for the VMs. It's fantastic and hasn't given us any trouble so far. As our usage rises and available capacity diminishes, I'm starting to wonder what actually happens in the event of a failure. I'm not too worried about an...
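    The part of this that bites as free space shrinks is the full-ratio behaviour: once any OSD crosses the full threshold, the cluster blocks writes, so enough free capacity has to remain for a failed node's data to be re-replicated onto the survivors. The stock thresholds, as they would appear in ceph.conf (shown for reference, not as a recommendation):

        [global]
            mon osd nearfull ratio = 0.85   # HEALTH_WARN above this
            mon osd full ratio     = 0.95   # writes are blocked above this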
  19. A

    Ceph monitor and OSD daemons don't come up.

    We created a two-node Proxmox v4.4 cluster with a Ceph Hammer pool running on the same nodes. For about 6 weeks it worked as expected, but today we had a continuous blackout at our office, and both cluster nodes were powered off accidentally and unexpectedly. After this...
  20. G

    [SOLVED] CephFS uneven data distribution

    Hello everyone, I hope someone here can help me with CephFS: as written above, my problem is that in my Proxmox cluster with CephFS the data is distributed very unevenly across the OSDs. The setup is as follows: currently 4 servers (a 5th is being planned). Each server has...
