Hi everyone,
I would like to change the IP addresses used for Ceph communication on my 3-node Proxmox cluster. Has anyone done anything like this before? Is it enough to change the monitor addresses and the public network range in /etc/ceph/ceph.conf, plus the monhost entry in /etc/pve/storage.cfg?
Thanks for any answers.
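For reference, this is a rough sketch of the edits I have in mind (the 10.10.20.x range, the storage ID and the pool name are placeholders, not my real values):

# /etc/ceph/ceph.conf
[global]
    public network = 10.10.20.0/24
    mon host = 10.10.20.1 10.10.20.2 10.10.20.3

# /etc/pve/storage.cfg
rbd: ceph-vm
    monhost 10.10.20.1 10.10.20.2 10.10.20.3
    pool rbd
    content images
    username admin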
Hi,
I have a 3-node Proxmox cluster with Ceph. I'm happy with the current setup and performance.
Now we are planning for disaster recovery. We are using separate NFS storage for VM backups.
I have a few questions and I need expert advice.
Our Ceph pools are set up with 2 replicas. We have 4 OSDs in each...
Hi,
In a 3-node Ceph/Proxmox 4 HA cluster, I recently had a Windows 7 guest VM hang (BSOD).
As expected, HA never kicked in, because from Proxmox's point of view the VM is up and running.
I thought maybe the QEMU guest agent would help with checking for hung VMs, but when I checked the wiki page it only...
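What I was hoping for is something along these lines (VM ID 100 is just a placeholder, and I haven't verified that the agent subcommand exists on PVE 4 or that a BSOD actually makes the ping fail):

# enable the guest agent option on the VM (the agent also has to be installed inside the guest)
qm set 100 --agent 1

# on newer PVE versions, ask the agent to respond; a hung guest should fail to answer
qm agent 100 ping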
Hello, on my test Ceph cluster (4 nodes) I got this warning this morning:
"Monitor clock skew detected"
root@n1:~# ceph health detail
HEALTH_WARN clock skew detected on mon.1, mon.2; Monitor clock skew detected
mon.1 addr 10.10.10.2:6789/0 clock skew 0.085488s > max 0.05s (latency...
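In case it matters, this is roughly how I'm checking time sync on each monitor node (a sketch, assuming Debian with chrony; I haven't confirmed yet that this clears the warning):

# check whether the clock is currently synchronised
timedatectl status

# install and enable chrony as the NTP client on every mon node
apt-get install chrony
systemctl enable chrony
systemctl start chrony

# verify which time sources were selected
chronyc sources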
Hello, is 256 a correct pg_num for an
8-OSD 3/2 cluster?
I have calculated:
(8 x 100) / 3 ≈ 266, rounded to the nearest power of two: 256 PGs
Question:
if next month I add more OSDs and nodes, will I have to migrate to a new pool with a different pg_num?
Thanks
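For reference, as far as I understand the PG count of an existing pool can be increased later (though not decreased on this Ceph version), so I'm hoping something like the following avoids a full pool migration (pool name 'rbd' is only an example):

# raise the placement group count on the existing pool after adding OSDs
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512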
Hello,
I'm testing the replacement of a node in a Ceph cluster.
I have this problem:
pveceph createmon
monitor address '10.10.10.3:6789' already in use by 'mon.2'
(mon.2 was the failed node)
I have done:
pveceph destroymon 2
monitor filesystem '/var/lib/ceph/mon/ceph-2' does not exist on...
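What I'm considering next is removing the stale entry from the monmap directly, so the address is freed up (a sketch only, I haven't run this yet):

# list the monitors currently registered in the monmap
ceph mon dump

# remove the dead monitor by its name
ceph mon remove 2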
Hello, I'm testing a 4-node Ceph cluster:
Each node has two SATA HDDs and two SSDs for the journal.
-----------------------------------------------------------------------------------
ceph -w
cluster 1126f843-c89b-4a28-84cd-e89515b10ea2
health HEALTH_OK
monmap e4: 4 mons at...
Hello all,
Quick question.
So, I know that Proxmox VE includes both Ceph and GlusterFS support... however, I get the impression (and correct me if I am wrong on this) that Ceph is being pushed as the de facto choice for HA clusters needing shared storage.
Red Hat however seems to favor...
I have a 3-node Ceph cluster configured as per the Proxmox wiki. Each node has 3 SSDs, as shown in the attached screenshot of the OSDs (2x 1TB and 1x 240/250GB).
I'm seeing quite a difference in usage between the drives; for example, osd.3 is 44.50% used, whereas osd.7 is at just 32.07%...
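For reference, this is how I'm looking at the imbalance (a sketch; the reweight value at the end is just something I'm considering, not something I've applied):

# show per-OSD size, CRUSH weight and utilisation
ceph osd df tree

# option I'm considering: nudge an over-full OSD's reweight down slightly
ceph osd reweight 3 0.95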
I'm a bit of a Ceph beginner here.
I followed the directions from Sébastien Han and built out a Ceph CRUSH map with HDDs and SSDs in the same box. There are 8 nodes, each contributing an SSD and an HDD.
I only noticed after putting some data on there that I goofed and put a single HDD in the SSD group...
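The fix I'm planning to try is to pull the CRUSH map out, move that one OSD into the right bucket by hand, and push it back (a sketch; osd.5 stands in for the misplaced disk):

# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt, moving the misplaced osd.5 from the ssd bucket to the hdd bucket,
# then recompile and inject it back into the cluster
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin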
Hello,
I'm testing a 4-node Ceph cluster (8 OSDs).
Each node has
2 SATA HDs, 4TB
2 SSDs, 150GB (dedicated to the Ceph journal).
Is this pool config correct?
Size 4
Min Size 3
Crush ruleset 0
pg_num: 200
(I have used: http://ceph.com/pgcalc/)
Probably I haven't understood the Size parameter...
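For reference, these are the commands I'd use to check and, if needed, adjust the replication settings on the pool (pool name 'rbd' is only an example):

# show the current replica count and the minimum replicas required for I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# e.g. switch to 3 copies with I/O still allowed at 2, if that turns out to be the better fit
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2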
Hi,
I've set up a 3-node cluster with Ceph (1 disk/node) and high availability. I'm in the testing stage now (not in prod) and I've spent some time breaking the config and checking how Proxmox recovers.
I ended up in a frozen situation where:
- I have a separate network for CEPH
- my linux VM is running...
I have a cluster of 3 Proxmox VE 4.4-12/e71b7a74 nodes with Ceph storage set up (one pool).
I've created a Linux CentOS 7 VM in the Ceph storage pool.
I've added a disk to this VM, located in the same Ceph storage pool, to store data, with the "No Backup" flag because I do not want to back up these...
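For clarity, the data disk ends up attached roughly like this in the VM config (VM ID, storage name, disk slot and size are placeholders):

# relevant line from /etc/pve/qemu-server/100.conf
virtio1: ceph-pool:vm-100-disk-2,backup=0,size=100G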
Just setting up a new 8-node cluster.
Each node offers two OSDs.
Looking at this, what I am experiencing is that I seem to be capped at 14 OSDs for the whole cluster.
I was curious if this is just a change to Ceph.pm, because I found this line:
pg_bits => {
description =>...
I was trying to get some OSDs to appear and was running into a problem getting a monitor running on each node in a hyper-converged Ceph environment with 8 nodes.
I searched quite a bit for the error 'unable to find usable monitor id' and hit some walls with my results, so I looked at...
Hi,
I need your help. I'm getting very poor performance.
I have a 3-node Proxmox cluster built on HP DL580 G7 servers. Each server has a dual-port 10 Gbps NIC.
Each node has 4 x 15K 600GB 2.5" SAS and 4 x 1TB 7.2K SATA drives.
Each node has the following partitions (I'm using logical volumes as OSDs):
Node 1...
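For what it's worth, this is how I'm measuring raw Ceph throughput to rule out the VM layer (pool name 'rbd' is only an example):

# 60-second sequential write benchmark directly against the pool, keeping the objects
rados bench -p rbd 60 write --no-cleanup

# read the same objects back, then remove the benchmark objects
rados bench -p rbd 60 seq
rados -p rbd cleanup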
We have a cluster of 3 Proxmox servers and use Ceph as underlying storage for the VMs. It's fantastic and hasn't given us any trouble so far.
As our usage rises and available capacity diminishes, I'm starting to wonder what actually happens in the event of a failure. I'm not too worried about an...
We have created a two-node Proxmox v4.4 cluster with a Ceph Hammer pool running on the same nodes.
For about 6 weeks it worked as expected, but today we were hit by a prolonged local power blackout in our office, and both cluster nodes were powered off accidentally and unexpectedly.
After this...
Hello everyone,
I hope someone here can help me with CephFS:
As written above, my problem is that in my Proxmox cluster with CephFS the data is distributed very unevenly across the OSDs. Here is the setup:
Currently 4 servers (a 5th is planned). Each server has...