Hi,
the autoscaler increased the number of PGs on our Ceph storage (hardware like this, but 5 nodes).
As soon as the backfill starts, the VMs become unusable, and we started killing OSD processes that cause high read I/O load. So, as in this picture, we would kill the ceph-osd process working on...
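Rather than killing OSD processes (which triggers even more recovery), a gentler first step is usually to throttle or pause the backfill so client I/O keeps priority. A sketch, assuming a reasonably recent Ceph release; the values are illustrative, not tuned for this cluster:

```shell
# Throttle recovery/backfill on all OSDs (values are illustrative):
ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

# Or pause data movement entirely while investigating:
ceph osd set nobackfill
ceph osd set norecover

# ...and re-enable it later:
ceph osd unset nobackfill
ceph osd unset norecover
```

With the flags set, PGs report degraded/backfill states but client I/O continues; remember to unset them so the cluster can finish rebalancing.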
I have 2 Ceph nodes, each with a mon and mgr installed. Whenever I shut down one mon instance on either node, Ceph becomes completely unresponsive until I start that mon again. Is this normal, or can I fix this?
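This is expected with two monitors: quorum needs a strict majority, and the majority of 2 is 2, so losing either mon loses quorum. One way to see it, while both mons are still up:

```shell
# Show which monitors are currently in quorum:
ceph quorum_status --format json-pretty
```

An odd monitor count (typically 3) tolerates the loss of one mon; with 2 mons you get no extra failure tolerance over 1.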
I don't know how to fix this; I'm just starting out with Ceph. It just keeps showing active+clean+remapped and doesn't fix itself over time. How do I fix this? I just use the default replication rule for my pools.
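PGs stuck in active+clean+remapped often mean CRUSH cannot place every replica on a distinct host (for example, pool size larger than the number of hosts, or very uneven OSD weights). A few commands to compare the pool's requirements against the topology; `<pool>` is a placeholder for the pool name:

```shell
# How many replicas the pool wants:
ceph osd pool get <pool> size

# How many hosts/OSDs CRUSH has to choose from, and their weights:
ceph osd tree

# Which PGs are affected:
ceph pg dump pgs_brief | grep remapped
```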
I am looking for some guidance to finalize the setup of a 3-node Proxmox cluster with Ceph and shared iSCSI storage. While it's working, I am not really happy with the Ceph cluster's resilience and I am looking for some guidance.
Each node has 2 x 10GbE ports and 2 x 480 GB SSDs dedicated to Ceph...
Hi Team!
I reconfigured a server from scratch.
Then I installed the ceph package but cancelled the configuration after the install, so it could use the configuration of the already set-up cluster.
Then I made it join the cluster.
Now I cannot configure it with the GUI, and I get 'got timeout (500)'...
Hello
I realized that I'm deleting data from the VMs, but this space is not being released in Ceph. I found in the documentation that I should run fstrim on the RBD, but I can't find its mount point to use in a command such as: fstrim /mnt/myrbd.
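For RBD disks attached to VMs there is usually no mount point on the host at all: the discard has to come from inside the guest. A sketch, assuming a VM with ID 100 and a disk `scsi0` on a Ceph storage named `ceph-pool` (all placeholders), using the VirtIO SCSI controller:

```shell
# On the Proxmox host: enable discard on the VM's disk
# (VM ID, bus and storage/volume names are placeholders):
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on

# Inside the guest, after a restart, trim all mounted filesystems:
fstrim -av
```

With discard enabled, freed blocks in the guest are passed down to RBD and the pool usage shrinks after the trim.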
Any idea?
Thank you
Hello,
At the moment we have:
6 x Proxmox nodes
* 2 x 10 cores (2 nodes have 2 x 14 cores)
* 512 GB RAM
* 4 x 10 GbE (2 x 10 GbE LACP for network and Corosync, and 2 x 10 GbE LACP for storage)
3 x Ceph monitors
* Dual core
* 4 GB RAM
* 2 x 10 GbE LACP
4 x Ceph OSD nodes
* 2 x 6-core 2.6 GHz
* 96 GB RAM
* 4 x 10 GbE (2 x...
Hi,
After I installed Proxmox I decided to tinker around with Ceph. Some things didn't work out, so I removed Ceph from the Proxmox node. After stopping all the Ceph services I removed it with 'pveceph purge'. That worked!
Now when I try to reconfigure Ceph I keep getting this error: "Could...
I have had this issue for a while now, and after upgrading to Proxmox 6 and the new Ceph it is still there.
The problem is that the Ceph display page shows that I have 17 OSDs when I only have 16. It shows the extra one as down and out. (Side note: I do in fact have one OSD that is down...
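If the extra entry has no backing disk anywhere, it is likely a leftover from an earlier removal that was never fully purged. A hedged sketch; `osd.16` is a guess at the ghost's ID, so confirm it in the tree first:

```shell
# Confirm which entry has no backing disk:
ceph osd tree

# Remove every trace of the ghost OSD (id is a placeholder):
ceph osd out osd.16
ceph osd crush remove osd.16
ceph auth del osd.16
ceph osd rm osd.16
```

Be careful not to pick the ID of the genuinely down-but-real OSD mentioned above.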
How do you define the Ceph OSD disk partition size?
It always creates the OSD with only 10 GB of usable space.
Disk size = 3.9 TB
Partition size = 3.7 TB
Using *ceph-disk prepare* and *ceph-disk activate* (see below)
OSD created, but only with 10 GB, not 3.7 TB
Commands used:
root@proxmox:~#...
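A tiny OSD from `ceph-disk prepare` is often caused by a stale partition table or old metadata on the disk. One common remedy, assuming the disk holds nothing you need (`/dev/sdX` is a placeholder; zapping destroys all data on it):

```shell
# Wipe the old partition table and Ceph metadata (DESTROYS the disk's data):
ceph-disk zap /dev/sdX

# Re-create the OSD on the clean disk:
ceph-disk prepare --bluestore /dev/sdX
ceph-disk activate /dev/sdX1
```

Afterwards `ceph osd df` should show the full ~3.7 TB for the new OSD.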
Currently all nodes are under load and memory consumption is around 90-95% on each of them.
CEPH cluster details:
* 5 nodes in total; all 5 host OSDs, and 3 of them are also used as monitors
* All 5 nodes currently have 64 GB RAM
* OSDs: 12 disks in total per node - 6 x 6 TB HDD and 6 x 500 GB SSD
*...
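With 12 OSDs per node, the default BlueStore memory target of roughly 4 GB per OSD already accounts for ~48 GB plus overhead on a 64 GB node, which matches the 90-95% usage described. One hedged option (Ceph Mimic or later, value in bytes, 3 GB here as an illustration) is to lower the per-OSD target:

```shell
# Trade some OSD cache for headroom: set a 3 GiB per-OSD memory target.
ceph config set osd osd_memory_target 3221225472
```

Setting it too low hurts read performance, so this is a trade-off rather than a fix; more RAM per node is the cleaner answer.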
Hello Sirs.
Has anyone encountered the same issue as mine?
I found one OSD in our production Proxmox Ceph cluster that had high apply latency (around 500 ms).
It caused our Ceph cluster's performance to degrade. After I restarted the OSD, the cluster performance is back to...
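When this recurs, it helps to identify the outlier OSD and check whether the underlying disk is the culprit before restarting anything. A sketch; the OSD ID and device are placeholders:

```shell
# Per-OSD commit/apply latency -- the outlier stands out here:
ceph osd perf

# Then inspect the suspect OSD's physical disk on its host:
iostat -x 5 /dev/sdX      # sustained high await/util suggests the drive
smartctl -a /dev/sdX      # reallocated/pending sectors hint at failure
```

A restart clearing the latency is also consistent with a software-side issue, but a drive that repeatedly develops high apply latency is often starting to fail.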
Hi there...
I have 2 PVE nodes and 5 servers as Ceph storage, also built on PVE servers.
So I have two clusters:
1 cluster with 2 PVE nodes, named PROXMOX01 and PROXMOX02.
* PROXMOX01 runs proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-11 (running version...
Hello, I'm trying to configure an architecture with 4 physical PVE nodes with Ceph monitors, and I want to add an external monitor in an Ubuntu 18.04 container.
Is it possible to install an external monitor for Ceph failover reasons?
And if so, how?
Thanks in advance
Regards
We're looking to migrate away from a large OnApp installation, and Proxmox is looking to be our solution. We have quite a large budget to get this done properly, so we were hoping someone would be able to give us some best practices.
Our biggest concerns have been around Ceph within PVE...
I've currently got a 4-node cluster running Ceph on Proxmox 5.1 and noticed recently that I'm getting a lot of blocked requests due to REQUEST_SLOW.
For example:
2019-02-12 11:47:33 cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
2019-02-12 11:47:47 cluster...
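To narrow REQUEST_SLOW down, the usual first step is to find which OSDs the blocked requests sit on, then look at what those ops are waiting for. A sketch; the OSD ID is a placeholder, and the `ceph daemon` command must run on the node hosting that OSD:

```shell
# Which OSDs have the slow/blocked requests:
ceph health detail | grep -i slow

# On the node hosting the suspect OSD, dump its in-flight ops:
ceph daemon osd.3 dump_ops_in_flight
```

The op dump shows each request's age and the stage it is stuck in (e.g. waiting for subops or for the journal), which usually points at either a slow disk or a saturated replication link.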
Hi, greetings. I am a Proxmox 5.x user and I just built a Proxmox cluster with 3 node servers. Besides the cluster, I also use Ceph on Proxmox for HA failover, which aims to make server management easier, especially in terms of server maintenance. Now the server is running smoothly with good...
Hi,
I have 3 cluster servers working fine with Ceph storage and all VMs too. Now 2 new servers have joined the cluster successfully, but by mistake I ran the ceph purge command on cluster server 4, which brought the Ceph storage on all nodes down, and the ceph.conf file doesn't exist any more.
So...
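If the monitors themselves are still running (purge on one node does not touch the other nodes' mon data), a minimal ceph.conf can often be regenerated rather than rewritten by hand. A hedged sketch, assuming Ceph Nautilus or later and a reachable monitor; on Proxmox the canonical copy lives in /etc/pve/ceph.conf:

```shell
# Ask a running monitor for a minimal config (point the client at a mon
# with -m <mon-ip> if no local conf exists at all):
ceph config generate-minimal-conf > /etc/pve/ceph.conf

# Restore the usual Proxmox symlink on each node:
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
```

Since /etc/pve is replicated by pmxcfs, writing the file there makes it reappear on all cluster nodes.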