My issue was related to CephFS, probably a combination of upgrading to a higher Ceph version (which I did back then) and a huge number of small files on CephFS. The solution for me was to throw CephFS away, and since then everything works like a charm. So, if you are not using CephFS, there must be some other issue in your...
I am facing the same issue. New cluster installation (no VMs running yet), 4 nodes, dedicated 10Gbps network, same IP subnet for the corosync cluster. The syslog is full of messages:
Jul 31 20:22:04 node5 corosync[2103]: [KNET ] rx: host: 2 link: 0 is up
Jul 31 20:22:04 node5 corosync[2103]...
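For what it's worth, the link state can be checked with the standard corosync/Proxmox tooling on each node (nothing specific to my setup):

# Show corosync link status for the local node (corosync 3.x with knet)
corosync-cfgtool -s
# Show cluster membership and quorum as Proxmox sees it
pvecm status
# Follow the corosync log live to see how often the links go up and down
journalctl -u corosync -f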
I have now made a test with a VM as storage (all RBD) and a storage dir exported with NFS. I mounted this NFS export on the running production mysql server, changed the backup script to export backups to this storage VM (NFS export), and the problem is gone. No slow ops, no high I/O, and everything seems...
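In case someone wants to reproduce the test, this is roughly the setup; the path, subnet and hostname below are made-up examples, not my real values:

# On the storage VM: export a directory that lives on the RBD-backed disk (/etc/exports)
/srv/backups 10.10.10.0/24(rw,sync,no_subtree_check)    # example path and subnet
exportfs -ra

# On the production mysql server: mount the export and point the backup script at it
mount -t nfs storage-vm:/srv/backups /mnt/backups       # example hostname and mount point
mariabackup --backup --user=backup --target-dir=/mnt/backups/$(date +%F)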
Oh, OK, understood. I guess you mean this setup: https://pve.proxmox.com/wiki/High_Availability, right? I have no experience with HA VMs within a Proxmox cluster. I will have to check it out in detail.
Thank you
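From a first look at that page, putting the storage VM under HA seems to boil down to something like this (VM ID 100 is just an example, I have not tried it yet):

# Put the storage VM under the Proxmox HA manager
ha-manager add vm:100 --state started    # example VM ID
# Check what the HA stack is doing
ha-manager status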
OK, understood. As I mentioned, replacing CephFS should be an option, and I am open to it.
A single point of failure is not acceptable to us. I can go with the "VM as storage with NFS distribution" solution, but I need to handle HA. What are your recommendations here regarding data replication? I...
Hi, thank you for your reply.
1) Understood, I will try to change it to 3 monitors, MGRs and MDSs (see the sketch after this list)
2) Eventually we can replace CephFS with RBD only, but I guess RBD cannot be used the same way CephFS is, right? I mean, as a mounted filesystem distributed across multiple VMs, via NFS for example...
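Regarding point 1), this is roughly what I plan to run on the nodes that are still missing those daemons; as far as I understand, the pveceph wrappers create and register each daemon on the node they are run on:

# Run on each additional node that should carry a monitor, manager and metadata server
pveceph mon create
pveceph mgr create
pveceph mds create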
I have now started a second mysql/mariadb VM which is still in Ceph (RBD), without the mariadb service running (let's say mysql2). Just a booted OS without any services or traffic. Then I ran mariabackup on the only production mysql VM, which has its data on local disk only, dumping the DB to CephFS.
I/O...
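For reference, this is how the test and the monitoring look; the backup target path is just an example, and the watching is done with the usual tools:

# On the production mysql VM: dump the DB to the CephFS mount (example path)
mariabackup --backup --target-dir=/mnt/cephfs/backups/$(date +%F)

# On a Ceph node, while the backup runs: watch for slow ops
watch -n 2 ceph -s
ceph health detail

# Inside the idle mysql2 VM: watch disk latency and utilisation
iostat -x 2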
Hello,
I would like to ask you for help because I am running out of ideas on how to solve our issue.
We run a 4-node Proxmox Ceph cluster on OVH. The internal network for the cluster is built on an OVH vRack with 4Gbps bandwidth. Within the cluster, we use CephFS as storage for shared data that...
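In case it helps with the diagnosis, these are the commands I can provide output from (standard Ceph status tooling, nothing custom):

# Overall cluster health and any slow/blocked ops
ceph -s
ceph health detail
# CephFS and pool usage
ceph fs status
ceph df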
Well, thank you so much for your help and the explanation. I have tried to add the unused disk by following your steps and I can confirm that everything is working correctly. Also, thanks for your thoughts about network capacity. We will consider it and discuss it. I appreciate your help.
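If anyone finds this thread later: the usual Proxmox way of turning an unused disk into an OSD looks roughly like this (the device name is just an example; double-check yours before wiping anything):

# On the node that owns the unused disk: wipe it and create the OSD
ceph-volume lvm zap /dev/sdc --destroy    # example device, verify first!
pveceph osd create /dev/sdc
# Verify the new OSD joined and data starts rebalancing
ceph osd tree
ceph -s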
Hello guys,
It's not usual for me to ask for help publicly, but I need your help and real experiences. I am pretty much a rookie with Ceph.
We have a Ceph cluster in this setup (see the commands after the list for how the layout can be checked):
- 3 nodes:
  - node1: 2 used disks, 1 unused (2 OSDs)
  - node2: 3 used disks (3 OSDs)
  - node3: 2 used disks (2 OSDs)
There are 3...
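Before touching anything, this is how the current layout and the unused disk can be double-checked (standard tools, output omitted here):

# Which disks already carry OSDs on this node, and what the block devices look like
ceph-volume lvm list
lsblk
# Per-node OSD layout and fullness across the cluster
ceph osd tree
ceph osd df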