I think I may have found something.
I had an issue with disk space and hence changed from replication x3 to x2, knowing the possible risks.
However, it was meant to be temporary while I added more OSDs.
But now, when I add more OSDs to new servers, I am noticing very high I/O wait and the servers "freeze".
I...
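For context, the rough usable-capacity arithmetic behind that x3 → x2 change can be sketched like this (my own back-of-envelope numbers, assuming the 6-node, 2 x 2TB + 1 x 500GB layout mentioned later in this thread):

```python
# Rough usable-capacity check for replication x3 vs x2.
# Usable space is roughly raw space / replica count; the raw figure below
# is an assumption based on the 6-node layout discussed in this thread.
def usable_tb(raw_tb, replicas):
    return raw_tb / replicas

raw = 6 * (2 * 2.0 + 0.5)      # 27.0 TB raw across the cluster
print(usable_tb(raw, 3))       # 9.0 TB usable at replication x3
print(usable_tb(raw, 2))       # 13.5 TB usable at replication x2
```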
I have read in some posts that usable disk space is measured based on the smallest OSD disk in the cluster?
So, for example, say we have the below on each of 6 nodes:
2 x 2TB
1 x 500GB
Are we saying disk space is lost due to the 500GB disk?
Should we rather just remove the 500GB? We just had...
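As far as I understand it (not authoritative, so please correct me), Ceph's default CRUSH weight is proportional to disk size, so data spreads proportionally rather than being capped by the smallest OSD. A minimal sketch of that proportional placement under that assumption:

```python
# Sketch: data distribution across mixed-size OSDs when CRUSH weight is
# proportional to capacity (the default, as I understand it). Each OSD
# then fills to roughly the same fraction, so the 500GB disks are not
# simply "wasted" space.
def fill_fractions(capacities_tb, data_tb):
    total = sum(capacities_tb)
    # share placed on each OSD is proportional to its weight (capacity),
    # so fill fraction = (data * c / total) / c, identical for every OSD
    return [(data_tb * c / total) / c for c in capacities_tb]

node = [2.0, 2.0, 0.5]                   # 2 x 2TB + 1 x 500GB per node
ratios = fill_fractions(node * 6, 9.0)   # 6 nodes, 9 TB of raw data placed
print(ratios[0], ratios[2])              # same fill ratio on 2TB and 500GB OSDs
```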
OK, I see: when using "out" on its own I think it's fine; it's only a problem when using "stop". We did stop and out, then destroyed the data on the disk using the "More" dropdown, which cleaned the disks. We re-added the disks and now all is well in the world once more :)
Hey guys
I have a question I forgot to test when we did our testing phase.
If we stop and out a disk but then realise we did the wrong disk, can we just bring it back in again without "destroying the data" on it first?
Is there any risk in doing so? Will Ceph just use the data on the OSD and just...
Hi guys
We have a stable, well-working Ceph cluster with one Ceph pool where all the data lives. We have a few VMs currently running on that pool.
I noticed there was an option called KRBD, and some posts on the forums state that performance can be increased by enabling KRBD on the...
Hi
Some SAS disks that you buy on eBay or Amazon sometimes come with NetApp firmware on them. We had numerous issues with these. Just ensure that is not the case.
This link, which I saved a few years back, can help you fix it; it just requires a low-level format due to the sector size...
Hi guys
For now we have been creating a monitor on every server we set up. Not sure why, we just have :)
Now let's say we have an 11-node cluster with a monitor on each node: do you think that's overkill?
Also, on a 4-node Ceph cluster which we also have set up, do you think 2 monitors would suffice, as I...
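For anyone reading later, the quorum arithmetic (as I understand it) is why an even monitor count, and 2 monitors in particular, is usually discouraged; a monitor cluster needs a strict majority reachable to stay in quorum:

```python
# Monitor quorum arithmetic: a MON cluster of size n stays in quorum as
# long as a strict majority (n // 2 + 1) of monitors are reachable.
def failures_tolerated(n_mons):
    quorum = n_mons // 2 + 1
    return n_mons - quorum

print(failures_tolerated(3))   # 1
print(failures_tolerated(5))   # 2
print(failures_tolerated(11))  # 5
print(failures_tolerated(2))   # 0 -> two monitors tolerate no failure at all
```

So 11 monitors "work" but add consensus overhead for little gain, and 2 monitors are strictly worse than 1 for availability.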
https://forum.proxmox.com/threads/proxmox-ceph-what-happens-when-you-loose-a-journal-disk.24148/
As per the above link, I assume all data on the OSD will go "poof" as well. But as it's Ceph and you have other nodes that contain replicas of the data, you just add the replacement SSDs in place and wipe...
Hi
I've been searching as well, as we are playing with Ceph on some test servers atm, but found the same issue reported by an external source on rook-ceph, which says some options need to be enabled:
https://github.com/rook/rook/issues/6964
bdev_async_discard and bdev_enable_discard on the OSDs...
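If I'm reading that issue right, it would translate into something like the snippet below in ceph.conf (my assumption on section placement; please check the exact option names and defaults against your Ceph release, as these have changed between versions):

```ini
# ceph.conf fragment (or set the equivalents via `ceph config set osd ...`)
[osd]
bdev_enable_discard = true
bdev_async_discard = true
```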
I wanted to ask this question: is this not related to the amount of data being written and read, and only a problem if the link is maxed out?
Looking at iotop, each VM we will be hosting does around 25 to 50 MB/s during busy periods of the day, and we have around 15 VMs. So surely you mean it will only become...
Isn't that dependent on the amount of data?
The server has 4 x 1Gb Ethernet ports.
I could bond the dual 10Gb network ports, as the network cards come with 2 x 10Gb ports each.
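The back-of-envelope numbers I'm basing that on (assumed figures from iotop above, decimal units, ignoring replication traffic):

```python
# Aggregate VM throughput vs. link capacity. Assumed figures from this
# thread: ~15 VMs peaking at 25-50 MB/s each; 1 Gbit/s ~= 125 MB/s decimal.
def link_headroom_mb(n_vms, mb_per_s_each, link_gbit):
    demand = n_vms * mb_per_s_each
    capacity = link_gbit * 1000 / 8   # Gbit/s -> MB/s (decimal)
    return capacity - demand

print(link_headroom_mb(15, 50, 1))   # -625.0 -> a single 1Gb link saturates
print(link_headroom_mb(15, 50, 10))  # 500.0  -> a 10Gb link has headroom
```

Note this counts only client I/O; replication multiplies the write traffic on the Ceph network, so the real headroom is smaller.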
If we have a currently running Ceph Cluster with the following:
7 nodes with the following setup:
Dell R610 Servers
64 GB Memory
1 x 480GB PM863a SSD for Proxmox OS
5 x 600GB Enterprise 10K SAS Disks for OSDs
10Gb Ethernet Network
Dell H200 Card
Let's say these nodes are doing OK running only...
I found that even after a reboot it would still happen.
I then thought maybe it's the monitoring, as nothing else queries the LVM partitions except monitoring checking the disk space.
I then disabled SNMP, rebooted again, and have left it running since last night. It's been over 8 hours with no greying out of VMs on...
I noticed that 2 nodes in our cluster are now greyed out after updating and restarting the servers.
I see this in logs:
Aug 4 18:10:57 pve-2 systemd[1]: pvestatd.service: Found left-over process 21897 (vgs) in control group while starting unit. Ignoring.
Aug 4 18:10:57 pve-2 systemd[1]: This...