Yes, this is what I wanted to try first. Would the syntax be "ceph osd unset noout"?
And pm05, not being part of Ceph, wouldn't be affected by this, correct?
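Just so I have it written down, this is roughly what I plan to run from one of the Ceph nodes, a sketch assuming the flag was set earlier with "ceph osd set noout":

ceph osd unset noout   # clear the flag so OSDs can be marked out and recovery can proceed
ceph -s                # confirm the flag is gone and watch the health state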
The strange thing is Ceph still hasn't healed; it says noout is set. We do have pm05 not attached to Ceph, as we are thinking about dismantling Ceph since it has been problematic.
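These are the checks I have been running to see why it isn't healing, a sketch assuming the nearly full OSD is what is blocking recovery:

ceph health detail   # shows which flags are set and which PGs are stuck
ceph osd df          # per-OSD usage, to spot the one that is almost full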
Awesome, thanks. I was just making sure the way I typed the info command was right. Attached you can see the VM 103 hardware screen, where it shows disks 1 and 2 in use and not the other 3/4/5, and in the OSD percentages you can see the one that is almost full, preventing it from fully healing. It is...
Disks 1 and 2 should be the good ones; 3, 4 and 5 should be the bad ones. What would the syntax for info be so I can check? Would it be
rbd -p cephStor info vm-103-disk-1
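Or should it be one of these longer forms? As far as I can tell they say the same thing, assuming the pool really is named cephStor as in my storage config:

rbd info --pool cephStor vm-103-disk-1   # long form with --pool
rbd info cephStor/vm-103-disk-1          # pool/image in one argument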
It seems like syntax is still hanging us up. I can't even get rbd info to work; can someone help me with the syntax? Here is a screenshot. You can see all the extra disks for vm 103. I am just trying to run info so I can figure out which one is the right one and which is just filling up Ceph.
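What I am trying to do first is just list the images so I know what is actually there, a sketch assuming the pool is cephStor:

rbd -p cephStor ls              # list every image in the pool
rbd -p cephStor ls | grep 103   # narrow it down to the vm 103 disks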
The only thing I can think of is that when we ran the above commands we didn't have a dash after "disk", and it said invalid directory.
rbd info --pool name vm-xxx-diskx (instead of disk-x)
rbd rm --pool name vm-xxx-diskx (instead of disk-x)
I am wondering if that is what did it.
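So the corrected versions I think we should have typed, a sketch with cephStor standing in for our pool name and disk-3 just as an example of one of the extra disks:

rbd info --pool cephStor vm-103-disk-3   # note the dash: disk-3, not disk3
rbd rm --pool cephStor vm-103-disk-3     # only once we are sure it is one of the extra copies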
I had a VM running on an NFS path and clicked Move Disk to move it to Ceph. It had an issue in the move, as it filled one OSD more than the others and halted the move. When I clicked stop I couldn't figure out how to delete the big copy. I was hoping it would just go back to the path it was coming from, but now my Ceph...
It showed plenty of room on my Ceph when I went to move a disk, but it says 1 OSD is full, so I stopped the move. It still shows the error. How do I delete the move and go back to using the VM where it was?
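Before deleting anything I would like to check which disks the VM actually references versus what is sitting on the storage, a sketch assuming the VM is ID 103 and the Ceph storage is called cephStor in Proxmox:

qm config 103        # shows which disk images are attached or listed as unused
pvesm list cephStor  # lists the images stored on the Ceph storage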
They both look like they are updated to the minute. Are these somehow the live VMs, and did my consultant set it up wrong? It is only for these 2 VMs; I don't know if they are backups or the live VMs.
Here are my backup settings and shots of my 2 VMs, 103 and 105, that were backed up this morning. I have these 2 VMs set to only back up once a week, one on Saturday and one on Sunday by itself as it is big, and I am backing up SQL daily. So strange, as I can't find a reason looking at the settings that...
Well, I was able to power it down and get it into the rack with no problems. I am learning little by little that my consultant rushed the install, but this community has been great and I am slowly learning.
It seems like every time I take a node down it starts a cascade effect. I need to move a server into a rack and wanted to know if I am missing something on what I need to do to take it down properly.
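Is the rough order something like this? Just a sketch of what I think the steps are, with pm02 only standing in for whichever node I would migrate to:

ceph osd set noout              # stop Ceph from rebalancing while the node is down
qm migrate 103 pm02 --online    # move running VMs off the node first
# shut the node down, do the rack work, power it back up, then:
ceph osd unset noout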
I am going to be doing omping; I just want to make sure I get the syntax right.
So I click on node 1, then Shell, and then run:
omping -c 10000 -i 0.001 -F -q 10.10.10.1 10.10.10.2 ... or do I need to have the name of the node in there as well, as in:
omping -c 10000 -i 0.001 -F -q PM01-10.10.10.1...
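In other words, is it just addresses, run on all the nodes at the same time? A sketch of what I am planning, assuming the third node is at 10.10.10.3; my understanding is omping wants plain addresses or resolvable hostnames rather than the PM01-10.10.10.1 style:

omping -c 10000 -i 0.001 -F -q 10.10.10.1 10.10.10.2 10.10.10.3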
We had issues with everything being on the same PowerConnect switch, so we moved Ceph storage to a dedicated switch, a Netgear JGS524Ev2-200NAS. I followed the white paper Netgear has, so I set IGMP Snooping Status and Block Unknown Multicast Address to disabled and set broadcast forwarding...
I came from VMware and have been happy with Proxmox. The one dark spot is that it seems like my cluster is doing false fences, where it just takes down a node randomly and fences the VMs. Also, if I take down one node manually, it seems to restart all of my nodes, which doesn't make any sense. I check the...
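For what it is worth, these are the commands I have been checking with when a node drops, just a sketch of what I am looking at:

pvecm status                 # quorum and membership as that node sees it
journalctl -u corosync -b    # corosync log for the current boot, to catch retransmits or membership changes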