Hi Folks,
Just migrated Ceph completely to BlueStore.
A test with a Windows Server 2016 VM shows good results, but I think the limiting component is the virtio driver!
See also https://forum.proxmox.com/threads/virtio-ethernet-driver-speed-10gbite.35881/ concerning Ethernet speed ...
I see no "tunables" to...
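One knob I might still try (just an assumption on my side, I have not measured whether it helps here) is virtio-net multiqueue on the VM's NIC; the VM ID, MAC and bridge below are placeholders:

# hypothetical example: give the virtio NIC of VM 101 four queues (keep your existing MAC/bridge values)
qm set 101 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4

The guest then has to actually use the extra queues (e.g. via ethtool -L in a Linux guest); whether the Windows virtio driver picks them up on its own I don't know.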
Tom,
The cluster is up to date with the latest fixes from today.
All 4 nodes have been rebooted (this fixes the refresh problem in the GUI, as stated earlier ....)
I followed the Ceph instructions as stated in my initial post ... and I asked whether I should do so, or which recommendations you have ....
how to accomplish a...
Tom,
just updated the cluster
and now I'm doing an in-place upgrade to BlueStore
Some strange results in the GUI!
Only after a Shift+Reload in Chrome do I get Ceph health results ....
My procedure for each OSD will be (a fuller sketch follows after the snippet):
ID=$1
echo "ceph osd out $ID"
ceph osd out $ID
# wait for Ceph to remap all the data...
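For completeness, the rest of the per-OSD sequence I have in mind, a rough sketch following the upstream bluestore-migration doc (not tested here yet; $ID and $DEVICE are placeholders for the OSD id and its data disk, and it assumes ceph-volume is available in the PVE packages):

# mark the OSD out and wait until its data has been remapped and it is safe to remove
ceph osd out $ID
while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done
# stop the daemon and wipe the old filestore disk
systemctl stop ceph-osd@$ID
ceph-volume lvm zap /dev/$DEVICE
# destroy the OSD entry but keep its id, then recreate it as a BlueStore OSD with the same id
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm create --bluestore --data /dev/$DEVICE --osd-id $ID

Alternatively the last two steps could probably be replaced by letting pveceph recreate the OSD as BlueStore, but I have not tried that yet.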
Fine, Tom!
Would you recommend the steps in the Ceph documentation?
Also a question aside from this: are my Mellanox ConnectX-3 Pro cards RDMA capable? That would speed up Ceph significantly, I guess ...
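If you want to see what the cards report on the host itself (assuming the ibverbs-utils / rdma-core tools are installed), something like this lists the RDMA-capable devices and their ports:

# show verbs devices and port capabilities as seen by the RDMA stack
ibv_devices
ibv_devinfo

Whether Ceph on PVE can actually use RDMA is a separate question; as far as I know the RDMA messenger in Luminous is still experimental.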
Hi Folks,
Shall I migrate from filestore to BlueStore following this article?
http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/
Or wait for Ceph 12.2.x? Currently PVE has 12.1.2 (Luminous RC) ...
But how long do we have to wait? Any release plans for 12.2?
Regards
Just started a deep scrub of all PGs ... to force things to be clean ... hopefully :)
ceph pg dump | grep -i active+clean | awk '{print $1}' | while read i; do ceph pg deep-scrub ${i}; done
OK, I have done this on all 4 nodes now.
Shall I wait for the scrubbing to finish? And then reboot the whole cluster?
ceph -s
  cluster:
    id:     cb0aba69-bad9-4d30-b163-c19f0fd1ec53
    health: HEALTH_WARN
            68 pgs not deep-scrubbed for 86400
            417 pgs not scrubbed for 86400...
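My rough plan for the waiting part (just a sketch, polling the health summary until the scrub warnings are gone, then rechecking):

# wait until the 'not scrubbed / not deep-scrubbed' warnings disappear
while ceph health | grep -Eq 'not (deep-)?scrubbed'; do
    sleep 300
done
ceph -s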
Hi
I installed the v5 beta and then the v5 release.
I had no problems with updates so far, except this morning.
I scanned for new updates, and a lot of Ceph updates popped up .... I installed them on all 4 machines ...
Now I have no active mgr in the GUI; I suppose I shredded Ceph completely ....
OSDs and mons...
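If the mgr daemons simply did not come back up after the package update (just my assumption, I have not found the root cause yet), I would check and recreate them per node, roughly like this (assuming the mgr id equals the node name):

# is a mgr instance running on this node?
systemctl status ceph-mgr@$(hostname)
# if there is none, create one via the PVE tooling and recheck the cluster status
pveceph createmgr
ceph -s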
ICMP from a cluster host to a VM:
ping 192.168.221.151
PING 192.168.221.151 (192.168.221.151) 56(84) bytes of data.
64 bytes from 192.168.221.151: icmp_seq=1 ttl=128 time=0.232 ms
64 bytes from 192.168.221.151: icmp_seq=2 ttl=128 time=0.224 ms
64 bytes from 192.168.221.151: icmp_seq=3 ttl=128...
Hm, must be a BIOS issue in your case ... can you see the device(s) in the BIOS? Perhaps you need a BIOS update before the board can handle NVMe disks at all?
Or have you plugged the device into the wrong PCIe slot?
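From a live Linux on that machine you could also check whether the kernel sees the device at all (generic checks, nothing PVE-specific; nvme list needs the nvme-cli package):

# does the PCIe device show up?
lspci -nn | grep -i nvme
# does the NVMe driver expose a namespace?
nvme list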
I have no issue on my cluster; this is lightning fast :)
All on shared Ceph storage, no ZFS involved.
task started by HA resource agent
2017-07-25 15:00:53 starting migration of VM 101 to node 'pve03' (192.168.221.143)
2017-07-25 15:00:53 copying disk images
2017-07-25 15:00:53 starting VM 101...