Search results

  1. G

    virtio disk driver windows2016 rate limited ?

    Hi folks, I just migrated ceph completely to bluestore. A test with a Windows 2016 Server guest has good results, but I think the limiting component is the virtio driver! See also https://forum.proxmox.com/threads/virtio-ethernet-driver-speed-10gbite.35881/ concerning Ethernet speed ... I see no "tunables" to...
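
    To narrow down whether ceph itself or the virtio driver is the bottleneck, a host-side baseline without any guest involved can help; a sketch (the pool name rbd is an assumption, replace it with the pool backing the VM disks):

      # 60 s write benchmark straight against the pool (4 MB objects by default), 16 parallel ops
      rados bench -p rbd 60 write -t 16 --no-cleanup
      # sequential read of the same benchmark objects, then remove them again
      rados bench -p rbd 60 seq -t 16
      rados -p rbd cleanup
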
  2. G

    Migrating to bluestore now ? or later ?

    Tom, I just forgot to adapt my signature .... please pardon my dust :)
  3. G

    Migrating to bluestore now ? or later ?

    Tom, the cluster is up-to-date with the latest fixes from today. All 4 nodes rebooted (this fixes the refresh problem in the GUI as stated earlier ....) I followed the ceph instructions as stated in my initial post ... and I asked whether I shall do so, or which recommendations you have .... how to accomplish a...
  4. G

    Migrating to bluestore now ? or later ?

    Tom, I just updated the cluster and now I am performing an in-place upgrade to bluestore. Some strange results in the GUI! Only after a shift-reload in Chrome do I get ceph health results .... my procedure for each osd will be (see the sketch below): ID=$1 echo "ceph osd out $ID" ceph osd out $ID # wait to start ceph remapping all things...
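
    A minimal sketch of such a per-OSD script, following the drain-and-recreate flow from the bluestore migration document linked in a later result; the device path and the pveceph createosd recreation step are assumptions and should be adapted to the actual setup:

      #!/bin/bash
      # Re-create one filestore OSD as a bluestore OSD (sketch, adapt before use)
      ID=$1          # numeric OSD id, e.g. 3
      DEV=$2         # underlying device, e.g. /dev/sdX (assumption)

      ceph osd out "$ID"                            # start draining this OSD
      while ! ceph osd safe-to-destroy "osd.$ID"; do
          sleep 60                                  # wait until all PGs are remapped elsewhere
      done
      systemctl stop "ceph-osd@$ID"                 # stop the daemon on this node
      ceph osd destroy "$ID" --yes-i-really-mean-it
      # Re-create the OSD on the same device as bluestore; the exact command depends
      # on the tooling version (pveceph createosd here, ceph-volume would also work).
      pveceph createosd "$DEV" --bluestore
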
  5. G

    Migrating to bluestore now ? or later ?

    Shall I change the sources in apt? Current: cat ceph.list gives deb http://download.proxmox.com/debian/ceph-luminous stretch main ... should it be pvetest instead of main?
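
    If the answer is yes, the change being asked about would look roughly like this; a sketch, and whether the test component for this repository is really called pvetest is exactly the open question (the file path /etc/apt/sources.list.d/ceph.list is an assumption based on the cat ceph.list above):

      # Swap the component of the ceph repository from 'main' to 'pvetest', then refresh
      sed -i 's/ stretch main$/ stretch pvetest/' /etc/apt/sources.list.d/ceph.list
      apt update
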
  6. G

    Migrating to bluestore now ? or later ?

    Hi Tom, any exact release date for 12.2? This week is almost gone .... Regards, Gerhard
  7. G

    Migrating to bluestore now ? or later ?

    Fine, Tom! Would you recommend these steps in the ceph documentation? Also a question aside from this... are my x3pro Mellanox cards RDMA capable? That would speed up ceph significantly, I guess ...
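
    A quick way to check RDMA capability directly on a node; a sketch, assuming the libibverbs / infiniband-diags utilities are installed and the mlx4 driver is loaded:

      # List RDMA-capable devices and their port state (ConnectX-3 / Pro shows up via mlx4)
      ibv_devinfo
      ibstat
      # An mlx4_0 device with active ports means the card can do RDMA / RoCE.
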
  8. G

    Migrating to bluestore now ? or later ?

    Hi folks, shall I migrate from filestore to bluestore following this article? http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/ Or wait for ceph 12.2.x? Currently PVE has the 12.1.2 luminous RC ... but how long to wait? Any release plans for 12.2? Regards
  9. G

    qemu-2.7 iothread

    Ivan, thanks for bringing this up. Same here, we have to disable iothread to be able to back up our rbd v-disks, really annoying :(
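
    Disabling it comes down to a one-line change per disk in the VM config; a sketch, where the VM id 101, the storage name ceph-vm and the disk volume are all hypothetical:

      # Turn the iothread flag off on the disk
      qm set 101 --scsi0 ceph-vm:vm-101-disk-1,iothread=0
      # equivalent line in /etc/pve/qemu-server/101.conf:
      #   scsi0: ceph-vm:vm-101-disk-1,iothread=0
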
  10. G

    recent update this morning breaks ceph

    Traffic on the Mellanox 56GbE links is no issue, I guess. Why should I remove redundancy?
  11. G

    recent update this morning breaks ceph

    Dietmar, why not all 4 nodes? Any technical reason?
  12. G

    recent update this morning breaks ceph

    Just started a scrub of everything ... to force things to be clean ... hopefully :) ceph pg dump | grep -i active+clean | awk '{print $1}' | while read i; do ceph pg deep-scrub ${i}; done
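
    The same one-liner, slightly restructured and commented; a sketch that deep-scrubs every PG currently reported as active+clean (this can add noticeable load on a busy cluster):

      # Queue a deep scrub for every PG that is currently active+clean
      ceph pg dump pgs_brief 2>/dev/null \
          | awk '$2 ~ /active\+clean/ {print $1}' \
          | while read -r pg; do
                ceph pg deep-scrub "$pg"
            done
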
  13. G

    recent update this morning breaks ceph

    OK, I have done this on all 4 nodes now. Shall I wait for the end of scrubbing? And then reboot the whole cluster? ceph -s cluster: id: cb0aba69-bad9-4d30-b163-c19f0fd1ec53 health: HEALTH_WARN 68 pgs not deep-scrubbed for 86400 417 pgs not scrubbed for 86400...
  14. G

    recent update this morning breaks ceph

    Hi, I installed the v5 beta and then the v5 release. I had no problems with updates so far, except this morning. I scanned for new updates, and a lot of ceph updates popped up.... I installed them on all 4 machines .. now I have no active mgr in the GUI, I suppose I shredded ceph completely.... osds and mons...
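
    Luminous made the ceph-mgr daemon mandatory, so after such an update it is worth checking whether a manager exists and is running at all; a sketch (pveceph createmgr is the PVE 5.x way to create one, verify against the installed version):

      # Any manager units on this node, and what does ceph report as active/standby mgr?
      systemctl list-units 'ceph-mgr@*'
      ceph -s | grep -A1 mgr
      # If no manager exists yet, create one on a monitor node:
      pveceph createmgr
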
  15. G

    [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    Petr, perhaps this is an ARP problem within the migration task? Would be nice if someone from the staff came up with an explanation :(
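
    One way to test that theory; a sketch, using the VM address from the ping test in the next result (192.168.221.151) and watching how its neighbour entry behaves on the pinging host during a migration:

      # Watch the ARP/neighbour entry for the VM while the migration runs
      watch -n 1 'ip neigh show 192.168.221.151'
      # If the entry keeps the old MAC or goes stale for a few seconds after the
      # switch-over, the downtime is ARP convergence rather than the migration itself.
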
  16. G

    [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    ICMP from a cluster host to the VM: ping 192.168.221.151 PING 192.168.221.151 (192.168.221.151) 56(84) bytes of data. 64 bytes from 192.168.221.151: icmp_seq=1 ttl=128 time=0.232 ms 64 bytes from 192.168.221.151: icmp_seq=2 ttl=128 time=0.224 ms 64 bytes from 192.168.221.151: icmp_seq=3 ttl=128...
  17. G

    Proxmox 5.0 and nvme

    Hm, must be a BIOS issue in your case ... can you see the device(s) in the BIOS? Perhaps you need a BIOS update to also operate NVMe disks? Or have you plugged the device into the wrong PCIe slot?
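
    Before blaming the BIOS it is worth checking whether Linux sees the device at all; a sketch (the last command assumes the nvme-cli package is installed):

      # Does the kernel see an NVMe controller on the PCIe bus?
      lspci -nn | grep -i nvme
      # Any NVMe block devices / namespaces?
      ls /dev/nvme*
      nvme list
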
  18. G

    [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    I have no issue on my cluster, this is lightning fast :) All on shared ceph storage, no ZFS involved. task started by HA resource agent 2017-07-25 15:00:53 starting migration of VM 101 to node 'pve03' (192.168.221.143) 2017-07-25 15:00:53 copying disk images 2017-07-25 15:00:53 starting VM 101...
  19. G

    3-node Cluster with Ceph, sanity check?

    Only by manually compiling the crush map, to my knowledge ... the GUI has no feature to accomplish this :( Good luck!
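
    The manual cycle is roughly get, decompile, edit, recompile, set; a sketch:

      # Fetch and decompile the current CRUSH map
      ceph osd getcrushmap -o crush.bin
      crushtool -d crush.bin -o crush.txt
      # ... edit crush.txt (buckets, rules) with any text editor ...
      # Recompile and inject it back into the cluster
      crushtool -c crush.txt -o crush.new
      ceph osd setcrushmap -i crush.new
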
