Search results

  1. CEPH 3 nodes Cluster : data size available ?

     What kind of throughput do you get inside the VMs?
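
     A quick way to answer this is to run fio inside the guest; a minimal sketch, assuming a Linux VM and a scratch file at /tmp/fio.test (both placeholders):

       # sequential write throughput, 4M blocks, direct I/O
       fio --name=seqwrite --filename=/tmp/fio.test --rw=write --bs=4M --size=1G --direct=1
       # random 4k write IOPS at queue depth 32
       fio --name=randwrite --filename=/tmp/fio.test --rw=randwrite --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=32
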
  2. CEPH 3 nodes Cluster : data size available ?

     How well do live migration and high availability between the nodes work?
  3. Blue screen with 5.1

     Agreed. The new kernel fixed the problem.
  4. Proxmox CEPH Cluster's Performance

     I am quite curious, have you upgraded to Bluestore? And if so, what are your speeds like now?
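
     For reference, whether an OSD runs Bluestore or Filestore can be read from its metadata; a minimal check (OSD id 0 is a placeholder):

       ceph osd metadata 0 | grep osd_objectstore   # prints "bluestore" or "filestore"
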
  5. how to change CEPH OSD logical sector size?

     I have seen some bad performance in the past where 512e (512-byte sector emulation) is used. Hence, I want to switch it and test to see what happens.
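
     As a hedged sketch of how the current sector sizes can be inspected, and how a SAS/SCSI drive that supports it can be reformatted to 4K-native (the device name /dev/sdX is a placeholder, and sg_format erases everything on the drive):

       blockdev --getss /dev/sdX     # logical sector size
       blockdev --getpbsz /dev/sdX   # physical block size
       # low-level reformat to 4K logical sectors (sg3_utils; DESTROYS ALL DATA)
       sg_format --format --size=4096 /dev/sdX
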
  6. Can PVE cluster decide where best to start a new VM?

     I haven't seen such an option in Proxmox.
  7. how to change CEPH OSD logical sector size?

     https://www.ibm.com/developerworks/library/l-4kb-sector-disks/
  8. VIRTIO SCSI very few iops on write for Windows VMs

     I have noticed the same recently. Using IDE for Windows clients makes them blazing fast, whereas VirtIO and SCSI disk drives are very, very slow.
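
     A common first thing to try in this situation, sketched under the assumption of VM id 100 and a disk volume named ceph-pool:vm-100-disk-0 (both placeholders), is switching the disk's cache mode; whether it actually fixes the slow writes depends on the setup:

       # set writeback caching on the VM's first SCSI disk
       qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
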
  9. Mounting ceph pool for total beginner

     Hi, I tried this but got the following error:

     rbd: error opening default pool 'rbd'
     Ensure that the default pool has been created or specify an alternate pool name.

     root@virt1:~# more /etc/ceph/rbdmap
     # RbdDevice Parameters
     #poolname/imagename...
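
     This error just means no pool named 'rbd' exists yet; a minimal sketch of the two usual fixes (the pool name mypool and the placement-group count 64 are assumptions for illustration):

       # either create and initialise the default pool ...
       ceph osd pool create rbd 64
       rbd pool init rbd
       # ... or point the command at an existing pool instead
       rbd ls -p mypool
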
  10. how to change CEPH OSD logical sector size?

      Hi, Does anyone know how to change the Logical Sector size of a CEPH OSD? I have searched the forums but cannot find a solution.
  11. Proxmox, Ceph and local storage performance

      @mateusz did you ever get to the bottom of your performance problem?
  12. how to rename a ceph OSD?

      Hi, Is it possible to rename a CEPH OSD?

      root@virt3:~# ceph osd tree
      ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
      -1       87.33948 root default
      -3       29.11316     host virt1
       1   hdd  7.27829         osd.1  up     1.00000 1.00000
       2   hdd  7.27829         osd.2  up...
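
      For what it's worth: the osd.N names are fixed numeric ids and cannot be renamed, but the CRUSH buckets around them (for example the host entries) can; a minimal sketch, with oldname/newname as placeholders:

        ceph osd crush rename-bucket oldname newname
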
  13. [SOLVED] Ceph SSD pool slow performance

      We have a 500 Mb/s wireless link to another office, about 4 km away. Ping times are about 2 ms, but it's not the same 10GbE storage network that we have for the CEPH storage.
  14. [SOLVED] Ceph SSD pool slow performance

      I am curious: how would you configure CEPH so it does not run slow if the servers are in different datacenters like this?
  15. Apply/Commit Latency on Ceph

      I ran a straight ping between the host nodes, hence also asking the OP what method he used for his measurements. I'll install and test with ioping quickly.
  16. Apply/Commit Latency on Ceph

      What latency are you referring to? Network latency between the servers? How did you measure the latency? In a similar setup, our latency on the storage network is 0.011 ms on average.
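
      A hedged sketch of both measurements mentioned in these two posts (the address 10.10.10.2 and the OSD mount path are placeholders):

        ping -c 100 10.10.10.2                    # network round-trip on the storage link
        apt install ioping
        ioping -c 10 /var/lib/ceph/osd/ceph-1/    # disk-level I/O latency on an OSD mount
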
  17. cannot stop ceph OSD from command line

      Aha! So I can only run that command from the host node on which the OSD resides, not from any host node. Gotcha ;)
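
      For reference, on a systemd-based node the local OSD service can be stopped and started like this (OSD id 2 is a placeholder, and the commands must be run on the node that hosts that OSD):

        systemctl stop ceph-osd@2
        systemctl start ceph-osd@2
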
  18. How to pin CPU cores to ceph OSD?

      Thanks. How does Proxmox pin the CPU cores?
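
      As far as I can tell, Proxmox does not pin OSD threads to cores by itself; a manual sketch with taskset, assuming OSD id 1 and cores 0-3 as placeholders:

        # look up the OSD daemon's main PID, then restrict its CPU affinity
        taskset -cp 0-3 $(systemctl show -p MainPID --value ceph-osd@1)
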