Search results

  1. Ceph - High apply latency on OSD causes poor performance on VM

    Hi nethfel, My ceph06 box is a new OSD server that is older than the others (HP G6, P410i RAID card, 8 GB RAM, 4 x 300 GB for OSDs), but strangely it has the best latency numbers. That's why I think there is probably a hardware problem. I have also seen that the format and mount options...
  2. Ceph - High apply latency on OSD causes poor performance on VM

    Hi, Sorry for the error in ceph.conf: the value of both parameters is actually 4 # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep threads "osd_op_threads": "4", "osd_disk_threads": "4" (see the verification sketch after this list). We have 1 dedicated SSD of 240 GB per OSD node for journals (1 SSD of 240...
  3. Ceph - High apply latency on OSD causes poor performance on VM

    Hi, Since we installed our new Ceph cluster, we have frequently seen high apply latency on the OSDs (around 200 ms to 1500 ms), while commit latency stays continuously at 0 ms! According to the Ceph documentation, when you run the command "ceph osd perf" (a usage sketch follows this list), the fs_commit_latency is generally higher than...
  4. Proxmox 3.3 : smbios settings option doesn't accept space character

    Hi, When I create a VM in the Proxmox GUI, I want to specify some SMBIOS settings such as Manufacturer and Product (VM options). The problem is that the GUI doesn't accept the space character in the input field (a possible CLI workaround is sketched after this list). For example, I want to enter the Manufacturer value "Microsoft Corporation", but it's impossible to...
  5. Ceph - High latency on VM before OSD marked down

    Hi, I think the solution lies in the "mon osd adjust heartbeat grace" parameter. When I tested a rebuild the first time, I had a lot of latency on the OSDs due to, I think, the "osd max backfills" parameter (default: 10, changed to 1; both knobs are sketched after this list). Then, the time before the OSDs were marked down could last...
  6. Ceph - High latency on VM before OSD marked down

    Hi, Here is the result of the command: # ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config show | grep reporters "mon_osd_min_down_reporters": "1", I have decompiled the crushmap (the commands are sketched after this list) and the result is the same as your crushmap, and the same as what the Proxmox GUI displays: # begin crush...
  7. Ceph - High latency on VM before OSD marked down

    Hello, I'm testing my new Ceph installation with 3 nodes and 3 OSDs on each node. I have another Proxmox cluster with 1 Windows VM with a disk mapped on Ceph. When I stop 1 Ceph node, it takes nearly 1 minute before the 3 OSDs go down (I think that's normal). The problem is that disk access in the VM...
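
A minimal sketch of how the thread settings quoted in result 2 are usually set and then verified; only the values and the admin-socket query come from the snippet, the ceph.conf placement is an assumption:

    # Hypothetical ceph.conf excerpt (values from the snippet, placement assumed):
    [osd]
    osd op threads = 4
    osd disk threads = 4

    # Verify the values the daemon is actually running with, per OSD:
    # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep threads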
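
The latency check referenced in result 3 is a single command; a sketch showing only the documented column names (the thread's actual figures are truncated, so no output values are reproduced here):

    # ceph osd perf
    # columns: osd  fs_commit_latency(ms)  fs_apply_latency(ms)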
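
For the GUI limitation in result 4, a hedged workaround sketch: set the SMBIOS string from the shell with qm, where the value can be quoted. The VM ID is hypothetical, and I have not verified that the Proxmox 3.3 config parser keeps the quoted space:

    # Hypothetical: VM ID 100, value quoted so the shell passes the space through
    # qm set 100 -smbios1 'manufacturer=Microsoft Corporation'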
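
A sketch of the two knobs discussed in result 5; the backfill value is the one the thread used, not a recommendation, and the grep merely inspects the heartbeat-grace settings without asserting a value for them:

    # Throttle rebuilds at runtime (the thread changed the default 10 down to 1):
    # ceph tell osd.* injectargs '--osd-max-backfills 1'

    # Inspect the current heartbeat grace settings via an OSD admin socket:
    # ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep heartbeat_grace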
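
The crushmap comparison in result 6 is normally done by dumping the compiled map and decompiling it with crushtool; a sketch, with arbitrary file names:

    # ceph osd getcrushmap -o crushmap.bin
    # crushtool -d crushmap.bin -o crushmap.txt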
