Search results

  1. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    Thanks, Aaron, very much for the continued help; it provided a lit path in the darkness. It has been two weeks of 50% trial and error and 50% iterative planned improvement. We tried everything Aaron suggested and more; however, as Aaron stated, HDDs are, after all, HDDs, and we were never...
  2. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    Thanks, Aaron. We increased our PGs to 128 based on the calculator and the current percentage of use. Of course, with the autoscaler on it dropped back down to 32, so we have set the autoscaler to warn and it is rebalancing; after that we will run more tests. We still see over 50% wa and sometimes as...
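
    For reference, a minimal sketch of what raising the PG count by hand and putting the autoscaler into warn-only mode looks like (the pool name is a placeholder):

        ceph osd pool set <pool> pg_num 128
        ceph osd pool set <pool> pg_autoscale_mode warn
        ceph osd pool autoscale-status

    With pg_autoscale_mode set to warn, the autoscaler raises a health warning instead of overriding a manual pg_num change.
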
  3. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    OK, I have confirmed that our switches can support 9214 bytes and have updated all the configs for the private network on all PVE hosts. Thanks, Aaron, for all the recommendations. We do still seem to have some good news and some bad. We still see iowait states at 3-12, then to 0 with the server...
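
    A sketch of what that jumbo-frame change might look like on a Linux/PVE host (the interface name is an assumption; 8972 = 9000 minus 28 bytes of IP/ICMP headers, and the ping verifies the path end to end without fragmentation). On PVE the MTU can also be set persistently in the host network configuration:

        ip link set dev <private-iface> mtu 9000
        ping -M do -s 8972 10.62.2.98
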
  4. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    It is actually doing a rebalance at the moment; it has been going for about two hours: Recovery/Rebalance: 96.31% (14.58 MiB/s - 1h 23m 28.7s left)
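
    The same recovery/rebalance progress shown in the PVE GUI can be followed from the CLI; ceph -s reports the recovery rate under its io section:

        watch -n 5 ceph -s
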
  5. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    I did just check our old cluster and it is also at 1500. One noticeable difference is that there is 0/0 apply/commit latency, whereas the new cluster is between 20-60 ms with the average being around 30.
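
    The apply/commit latency shown in the PVE GUI appears to correspond to the per-OSD figures that can be listed directly, which may help spot whether a few slow OSDs are dragging up the average:

        ceph osd perf
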
  6. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    I have asked if the provider's switches support jumbo frames; our current network config is at MTU 1500.
  7. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    Connecting to host 10.62.2.98, port 5201
    [  5] local 10.62.2.97 port 39472 connected to 10.62.2.98 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  1.10 GBytes  9.41 Gbits/sec    0   1.62 MBytes
    [  5]   1.00-2.00   sec  1.08 GBytes  9.26...
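
    The output above matches iperf3; reproducing the test between the two private-network addresses would look roughly like this:

        # on 10.62.2.98 (server side)
        iperf3 -s
        # on 10.62.2.97 (client side)
        iperf3 -c 10.62.2.98 -p 5201 -t 10
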
  8. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    root@pve1:~# fdisk -l
    Disk /dev/sde: 446.63 GiB, 479559942144 bytes, 936640512 sectors
    Disk model: PERC H750 Adp
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 262144 bytes / 262144 bytes
    Disklabel type: gpt
    Disk...
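
    Note the model string "PERC H750 Adp": the disks are presented by the RAID controller rather than directly, which matters since Ceph generally prefers plain HBA access over RAID virtual disks. A quick way to check which devices are rotational and what model string the controller exposes:

        lsblk -d -o NAME,ROTA,MODEL,SIZE
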
  9. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    Thank you for the assistance; I hope this helps. The provider did tell us that the SSDs were enterprise/datacenter drives, and it is a reputable hosting provider, although the storage engineers had no real experience with Ceph.
  10. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    [global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.62.2.97/27
         fsid = 460edbc2-56d9-44d0-a07d-891edffe6f0b
         mon_allow_pool_delete = true
         mon_host = 10.62.2.97 10.62.2.99 10.62.2.101 10.62.2.98...
  11. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    boot: order=scsi0
    cores: 1
    memory: 32000
    name: 2.31.zab1
    net0: virtio=C2:63:E0:D8:7F:A9,bridge=vmbr93,firewall=1,tag=3002
    numa: 0
    ostype: l26
    scsi0: images_2:vm-300231-disk-0,iothread=1,size=32G
    scsi1: images_2:vm-300231-disk-1,iothread=1,size=5G
    scsi2...
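
    This looks like Proxmox VM configuration output; assuming the VMID 300231 implied by the disk names, it can be dumped on a PVE node with:

        qm config 300231
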
  12. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL   %USE  VAR   PGS  STATUS  TYPE NAME
    -1         36.38794         -  36 TiB   1.9 TiB  1.9 TiB  334 KiB  16 GiB  35 TiB  5.17  1.00    -          root default
    -3          7.27759         -  7.3 TiB  314...
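
    The table layout above matches the output of:

        ceph osd df tree
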
  13. Ceph across 5 hosts and 20 OSDs on spindles: no iowait on PVEs, but VMs very high, even a single VM

    Hi everyone, we have a constant-write application (4-10 MB/s total) with ~120 VMs, and we burned through SSDs in just under 2 years with our dedicated hosting provider, so they recommended we go back to spindles. We have built 5 new servers as follows: high core count, 256 GB RAM, 480 GB SSD for the...
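
    A rough back-of-the-envelope on the SSD wear, taking the stated 4-10 MB/s of constant writes at face value:

        10 MB/s x 86,400 s/day ≈ 864 GB/day
        864 GB/day x 730 days ≈ 630 TB written over 2 years

    That volume, multiplied further by Ceph's replication factor and the drives' internal write amplification, can plausibly exhaust the rated endurance of 480 GB SSDs within two years.
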
  14. Hyper-Converged cluster on dissimilar hardware

    After running deep scrubs and enabling balancing, all of the errors are now gone and the cluster is in good health. I'm still relatively sure this is not an optimal setup, as I have read there may be better ways to organize buckets, but then again my experience with Ceph is hours old. I'm going to start...
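
    For reference, the deep-scrub and balancer operations described here map to commands along these lines:

        ceph osd deep-scrub all
        ceph balancer on
        ceph balancer mode upmap
        ceph balancer status
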
  15. Hyper-Converged cluster on dissimilar hardware

    Hi, we have the servers described below; they cannot be changed, as we are using MaaS and they are configured this way. I realize this is not optimal, but I would prefer to spend time figuring out what can work with this hardware layout and how we can leverage what we have. I am not...