Search results

  1. CEPH performance

    We used dd to ensure the single disk is running fast enough on the node itself. Furthermore, we used dd inside of a VM, which shouldn't be entirely useless, or am I wrong? But we also used fio: root@cloud-node11:/mnt# fio --filename=/dev/sdd --direct=1 --sync=1 --rw=write --bs=4k --numjobs=6... (a fuller fio invocation is sketched after this list)
  2. CEPH performance

    root@cloud-node11:/mnt# cat /etc/ceph/ceph.conf
    [global]
    auth client required = cephx
    auth cluster required = cephx
    auth service required = cephx
    cluster network = 10.16.70.0/24
    fsid = 09fbf10f-836d-4bc2-b678-b78897966984c1
    keyring = /etc/pve/priv/$cluster.$name.keyring
    ...
  3. CEPH performance

    You are right, but that shouldn't be the reason for poor performance...
  4. Proxmox 4.4 all nodes rebooted

    Sorry, yes indeed, we are working together on this issue. In the meantime we changed bindnetaddr to the network address (see the corosync sketch after this list).
  5. Proxmox 4.4 all nodes rebooted

    It runs on our management network, which carries no other traffic except our access to the WebGUI of the Proxmox cluster. As there are normally never more than two such accesses at the same time, I think this is not really noticeable in terms of traffic. Ceph runs on a dedicated Infiniband network (a ceph.conf network sketch follows after this list).
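
A complete fio invocation along the lines of the truncated command in result 1 might look as follows. This is a minimal sketch of a 4k synchronous write test; the job name and the runtime-related flags are assumptions, not taken from the original post.

# hypothetical completion of the fio call from result 1: 4k synchronous writes,
# six jobs against the raw disk; --name, --iodepth, --runtime and --time_based are assumed
fio --filename=/dev/sdd --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=6 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=ceph-disk-writetest

Note that writing directly to /dev/sdd overwrites data on the device, so such a test only makes sense on a disk that is not yet in use by Ceph.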
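
Result 4 mentions setting bindnetaddr to the network address. A minimal corosync.conf totem section illustrating that change, with a placeholder subnet (the actual management subnet does not appear in the snippets):

totem {
  version: 2
  interface {
    ringnumber: 0
    # network address of the management subnet (placeholder), not a host IP
    bindnetaddr: 192.168.10.0
  }
}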
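
Result 5 describes corosync on the management network and Ceph on a dedicated Infiniband network. In ceph.conf that separation is typically expressed with the public network and cluster network options; a sketch, reusing the cluster network shown in result 2, while the public subnet is purely a placeholder (whether the public network also sits on the Infiniband side is not stated in the posts):

[global]
# 10.16.70.0/24 is the cluster network from result 2; the public subnet is a placeholder
cluster network = 10.16.70.0/24
public network = 10.16.71.0/24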
