Search results

  1. Proxmox, Ceph and local storage performance

    Ceph30 also has a separate 1G interface for the cluster. As I wrote before, all SSD drives are identical. On each server there are 2 SSDs; please look at the partition table for these drives. The first holds the system, journals, and an OSD: parted /dev/sda Disk /dev/sda: 199GB Sector size (logical/physical)...
  2. Proxmox, Ceph and local storage performance

    Yes, I'm sure, please look at the output of ceph osd pool ls detail. Did you mean avq, avio? Please look at the ceph_reads and ceph_writes charts; they are parsed from today's ceph -w output.
  3. Proxmox, Ceph and local storage performance

    ceph10: 2x E5504 @ 2.00GHz, 32GB RAM, 4x NetXtreme II BCM5709 Gigabit Ethernet (2 active)
    ceph15: 2x E5504 @ 2.00GHz, 32GB RAM, 4x NetXtreme II BCM5709 Gigabit Ethernet (2 active)
    ceph20: 2x E5410 @ 2.33GHz, 32GB RAM, 4x 82571EB Gigabit Ethernet Controller (2 active)
    ceph25: 2x E5620 @...
  4. Proxmox, Ceph and local storage performance

    Yes, but network device usage peaks at 30%. This is a replicated (replica 3) pool with a cache tier and journal on SSD. All SSDs are INTEL SSDSC2BX200G4. How can I check this? With a 4MB block size, local SATA storage gets 69MB/s read and 20MB/s write. OK
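A 4MB block-size test like the one described there could be reproduced with an fio job along these lines. This is only a sketch: the file name, size, and iodepth are placeholder assumptions, not values from the thread.

```ini
# Hypothetical fio job for a 4M sequential read/write check on local SATA
# storage; filename, size, and iodepth are assumptions, not from the thread.
[global]
filename=/var/lib/vz/fio_seq_test
size=4G
bs=4M
ioengine=libaio
direct=1              # bypass the page cache so the drive itself is measured
iodepth=4

[seq-read]
rw=read
stonewall             # finish the read phase before the write phase starts

[seq-write]
rw=write
stonewall
```

With direct=1 the reported bandwidth reflects the device rather than cached pages, which matters when comparing against Ceph RBD results.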
  5. Proxmox, Ceph and local storage performance

    Tested from a live CD on my laptop, using fio and this config: [global] ioengine=rbd clientname=admin pool=sata rbdname=fio_test invalidate=0 # mandatory rw=randwrite bs=4k [rbd_iodepth32] iodepth=32 Result: write: io=2048.0MB, bw=7717.5KB/s, iops=1929, runt=271742msec So it looks...
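The job file quoted in that snippet, reflowed into fio's INI job-file format (all values exactly as given in the post):

```ini
# fio job file reconstructed from the snippet above; values as posted.
[global]
ioengine=rbd          # talk to the RBD image directly via librbd
clientname=admin      # cephx user
pool=sata             # pool holding the test image
rbdname=fio_test      # pre-created RBD image to write to
invalidate=0          # mandatory for the rbd engine
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32
```

Run with `fio <jobfile>`; the rbd engine requires the image to exist beforehand and a reachable ceph.conf/keyring for the named client.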
  6. Proxmox, Ceph and local storage performance

    There is no tuning on Proxmox; upgraded from 4.0 last week (but the same symptoms on 4.0). Fio 2.1.11 on local storage (SATA) from the hypervisor: bgwriter: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32 queryA: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=mmap...
  7. Proxmox, Ceph and local storage performance

    Hello, in our environment I see a performance issue; maybe someone can help me find where the problem is. We have 6 servers on PVE 4.4 with ca. 200 VMs (Windows and Linux). All VM disks (RBD) are stored on a separate Ceph cluster (10 servers, 20 SSD OSDs as a cache tier and 48 HDD OSDs). I...
  8. Proxmox 4. Cluster nodes being red :(

    Thank you for this info. On Sunday I moved the dev cluster to VLAN 2000, but this did not help. After reading these links, I enabled the IGMP L2 general querier and now there is quorum on each cluster. Unfortunately omping is not working (before and after the L2 querier change); I suppose it should...
  9. Proxmox 4. Cluster nodes being red :(

    Hello, today I shut down all Proxmox servers, then started them one by one. Each server joins the cluster and works, but only for 10 minutes, and then quorum is lost. IGMP snooping is now enabled globally, and the dev cluster is switched off. corosync.log is attached. pvecm status Version: 6.2.0 Config...
  10. Proxmox 4. Cluster nodes being red :(

    Probably IGMP snooping is disabled; I'm looking at this now. The dev cluster was recreated via pvecm. The clusters have unique names ('backup' and 'c01'). Should I restart one by one, starting from the first node? /etc/pve/cluster.conf from the 3.4 cluster: <?xml version="1.0"?> <cluster name="c01"...
  11. Proxmox 4. Cluster nodes being red :(

    Hello, I have a similar issue. We have 2 PVE clusters, dev and production. The clusters are connected to the same switches, on different IP networks, but without VLANs. Yesterday I upgraded the dev cluster from PVE 3.4 to 4.1. The procedure from the PVE wiki finished without problems, but after a few minutes the dev cluster...
