Hello,
In our environment I am seeing some performance issues; maybe someone can help me find where the problem is.
We have 6 servers on PVE 4.4 with ca. 200 VMs (Windows and Linux). All VM disks (RBD) are stored on a separate Ceph cluster (10 servers, 20 SSD OSDs as a cache tier and 48 HDD OSDs).
I ran some I/O tests with fio from a Linux VM (Linux test01 3.13.0-110-generic #157-Ubuntu SMP Mon Feb 20 11:54:05 UTC 2017 x86_64). The results are below, followed by the approximate fio job I used:
- VM disk stored on Ceph:
  - READ: io=540704KB, aggrb=6199KB/s, minb=481KB/s, maxb=3018KB/s, mint=34097msec, maxt=87216msec
  - WRITE: io=278496KB, aggrb=8167KB/s, minb=479KB/s, maxb=14077KB/s, mint=18622msec, maxt=34097msec
- VM disk stored on a single (RAID0) SATA drive:
  - READ: io=540704KB, aggrb=736KB/s, minb=333KB/s, maxb=361KB/s, mint=49232msec, maxt=733843msec
  - WRITE: io=278496KB, aggrb=5656KB/s, minb=332KB/s, maxb=11234KB/s, mint=23334msec, maxt=49232msec
- VM disk stored on a single (RAID0) SAS drive (15k):
  - READ: io=540704KB, aggrb=1597KB/s, minb=498KB/s, maxb=782KB/s, mint=32905msec, maxt=338542msec
  - WRITE: io=278496KB, aggrb=8463KB/s, minb=496KB/s, maxb=39390KB/s, mint=6655msec, maxt=32905msec
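The fio job was roughly the following (I don't have the exact job file at hand, so treat the parameters and paths as approximate placeholders, not the exact command):

# approximate fio job, run inside the VM against the disk under test
# (directory, size and mix are placeholders reconstructed from memory)
fio --name=test --directory=/mnt/testdisk --size=200M --numjobs=4 \
    --rw=randrw --rwmixread=66 --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=16 --group_reporting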
VM config is:
agent: 1
balloon: 0
boot: c
bootdisk: virtio0
cores: 4
cpu: host
hotplug: 0
ide2: none,media=cdrom
memory: 8192
name: test
net0: virtio=32:65:61:xx:xx:xx,bridge=vmbr0,tag=2027
numa: 1
ostype: l26
virtio0: ceph01:vm-2027003-disk-3,cache=none,size=10G
virtio1: ceph01:vm-2027003-disk-2,cache=none,size=10G
scsihw: virtio-scsi
smbios1: uuid=8c947036-c62c-4e72-8e4f-f8d1xxxxxxxx
sockets: 2
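For reference, the config above can be dumped on the PVE host with qm config (assuming the VMID is 2027003, taken from the disk names):

qm config 2027003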
Proxmox Version (pveversion -v):
proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)
pve-manager: 4.4-12 (running version: 4.4-12/e71b7a74)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.40-1-pve: 4.4.40-82
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-92
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-94
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-3
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 9.2.1-1~bpo80+1
The network interfaces (1 Gbit/s) are never utilized at more than 30%.
Is this an issue with PVE, or is something wrong in our Ceph configuration? What configs should I post here to give you more info?
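If it helps, I can post for example our /etc/pve/storage.cfg and /etc/ceph/ceph.conf, plus the output of:

ceph -s
ceph osd tree
ceph osd dump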
Best Regards
Mateusz