VM with FreeBSD

rpygu

New Member
Nov 11, 2013
Hello. At our work we are using FreeBSD + Postgres as the database server.
We bought a new server based on the C602-A PCH chipset and decided to use Proxmox. I installed Proxmox 3.2, created a new VM with FreeBSD 9.2, installed Postgres 9.3.1 and restored a dump of our database. I wrote a big SQL query to analyse the performance of the VM. The average result was 120 msec (without Proxmox the result is 5 msec). I made another VM with FreeBSD 8.4 and the same Postgres. The average result is 50-55 msec.
I tried to use the virtio drivers, but the result didn't change.
Please tell me, is it normal that the performance is so poor? Can I improve the performance of a VM with FreeBSD?


In all cases I used the default postgres.conf. All VMs had the maximum CPU and memory.
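
For reference, a minimal sketch of how such a timing can be taken inside the guest with psql's \timing; the database and table names below are only placeholders, not our real schema:

Code:
# minimal sketch, run inside the guest; "mydb" and "big_table" are placeholders
psql -d mydb <<'EOF'
\timing on
SELECT count(*) FROM big_table;   -- stand-in for the real analytical query
EOF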
 
Hi,

which kind and format of storage are you using?
many others will help you, but it would help to see some usual details:

- "pveperf" output
- vm .conf file
- "pveversion -v" output

Marco
 
I tried both RAW and QCOW2 formats but didn't notice a difference.

Code:
root@proxmox-1:~# pveperf
CPU BOGOMIPS:      95996.40
REGEX/SECOND:      941352
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    167.61 MB/sec
AVERAGE SEEK TIME: 3.30 ms
FSYNCS/SECOND:     1137.27
DNS EXT:           149.18 ms
DNS INT:           1.10 ms (relant)

Code:
root@proxmox-1:~# pveversion
pve-manager/3.1-3/dc0e9b0e (running kernel: 2.6.32-23-pve)

FreeBSD 9.2:
Code:
root@proxmox-1:~# cat 100.conf
bootdisk: ide0
cores: 8
ide2: none,media=cdrom
localtime: 1
memory: 49152
name: hq1-92
net0: virtio=92:7B:FB:31:90:C5,bridge=vmbr0
ostype: other
scsihw: virtio-scsi-pci
sockets: 2
virtio0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=20G
virtio1: storage-1:vm-100-disk-1,size=512G
virtio2: storage-ssd:vm-100-disk-1,size=32G

FreeBSD 8.4:
Code:
root@proxmox-1:~# cat 101.conf
boot: cdn
bootdisk: ide0
cores: 8
ide2: none,media=cdrom
memory: 49152
name: hq1-84
net0: virtio=CE:04:47:CB:F2:67,bridge=vmbr0
ostype: other
sockets: 2
virtio0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=48G
virtio1: local:101/vm-101-disk-2.qcow2,format=qcow2,size=24G
 
> FSYNCS/SECOND: 1137.27
this is not really high... that could be part of the cause

post also some details about your local storage (I'm no expert, but hopefully udo will drop in here... :-D he's a real expert...)

> pve-manager/3.1-3/dc0e9b0e (running kernel: 2.6.32-23-pve)

there is a newer version: proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)

you could also try that

anyway, you should post the full output of "pveversion -v"

Marco
 
> post also some details about your local storage (I'm no expert, but hopefully udo will drop in here... :-D he's a real expert...)
We are using an Adaptec RAID 5405 + 2 SAS 147GB Seagate ST9146853SS (mirror).
But I don't think that is the reason, because I created a new VM with Debian 7.2 and Postgres on the same storage, restored the dump and ran the SQL query. The result is 6-8 msec.

Code:
root@proxmox-1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
 
> We are using an Adaptec RAID 5405 + 2 SAS 147GB Seagate ST9146853SS (mirror).

your FSYNCS/SECOND seems too low to me.

I have IBM servers with
- 2x73 GB 15 000 rpm 2.5-inch SAS (mirror)
- serveraid M5015
- 2xE5520 CPU @ 2.27GHz

but my FSYNCS/SECOND are much higher...

Code:
# pveperf
CPU BOGOMIPS:      72523.43
REGEX/SECOND:      854854
HD SIZE:           16.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    142.18 MB/sec
AVERAGE SEEK TIME: 4.19 ms
FSYNCS/SECOND:     3180.62
DNS EXT:           41.53 ms
DNS INT:           0.75 ms (apiform.to.it)

low FSYNCS/SECOND is always related to bad disk performance...

could it be a cache issue...?

let's hope for other opinions here...

[update] even my (busy) NFS server storage mounts give me 1300-1400 FSYNCS/SECOND...

try pveperf /mnt/networkmount for comparison (pveperf by default refers to /)


Marco
 
Is the write cache active on your Adaptec? It looks like the BBU is missing, so there is no active write cache. Do you have one of these: Adaptec Battery Module 800T?
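
If arcconf is installed on the host, the cache state can be checked (and changed) roughly like this - a sketch only, controller 1 / logical device 0 are assumptions, adjust to your setup:

Code:
# sketch; controller and logical-device numbers are assumptions
arcconf GETCONFIG 1 LD                  # look at the "Write-cache mode" line
arcconf SETCACHE 1 LOGICALDRIVE 0 WB    # write-back - only safe with a working BBU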
 
when my IBM server's RAID controller BBU failed, it dropped from 3000+ to 60...
but the RAID software (MegaRAID) automatically changes the write cache:
- when bbu is OK => writeback
- when bbu is KO => writethrough

rpygu has 1100... not very high, but not as low as mine was...

so I'm not sure if it is a BBU issue, but imho he should check whether RAID writeback is on and whether there is any other issue with the RAID or the disks.

- could the write cache of the VM disks also be involved? (a config sketch is below)
- and the VMs are using 2 sockets, 8 cores, ~50 GB RAM each! On what real CPU, and how much real RAM in total?
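
For the VM side, the cache mode is set per disk in the VM config; a sketch against VM 100 from the conf above (whether writeback is really the right choice here is another question):

Code:
# sketch: add a cache= option to the disk line in /etc/pve/qemu-server/100.conf
virtio0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=20G,cache=writeback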

Marco
 
Yes, you were right. The write cache was disabled. I have turned it on.
Code:
CPU BOGOMIPS:      95989.20
REGEX/SECOND:      943101
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    310.39 MB/sec
AVERAGE SEEK TIME: 3.25 ms
FSYNCS/SECOND:     2830.02
DNS EXT:           92.48 ms
DNS INT:           1.08 ms
But then I checked the SQL query in FreeBSD 8.4 and the result did not change. Still 55 msec.

My server config:
Motherboard: ASUSTeK Computer INC. Z9PR-D12
CPU: 2 x Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Memory: 64 GB

As I wrote, I tried to install a VM with Debian 7.2 and got an amazing result: 5-6 msec. I think the issue is in the software, not in the hardware.
 
ok, at least now you have better disk performance :-D

back to your problem: the differences could come from any of the packages the pve devs use to build pve's kvm stack. It could be the kernel (and its optimizations), qemu, kvm, a lot of things.

e.g.: pve is a Debian-based system BUT has special kernels derived from RedHat, not Debian. The Proxmox devs adapt those kernels to create the great pve cluster system.
Now, RedHat kernels are, I think, quite "stable" and prefer reliability over performance and being on "the edge"... and then the Proxmox devs have to modify them, so it could be an even longer process.

what you could try, to run better tests, is: install pve on top of your highly performing 7.2 (see http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy)
then you'll get a regular wheezy with the pve packages and additional pve kernels next to the standard Debian ones!

you should be able to
* boot the regular Debian kernel and run kvm (not pve's, but what you get from Debian)
* boot the pve kernel and run pve's kvm

and compare performance: the differences could be in any package that differs between the two setups
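
A minimal sketch of such a plain-KVM run on the stock Debian kernel, assuming Debian's qemu-kvm package provides the kvm wrapper (disk path, memory and CPU counts are only examples):

Code:
# plain Debian qemu-kvm, not pve's; adjust path and sizes to the real image
kvm -m 8192 -smp 8 \
    -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=virtio \
    -net nic,model=virtio -net user \
    -vnc :1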

My experience stops here, sorry... probably you should get help from more expert users here...

Marco
 
Hi Marco

What I can share is that FreeBSD 10 will be the first release to ship with bhyve (their BSD-licensed hypervisor), which also relies on virtio - so, starting with 10, the FreeBSD devs will be getting more interested in virtio.
Their virtio drivers have made considerable progress (also stability-wise) starting with 8.3+.

I've mostly been interested in the virtio NIC (pfSense), where it performs quite well (better than e1000). There was a presentation at AsiaBSDCon 2012 by Takeshi Hasegawa which (likely) mentions what you may be hitting as the problem - virtio_blk, at least back in 2012, wasn't really up to speed on Linux KVM. :-\ (http://de.slideshare.net/TakeshiHasegawa1/runningfreebsdonlinuxkvm)

It seems your 8.4 VM uses IDE disks while 9.2 uses virtio - how does 9.2 behave with IDE / SCSI vs. the current virtio disks? (and maybe you could give 10-BETA/RC a try, since not all driver updates get merged back from FreeBSD-CURRENT)
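
To see which driver the guest disk actually attaches to and to get a quick raw number, something like this inside FreeBSD might help (device names are assumptions - vtbd0 for a virtio disk, ada0 for an IDE/AHCI one):

Code:
# inside the FreeBSD guest; device names depend on the disk bus in the VM config
dmesg | egrep -i 'virtio|vtbd|ada'   # which driver attached to the disk
diskinfo -tv /dev/vtbd0              # quick transfer-rate test on the virtio disk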
 
