performance simple cluster

frater

New Member
Mar 16, 2011
I'm still testing with Proxmox, clustering & DRBD.
I built two machines, each a Core i5 Sandy Bridge system with 8 GB of RAM and an extra PCI Express Intel Gigabit NIC.
The boot disks are 10,000 RPM Raptors, and for the DRBD cluster I used 1 TB Seagate Barracuda 7200 RPM disks. I was planning to replace the boot disks with 45 GB SSDs.

I'm using the Intel NIC with a short direct connection for syncing the DRBD cluster.
I followed this wiki for my DRBD setup:
http://pve.proxmox.com/wiki/DRBD
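The direct link is just a statically addressed interface on each box, along these lines (a sketch; eth1 is only an assumption for the name the extra Intel NIC got, and the addresses are the ones used in my drbd.conf further down):
Code:
# /etc/network/interfaces on proxmox-1 (sketch; eth1 assumed to be the extra Intel NIC)
auto eth1
iface eth1 inet static
        address 172.20.0.5
        netmask 255.255.255.0
# proxmox-2 uses 172.20.0.6 on the same /24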

I haven't had the time yet to fully study which settings would be best in this case.
I also didn't understand the rate limit of 30M in the wiki.
AFAIK nothing other than my DRBD traffic is going over this connection.
I assumed this rate limit is only needed when normal data goes through the same NIC (could someone clarify?).
I changed it to 90M.
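Concretely, this is the only line I changed from the wiki example (my full drbd.conf is further down in this thread):
Code:
# /etc/drbd.conf - the wiki example uses "rate 30M" here
common { syncer { rate 90M; } }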

I have installed Microsoft SBS 2011 as the only guest, assigning it 6 GB of RAM and a 140 GB hard disk.

My problem now is that I'm a bit disappointed with the performance of this setup.
Before, I was running SBS 2008 on a 1st-generation i5 with 4 GB of RAM, without any virtualization, and it was performing much, much faster than this setup.

Is this to be expected, and what is the likely bottleneck?
I would really like some tips to improve the performance without turning this into a much more expensive setup.

My main reasons for virtualization are the hardware abstraction, which makes it easier to migrate the system, and the RAID-1-like replication (DRBD) using normal SATA disks.
 
Code:
# pveversion -v
pve-manager: 1.8-15 (pve-manager/1.8/5754)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-32
pve-kernel-2.6.32-4-pve: 2.6.32-32
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-11
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.0-3
ksm-control-daemon: 1.0-5

My /etc/qemu-server folder is empty.

I'm sorry, but at this moment I can't provide you with any benchmarks.
 
So you do not have any KVM guests?
Sorry, I wasn't aware of what to look for there and had logged into the primary node of the cluster.
Although I installed the SBS there, I later did a live migration test to the secondary node.

Code:
proxmox-2:~# cat /etc/qemu-server/101.conf
name: SmallBusiness_SBS
ide2: none,media=cdrom
bootdisk: ide0
ostype: w2k8
ide0: DRBD0:vm-101-disk-1
memory: 6144
onboot: 1
sockets: 1
cores: 2
boot: dc
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
vlan0: rtl8139=96:8B:B9:94:2E:16
hostusb: 3538:0059
Code:
 proxmox-2:~# pveversion -v
pve-manager: 1.8-15 (pve-manager/1.8/5754)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-32
pve-kernel-2.6.32-4-pve: 2.6.32-32
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-11
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.0-3
ksm-control-daemon: 1.0-5
Code:
proxmox-2:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
stepping        : 7
cpu MHz         : 3092.567
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 6185.13
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
stepping        : 7
cpu MHz         : 3092.567
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
apicid          : 2
initial apicid  : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 6185.91
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
stepping        : 7
cpu MHz         : 3092.567
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
apicid          : 4
initial apicid  : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 6185.90
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
stepping        : 7
cpu MHz         : 3092.567
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 6185.90
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
Code:
proxmox-2:~# cat /etc/drbd.conf
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global { usage-count no; }
common { syncer { rate 90M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;     # wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "q1w2e3r4";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on proxmox-1 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.20.0.5:7788;
                meta-disk internal;
        }
        on proxmox-2 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.20.0.6:7788;
                meta-disk internal;
        }
}
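To double-check that both nodes are Connected and Primary/Primary (for example after the migration test), I just look at /proc/drbd:
Code:
proxmox-2:~# cat /proc/drbd
# resource 0 should show "cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate"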
 
frater said:
I'm still testing with Proxmox, clustering & DRBD. ...
I haven't had the time yet to fully study which settings would be best in this case.
I also didn't understand the rate limit of 30M in the wiki.
AFAIK nothing other than my DRBD traffic is going over this connection.
I assumed this rate limit is only needed when normal data goes through the same NIC (could someone clarify?).
I changed it to 90M.
...
Hi,
the sync rate only applies to resynchronization, not to the normal replication traffic; setting it to 90M makes no sense here.
I assume the sluggish feel comes from running DRBD over a 1 Gbit link (and, behind that, single SATA disks).
You can easily test this: switch DRBD off on one node with "drbdadm down r0". If the speed is then OK, the slowdown comes from DRBD.

And it's not only the bandwidth limit of 1 Gbit; latency also plays a role.
Faster setups are possible with bonding, 10 Gbit Ethernet, InfiniBand or Dolphin NICs.
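Roughly like this, with r0 from your config (a sketch; run the "down" on the node that is not running the VM):
Code:
# on the node NOT running the VM (proxmox-1 in your setup):
drbdadm down r0
# note: this only works if nothing on that node still has the DRBD device open

# the other node now writes to its local disk only;
# repeat your test inside the guest, then reconnect:
drbdadm up r0
cat /proc/drbd      # watch the resync progress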

Udo
 
