I'm still testing with Proxmox, clustering & DRBD.
I built two machines, each a Sandy Bridge Core i5 system with 8 GB of RAM and an extra PCI Express Intel Gigabit NIC.
The boot disks are 10,000 RPM Raptors, and for the DRBD cluster I used 1 TB Seagate Barracuda 7200 RPM disks. I was planning to replace the boot disks with 45 GB SSDs.
I'm using the Intel NIC with a short direct connection for syncing the DRBD cluster.
I followed this wiki for my DRBD setup:
http://pve.proxmox.com/wiki/DRBD
I haven't had the time yet to fully study which settings would be best in my case.
I also didn't understand the rate limit of 30M. AFAIK nothing else is going over this connection besides DRBD traffic. I assumed this rate limit only applies when normal data is going over the same NIC (could someone clarify?). I changed it to 90M.
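For reference, this is roughly the part of the resource config I changed (a sketch based on the wiki example, assuming DRBD 8.3-style syntax and the resource name r0):

    resource r0 {
            ...
            syncer {
                    rate 90M;    # was 30M in the wiki example
            }
            ...
    }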
I have installed Microsoft SBS 2011 as the only guest, giving it 6 GB of RAM and a 140 GB hard disk.
My problem now is that I'm a bit disappointed with the performance of this setup.
Before this I was running SBS 2008 on a first-generation i5 with 4 GB of RAM, without any virtualization, and it performed much, much faster than this setup.
Is this to be expected, and what is the likely bottleneck?
I would really like some tips to improve the performance without turning this into a much more expensive setup.
My main reasons for virtualization are the hardware abstraction, which makes it easier to migrate the system, and the RAID-1-style mirroring (DRBD) using normal SATA disks.