Install guide: Proxmox on software RAID (Hetzner EQ6 server)

Sure, vmbr0 and eth0 have the same IP. The machine is running.
 
Hi Ellen,

Thank you for your work.

I also did some 'unsupported' Debian software-RAID installations. On my Hetzner EQ-4 and some physically accessible servers everything looked quite good until I put load on the machines. I checked them with pveperf and got very low values (FSYNCS/SECOND: 118.54).

After looking around I read that they should be about ten times higher. I grabbed a machine at home (Athlon64X2-3200), did an unsupported Debian installation of Proxmox (http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny) and ran pveperf.

BUFFERED READS: 6.77 MB/s
FSYNCS/SECOND: 103.15

Then I did a plain CD installation with Proxmox VE 1.6 (I think VE 1.5 gives the same result) and got values about ten times higher.

BUFFERED READS: 56.65 MB/s
FSYNCS/SECOND: 890.32

Would you please have a look at your machine and post your values?
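In case it helps for comparison, this is how I call it (the path argument is optional and just an example; it points the benchmark at a specific mount such as the VM storage):

Code:
# run the Proxmox benchmark against the root filesystem (default)
pveperf

# or against a specific mount point, e.g. the VM storage
pveperf /var/lib/vz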

Thanks for your efforts,
 
Post the output of pveversion -v (from your Hetzner server). I think I know the solution.
 
Hi tom,

Thanks for your reply. I went into my lab today and worked it out. I am dog-tired as I write this, so sorry about that.

First I thought it might be an error in my partition scheme, but have a look for yourself:

My "first" testsetup (already made it more than one time):
pveversion -v

pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-3-pve
proxmox-ve-2.6.32: 1.6-14
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4

pveperf
CPU BOGOMIPS: 42700.32
REGEX/SECOND: 1078674
HD SIZE: 9.92 GB (/dev/mapper/pve-root)
BUFFERED READS: 105.12 MB/sec
AVERAGE SEEK TIME: 10.21 ms
FSYNCS/SECOND: 122.92
DNS EXT: 67.32 ms

Then I used nearly exactly the partition scheme from Ellen's howto.
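Roughly, that means md RAID1 with LVM on top. A minimal sketch of how such a layout is created, assuming /dev/sda2 and /dev/sdb2 as the RAID partitions and the volume group name vg0 that shows up in the output below (Ellen's howto has the exact partitioning and sizes):

Code:
# mirror the two big partitions (device names assumed)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# put LVM on top of the mirror
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 100G -n root vg0   # becomes /dev/mapper/vg0-root
lvcreate -L 4G -n swap vg0

mkfs.ext3 /dev/vg0/root
mkswap /dev/vg0/swap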


pveversion -v
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-3-pve
proxmox-ve-2.6.32: 1.6-14
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4


pveperf
CPU BOGOMIPS: 42771.22
REGEX/SECOND: 1085225
HD SIZE: 99.21 GB (/dev/mapper/vg0-root)
BUFFERED READS: 103.49 MB/sec
AVERAGE SEEK TIME: 13.59 ms
FSYNCS/SECOND: 122.15
DNS EXT: 481.98 ms

Finally I made a setup without RAID, and you will see:


pveversion -v
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-3-pve
proxmox-ve-2.6.32: 1.6-14
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4


pveperf
CPU BOGOMIPS: 42575.61
REGEX/SECOND: 1080257
HD SIZE: 99.21 GB (/dev/mapper/vg0-root)
BUFFERED READS: 108.07 MB/sec
AVERAGE SEEK TIME: 14.41 ms
FSYNCS/SECOND: 1254.22
DNS EXT: 159.02 ms

I also verified this with a locally accessible box in my lab. The output of pveversion -v is always the same (as above), but the pveperf values differ in the same way as on the Hetzner EQ-4:

Installation from the VE 1.6 ISO:


CPU BOGOMIPS: 24471.13
REGEX/SECOND: 1069240
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 124.64 MB/sec
AVERAGE SEEK TIME: 10.05 ms
FSYNCS/SECOND: 911.74
DNS EXT: 93.12 ms
DNS INT: 69.42 ms

Installation with Debian 5.0.6 amd64 and manual Proxmox VE setup:


CPU BOGOMIPS: 24471.38
REGEX/SECOND: 1101503
HD SIZE: 91.67 GB (/dev/mapper/vg0-root)
BUFFERED READS: 122.80 MB/sec
AVERAGE SEEK TIME: 9.69 ms
FSYNCS/SECOND: 929.57
DNS EXT: 111.80 ms
DNS INT: 98.13 ms



And finally md-RAID:


CPU BOGOMIPS: 24472.27
REGEX/SECOND: 1025920
HD SIZE: 91.67 GB (/dev/mapper/proxmox--i3--20100914-root)
BUFFERED READS: 117.66 MB/sec
AVERAGE SEEK TIME: 9.51 ms
FSYNCS/SECOND: 117.12
DNS EXT: 113.09 ms
DNS INT: 77.30 ms

I know, I have read the threads about Proxmox VE 1.4 and software RAID, and I never wanted to take time away from the staff members over this issue, so I thought I would bring it up in this thread.


And now you asked about my versions, and so I answered.


I hope you can help me and probably many others.

Thanks a lot for your great work,


Jürgen
 
OK! I got it!

Switching to kernel 2.6.24 solved the problem (verified with the box at home).
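If someone wants to reproduce the switch on an existing install, roughly what I did was pull in the 2.6.24 branch and boot it (a sketch; the package names are the ones pveversion lists below):

Code:
# install the 2.6.24 kernel branch next to the 2.6.32 one
aptitude install proxmox-ve-2.6.24

# make sure the 2.6.24-12-pve entry is the default in /boot/grub/menu.lst, then reboot
reboot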

Hetzner-EQ4:

pveversion -v
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.24-12-pve
proxmox-ve-2.6.24: 1.5-24
pve-kernel-2.6.24-12-pve: 2.6.24-24
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1

pveperf
CPU BOGOMIPS: 44910.21
REGEX/SECOND: 698953
HD SIZE: 49.61 GB (/dev/mapper/vg0-root)
BUFFERED READS: 100.87 MB/sec
AVERAGE SEEK TIME: 11.96 ms
FSYNCS/SECOND: 1090.16
DNS EXT: 57.89 ms


Home Box:

pveversion -v
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.24-12-pve
proxmox-ve-2.6.24: 1.5-24
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.24-12-pve: 2.6.24-24
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1

pveperf
CPU BOGOMIPS: 24472.15
REGEX/SECOND: 694444
HD SIZE: 91.67 GB (/dev/mapper/proxmox--i3--20100914-root)
BUFFERED READS: 121.94 MB/sec
AVERAGE SEEK TIME: 9.72 ms
FSYNCS/SECOND: 857.84
DNS EXT: 85.88 ms
DNS INT: 78.12 ms


When installing on the Hetzner box, don't forget to correct your /boot/grub/menu.lst and to remove /etc/udev/rules.d/70-persistent-net.rules before rebooting. Otherwise you will have to visit your rescue system again.
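In case it saves someone a trip back to the rescue system, the two steps look roughly like this (a sketch; which "default" entry is right depends on your own menu.lst):

Code:
# let udev regenerate the NIC names on first boot instead of keeping the rescue system's MAC mapping
rm /etc/udev/rules.d/70-persistent-net.rules

# check which kernel entry grub will boot and fix "default" if it is not the pve kernel
grep -n "^default\|^title" /boot/grub/menu.lst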

It felt like playing first-person shooters 15 years ago:
always running into the same big monster and having to start over again and again, until you find a new way over some crates along the wall and then, ahem, kill the monster from behind *g*.

Now let's get virtual again. I hope it helps someone,

Jürgen
 
Success.

It seems to be working now.
Here is the config.

Host
Code:
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address  88.198.57.68
        netmask  255.255.255.255
        gateway  88.198.57.65
        broadcast  88.198.57.95
        pointopoint 88.198.57.65

auto vmbr0
iface vmbr0 inet static
        address  46.4.192.225
        netmask  255.255.255.224
        gateway  88.198.57.68
        bridge_ports none
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address  46.4.192.253
        netmask  255.255.255.224
        gateway  88.198.57.68
        bridge_ports none
        bridge_stp off
        bridge_fd 0
KVM guest
Code:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
        address 46.4.192.229
        netmask 255.255.255.255
        broadcast 88.198.57.95
        gateway 88.198.57.68
        pointopoint 88.198.57.68
I followed skoop's post.
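One note for anyone copying this: with a routed setup like the above, the guests only reach the internet if the host actually forwards their packets. skoop's post covers the details; the host-side switches involved are roughly the following (a sketch, not verified on this exact box):

Code:
# on the Proxmox host: forward traffic between eth0 and the bridges
echo 1 > /proc/sys/net/ipv4/ip_forward

# only needed for single additional IPs (not for a routed subnet): answer ARP for them on eth0
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

# make ip_forward persistent across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf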
 
Hi,
I have a performance problem with SSH on a guest.

I have a /27 RIPE block; here is the configuration:

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 188.40.81.214
netmask 255.255.255.255
gateway 188.40.81.193
pointopoint 188.40.81.193

auto vmbr0
iface vmbr0 inet static
address 188.40.209.222
netmask 255.255.255.224
# gateway 188.40.81.214
bridge_ports none
bridge_maxwait 0
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
bridge_ports dummy0
bridge_maxwait 0
bridge_stp off
bridge_fd 0


For example, I have created on the vmbr0 bridge a guest with the first IP of the RIPE subnet, 188.40.209.193.
The machine runs and routing to the public network works, but when I connect to this VM over SSH it is very slow...

I have installed a web server and the response is very, very slow...

As another test I created another guest, NATed behind the first guest with a private IP and a firewall, and it is just as slow...

Sorry for my English.
Thanks
 
Thank you very much for this document.
I applied the above settings on my Hetzner server, but I can't connect the guest machine to the internet.
Who can help me with my server configuration, and for how much, please?
 
