Hello Guys,
Sorry for the late response, I had a lot of work with the business over the last few days.
I think I am on my way to solving the problem.
I think the problem is the port bonding I added on Proxmox host 1. I configured the network as described at
http://pve.proxmox.com/wiki/Vlans, but apparently THIS IS NOT the best setup!
I have now done the following test:
I configured Proxmox host 2 with the following configuration:
# Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

auto eth3
iface eth3 inet manual

# VLAN 165
auto eth3.165
iface eth3.165 inet manual

# VLAN 240
auto eth3.240
iface eth3.240 inet manual

# Admin interface Proxmox
auto vmbr0
iface vmbr0 inet static
        address <IP>
        netmask 255.255.255.0
        gateway <Gateway>
        bridge_ports eth3.165
        bridge_stp off
        bridge_fd 0

auto vmbr240
iface vmbr240 inet static
        address 10.0.2.41
        netmask 255.255.255.252
        bridge_ports eth3.240
        bridge_stp off
        bridge_fd 0
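In case someone wants to double-check this kind of setup: the VLAN sub-interfaces and the bridge membership can be verified with standard iproute2 / bridge-utils commands (nothing Proxmox-specific here), for example:

# should print a "vlan ... id 240" line for the sub-interface
ip -d link show eth3.240

# shows which ports are attached to each bridge
brctl show vmbr0
brctl show vmbr240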
VLAN 240 is one of our VLANs. After this, I installed a VM on Proxmox host 2 in VLAN 240 and set up Apache serving a 10 GB file.
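For reference, the setup inside that VM was nothing special; roughly something like the following (the docroot path and file name here are only placeholders, not necessarily the exact ones I used):

apt-get install apache2
# create a ~10 GB test file in the web root
dd if=/dev/zero of=/var/www/testfile.bin bs=1M count=10240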
The results:
Download on host 1, directly on the Proxmox host itself (Debian):
31% [==========> ] 2,628,841,272 107M/s eta 43s
--- Do it baby! ---
Download on host 1, inside a Debian test VM with the same configuration as the one on Proxmox host 2:
3% [> ] 323.472.632 34,3M/s ETA 3m 24s
How shocking. That does not look good at all.
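To make sure the difference really comes from the network path and not from Apache or the disks, I will probably also run a raw throughput test with iperf3 (assuming it can be installed on both ends; <server-ip> is just a placeholder for the VM's VLAN 240 address):

# on the VM on host 2
iperf3 -s

# on host 1, and again inside the test VM on host 1
iperf3 -c <server-ip> -t 30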
The network configuration on Proxmox host 1 is:
# network interface settings
auto bond0.165
iface bond0.165 inet manual
        vlan-raw-device bond0

auto bond0.240
iface bond0.240 inet manual
        vlan-raw-device bond0

auto bond0.704
iface bond0.704 inet manual
        vlan-raw-device bond0

auto bond0.707
iface bond0.707 inet manual
        vlan-raw-device bond0

auto bond0.726
iface bond0.726 inet manual
        vlan-raw-device bond0

auto bond0.730
iface bond0.730 inet manual
        vlan-raw-device bond0

auto bond0.900
iface bond0.900 inet manual
        vlan-raw-device bond0

auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth3
        bond_miimon 100
        bond_mode balance-tlb

# MGMT interface
auto vmbr0
iface vmbr0 inet static
        address <IP>
        netmask 255.255.255.0
        gateway <Gateway>
        # bridge_ports eth0
        bridge_ports bond0.165
        bridge_stp off
        bridge_fd 0

# VM interfaces
auto vmbr240
iface vmbr240 inet static
        address 10.0.2.40
        netmask 255.255.255.252
        bridge_ports bond0.240
        bridge_stp off
        bridge_fd 0

auto vmbr704
iface vmbr704 inet static
        address 10.0.70.4
        netmask 255.255.255.252
        bridge_ports bond0.704
        bridge_stp off
        bridge_fd 0
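For completeness, the current state of the bond (active mode, MII status, slave list) can also be read at runtime from the kernel, independent of what is written in /etc/network/interfaces:

# active bonding mode, MII status and slave list
cat /proc/net/bonding/bond0

# VLAN details of a sub-interface on top of the bond
ip -d link show bond0.240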
I think this configuration is the problem, so over the next few days I will try to change it to the same configuration as on host 2. This will take some days because it is a production system with active customers on it.
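Roughly, the plan for host 1 is the same pattern as on host 2: bridges directly on the eth3.X VLAN sub-interfaces, with bond0 removed. A sketch with the addresses from above (the other VM bridges, vmbr704 etc., would follow the same pattern):

auto eth3
iface eth3 inet manual

auto eth3.165
iface eth3.165 inet manual

auto eth3.240
iface eth3.240 inet manual

auto vmbr0
iface vmbr0 inet static
        address <IP>
        netmask 255.255.255.0
        gateway <Gateway>
        bridge_ports eth3.165
        bridge_stp off
        bridge_fd 0

auto vmbr240
iface vmbr240 inet static
        address 10.0.2.40
        netmask 255.255.255.252
        bridge_ports eth3.240
        bridge_stp off
        bridge_fd 0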