Hello,
I have 4 Proxmox 3.1 nodes using 2 ZFS (on Linux) servers for storage.
The ZFS on Linux servers run Debian 7 with the latest kernel and updates. Each ZFS server has 4x1Gbit in a LACP LAG (802.3ad bond mode). The 4 interfaces all work together, making a total of 4Gbit possible if the 4 nodes hit the server at the same time.
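(For reference, the aggregation state and hash policy of a bond can be checked from /proc; I'm assuming the bond is called bond0 here, adjust to your setup:)

# shows 802.3ad aggregator info, slave state and the transmit hash policy
cat /proc/net/bonding/bond0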
I also configured all 4 Proxmox 3.1 nodes with a 2x1Gbit trunk (also 802.3ad bond mode). Somehow, incoming and outgoing traffic always goes to/from one interface. If I switch kernels on a Proxmox node, I get somewhat better results.
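To be clear about what I mean with the trunk, the bond section on the nodes looks roughly like the sketch below (interface names and the address are examples, not copied from my actual /etc/network/interfaces):

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11        # example address only
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0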
What I want is for 2 data connections to be split across both network interfaces. So 2 copy commands from Proxmox node 1, one to ZFS01 and one to ZFS02, should open 2 connections and reach a total of 2Gbit, right? Even better, I would like 2 copy commands from Proxmox node 1 to only ZFS01 to also use both network interfaces, but I'm not sure this is possible because the traffic goes to the same destination. The LACP hash policy is layer2 (the default) and the switch is configured for layer2 as well.
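From what I understand, with the layer2 policy the outgoing slave is picked from the source/destination MAC addresses, so two connections to the same destination will always hash onto the same 1Gbit link; a layer3+4 policy hashes on IP addresses and ports instead, so separate TCP connections could land on different slaves. Is changing the hash policy along these lines the right direction (bond0 is again just an example name)?

# in /etc/network/interfaces, under the bond0 stanza:
    bond-xmit-hash-policy layer3+4

# check what the bond is currently using:
grep "Transmit Hash Policy" /proc/net/bonding/bond0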
Please let me know how I can improve the network IO on the Proxmox nodes. At the moment, a single copy/stress/move/clone causes a complete IO blackout for the node. Because of this, all running VMs on that Proxmox node stop working.
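In case it helps, these are the kinds of checks I can run on a node during one of those copies to show where it stalls (iostat comes from the sysstat package):

# disk utilisation and await times, refreshed every 2 seconds
iostat -x 2
# run queue, blocked processes and swap activity
vmstat 2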
Setup:
ZFS01 and ZFS02: 4x1Gbit LACP/LAG, 802.3ad mode, layer2 hash, 3.2.0-4-amd64 kernel
Proxmox nodes: 2x1Gbit LACP/LAG, 802.3ad mode, layer2 hash, 2.6.32-26-pve kernel
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1