HP NIC with address space collision

Dec 19, 2012
Hi. We have:
Code:
pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-7
Proxmox shows this in dmesg:
Code:
pci 0000:04:06.0: address space collision: [mem 0xfe600000-0xfe63ffff pref] conflicts with PCI Bus 0000:04 [mem 0xfe600000-0xfe6fffff]
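For reference, the overlapping windows can be inspected like this (just the commands, no output pasted; the 04:06.0 address is taken from the dmesg line above):
Code:
# BAR assignments for the NIC function named in the dmesg message
lspci -vv -s 04:06.0 | grep -i region
# Physical memory map around the conflicting range
grep -i fe6 /proc/iomem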
We are using 3 NICs in our server (snippet):
Code:
lspci:
00:04.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RD890 PCI to PCI bridge (PCI express gpp port D)
04:06.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
(an HP NIC with two ports!)
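Since the 82546EB is a dual-port controller, the second port should show up as a separate PCI function (presumably 04:06.1); both functions can be listed with:
Code:
lspci -s 04:06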


More info about our server:
Code:
proxmox ~ # /usr/StorMan/arcconf getconfig 1
   Controller Status                        : Optimal
   Channel description                      : SAS/SATA
   Controller Model                         : Adaptec 6405E
[...]

# cat /proc/cpuinfo
[...]
vendor_id       : AuthenticAMD
cpu family      : 21
model name    : AMD FX(tm)-8120 Eight-Core Processor
cpu MHz         : 3100.264
cache size      : 2048 KB

free:            total       used       free     shared    buffers     cached
Mem:      16400148    4482556   11917592          0      45496      37564
[...]

and

Code:
> pveperf: 
CPU BOGOMIPS:      49604.16
REGEX/SECOND:      1006857
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    308.82 MB/sec
AVERAGE SEEK TIME: 7.96 ms
FSYNCS/SECOND:     1699.47
DNS EXT:           58.91 ms
DNS INT:           95.25 ms

My first thought was to add pci=nocrs to the kernel command line in /etc/default/grub, but it made no difference.
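We applied it roughly like this, followed by update-grub and a reboot (the rest of the GRUB_CMDLINE_LINUX_DEFAULT value is assumed to be the stock one):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nocrs"

# then:
update-grub
reboot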
Besides that, the network in our VM is slow ... very slow! The VM server offers PXE boot, but not with high performance :(
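To put a number on "slow", something like iperf should work (hypothetical placeholder address; iperf has to be installed on both ends):
Code:
# Inside the VM (server side)
iperf -s
# From another host on the same network (client side)
iperf -c <vm-ip> -t 30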
Any ideas what to do?
Thanks!