Intel Modular Server System MFSYS25 Installation Experiences

Petrus4

I am starting a new thread to document installing Proxmox VE on an Intel Modular Server System MFSYS25. The specs of the server can be found here.

So far I have tried to install with the latest proxmox-ve_1.5-4627-3.iso, but this failed. Here is the debug screen, which will hopefully help track down why it failed: debug_screen..jpg

Thanks to Mike (mjoconr), who has already been testing the MFSYS25 system, I got the idea to use the proxmox-ve_1.4-4390.iso; this worked with no problems. I then updated 1.4 to 1.5.

Here is the uname -a and pveversion -v output:

Code:
prox-flex1:~# uname -a
Linux prox-flex1 2.6.24-8-pve #1 SMP PREEMPT Fri Oct 16 11:17:55 CEST 2009 x86_64 GNU/Linux
prox-flex1:~# pveversion -v
pve-manager: 1.5-7 (pve-manager/1.5/4660)
running kernel: 2.6.24-8-pve
pve-kernel-2.6.24-8-pve: 2.6.24-16
qemu-server: 1.1-11
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-10
vncterm: 0.9-2
vzctl: 3.0.23-1pve8
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
I have also attached the dmesg log for those of you who may be interested.

I found two sections in the dmesg that seem to contain errors. I will research these, but if anyone knows a quick fix, please post here. :D

Code:
sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
end_request: I/O error, dev sdb, sector 0
sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
end_request: I/O error, dev sdb, sector 0
sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
end_request: I/O error, dev sdb, sector 0
sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
end_request: I/O error, dev sdb, sector 0
Monitor-Mwait will be used to enter C-1 state
Monitor-Mwait will be used to enter C-3 state
ACPI: CPU0 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU0] (supports 8 throttling states)
ACPI: CPU1 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU1] (supports 8 throttling states)
ACPI: CPU2 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU2] (supports 8 throttling states)
ACPI: CPU3 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU3] (supports 8 throttling states)
ACPI: CPU4 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU4] (supports 8 throttling states)
ACPI: CPU5 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU5] (supports 8 throttling states)
ACPI: CPU6 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU6] (supports 8 throttling states)
ACPI: CPU7 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU7] (supports 8 throttling states)
ACPI: CPU8 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU8] (supports 8 throttling states)
ACPI: CPU9 (power states: C1[C1] C2[C3])
ACPI: Processor [CPU9] (supports 8 throttling states)
ACPI: CPU10 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUA] (supports 8 throttling states)
ACPI: CPU11 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUB] (supports 8 throttling states)
ACPI: CPU12 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUC] (supports 8 throttling states)
ACPI: CPU13 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUD] (supports 8 throttling states)
ACPI: CPU14 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUE] (supports 8 throttling states)
ACPI: CPU15 (power states: C1[C1] C2[C3])
ACPI: Processor [CPUF] (supports 8 throttling states)
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
OK, that's all for now. I hope to post back with more findings soon.
 

Attachments

  • dmesg..zip
    8.7 KB
hopefully I can join soon in sharing experience here, still waiting for the test server (delivery is scheduled end of this month).

just a quick note on your pveversion -v results:

you did not install one of the new virtual packages:

Code:
apt-get install proxmox-ve-2.6.24
or
Code:
apt-get install proxmox-ve-2.6.18
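
for completeness, a quick way to pull the 2.6.24 variant in and confirm it shows up afterwards (assuming the standard pve repository from the ISO install is still configured):

Code:
apt-get update
apt-get install proxmox-ve-2.6.24
pveversion -v | grep proxmox-ve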
 
hopefully I can join soon in sharing experience here, still waiting for the test server (delivery is scheduled end of this month).

just a quick note on your pveversion -v results:

you did not install one of the new virtual packages:

Code:
apt-get install proxmox-ve-2.6.24
or
Code:
apt-get install proxmox-ve-2.6.18


Yes, I look forward to you guys also getting a flex server and getting Proxmox to work optimally on this system.

I installed the new virtual package: proxmox-ve-2.6.24.

Do you know what the issue is with the 1.5 ISO and why it is not installing?
 
not yet but I will test soon.
 
not yet but I will test soon.

Hi Tom,

I see you have been testing the flex server and have added install info on the wiki! Congratulations on finally getting the server!

I have also been testing, with good results! Currently I have one storage pool for both the VE installs and the storage for the KVMs. I am using 8 disks in the storage pool. This is probably not the best setup, but this is currently just a test.

Did you ever get the 1.5 ISO to work on this server, or did you also have to go with the 1.4 ISO and then upgrade?

I am using the 2.6.24 kernel, but I see you installed with 2.6.18. Are there any advantages to the 2.6.18 install?

Also, would you mind posting some of your performance tests here so we can compare?

Intel had recommended redundant storage controllers. I heard that if your storage controller goes down, it could take days to rebuild a new one. Do you have two storage controllers on your system? If you do, how did you configure the VE to make use of this redundancy?
 
Last edited:
hi,

we installed 1.5 from the ISO without any trouble (the 1.5 ISO uses 2.6.18). we have only one storage controller.
if you need more redundancy, think about getting a second IMS - it is cheaper than putting all the redundant parts in one IMS (just add up the prices of all the spare parts and compare) and you are more flexible. the only downside is that you also need double the rack space - it depends on your server/hosting environment.

which performance number do you want to compare?
 
hi,

we installed 1.5 from the ISO without any trouble (the 1.5 ISO uses 2.6.18).

Perhaps you used a newer version of the 1.5 iso? I could not get it to work at all.

What is the advantage of the 2.6.18 over 2.6.24?


we have only one storage controller.


if you need more redundancy, think about getting a second IMS - it is cheaper than putting all the redundant parts in one IMS (just add up the prices of all the spare parts and compare) and you are more flexible. the only downside is that you also need double the rack space - it depends on your server/hosting environment.


Our reseller recommended a second storage controller after Intel brought out an advisory that all new and high-availability systems should have this. The price, roughly 1800 USD, is still much less than a whole new separate server, even if you add a second switch.

Which spare parts are you adding up that it would equal the price of a new modular server?

I still need to figure out if I need to configure anything extra in the Proxmox-VE to allow failover to the redundant controller in case of storage controller failure. Any ideas?


which performance number do you want to compare?

Tom, could you recommend a series of tests? I have only run:
pveperf /var/lib/vz

Also, do you have any recommendations for utilities to test with inside a Windows environment? I am trying to see whether the virtio storage drivers are better than the IDE ones, and also to compare the virtio NIC with the e1000 NIC.
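
For the NIC comparison I am thinking of something simple like iperf between the guest and another machine, run once with the e1000 model and once with virtio (the server address below is just a placeholder):

Code:
iperf -s                        # on the machine acting as server
iperf -c 192.168.1.10 -t 60     # inside the Windows guest, once per NIC model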

It would be great to have something to compare to, to see if I have configured everything correctly.

Thanks!
 
Perhaps you used a newer version of the 1.5 iso? I could not get it to work at all.

What is the advantage of the 2.6.18 over 2.6.24?

I used the current stable ISO. 2.6.18 is the only stable OpenVZ kernel and has quite good driver support (it is more or less RHEL 5.4 based); development of OpenVZ for 2.6.24 has been frozen by the OpenVZ team.

Our reseller recommended a second storage controller after Intel brought out an advisory that all new and high-availability systems should have this. The price, roughly 1800 USD, is still much less than a whole new separate server, even if you add a second switch.
not really. a basic box includes the backplane for the disks, a management module, a switch, a storage controller and the power supplies; if you buy all of this separately it is more expensive. keep in mind that even with a second storage controller you still have one backplane and one management card.
so a second box is always the better solution here to eliminate single points of failure.

Which spare parts are you adding up that it would equal the price of a new modular server?

I still need to figure out if I need to configure anything extra in the Proxmox-VE to allow failover to the redundant controller in case of storage controller failure. Any ideas?

we have no second storage controller here, so we did not test this, but I assume it will not work out of the box. If you use a second IMS, you can think about replicating the storage via DRBD (only for KVM VMs) - there is currently only a 1 Gbit network in the IMS; the upcoming version of the IMS will have 10 Gbit.
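
for anyone wondering what that could look like, a minimal sketch of a DRBD resource between two modules (the second hostname, the backing disks and the addresses are just placeholders):

Code:
resource r0 {
    protocol C;
    on prox-flex1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;        # local backing device, placeholder
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on prox-flex2 {                 # second IMS module, placeholder
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}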

Tom, could you recommend a series of tests? I have only run

Also, do you have any recommendations for utilities to test with inside a Windows environment? I am trying to see whether the virtio storage drivers are better than the IDE ones, and also to compare the virtio NIC with the e1000 NIC.

It would be great to have something to compare to, to see if I have configured everything correctly.

Thanks!

nothing IMS-specific, just follow the basic recommendations.
 
hi,

we installed 1.5 from the ISO without any trouble (the 1.5 ISO uses 2.6.18). we have only one storage controller.
if you need more redundancy, think about getting a second IMS - it is cheaper than putting all the redundant parts in one IMS (just add up the prices of all the spare parts and compare) and you are more flexible. the only downside is that you also need double the rack space - it depends on your server/hosting environment.

which performance number do you want to compare?

Hi Tom,

I just got some more time to work on the modular server and did a complete re-installation.

What I discovered was this: the problem with the 1.5 ISO not installing was due to having two storage controllers. If the installation LUN was set to use storage controller 2, the 1.5 ISO would fail; when I switched to storage controller 1, the installation would succeed. When I used the 1.4 ISO it installed with either storage controller 1 or 2 (although I have not recently confirmed this). It may be that the 2.6.24 kernel supports dual storage controllers and the 2.6.18 does not?

Another side effect of having two storage controllers is that the following shows up in dmesg as well as the system logs (see below). I believe this is due to the second storage controller and the fact that multipath is not configured on the machine. (NOTE: this server module has three disks assigned to it: sda, sdb and sdc, so sdd, sde and sdf must be the same disks seen via the second storage controller.)

Code:
sdd:end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
sde:end_request: I/O error, dev sde, sector 0
Buffer I/O error on device sde, logical block 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
sdf:end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdd, sector 0
end_request: I/O error, dev sdd, sector 0
end_request: I/O error, dev sdd, sector 314572672
end_request: I/O error, dev sdd, sector 314572784
end_request: I/O error, dev sdd, sector 0
end_request: I/O error, dev sdd, sector 8
end_request: I/O error, dev sdd, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 1167966720
end_request: I/O error, dev sde, sector 1167966848
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sde, sector 8
end_request: I/O error, dev sde, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 1167966720
end_request: I/O error, dev sdf, sector 1167966848
end_request: I/O error, dev sdf, sector 0
end_request: I/O error, dev sdf, sector 8
end_request: I/O error, dev sdf, sector 0
(... the same end_request I/O errors for sdd, sde and sdf repeat many more times ...)
 
Another side effect of having two storage controllers is that the following shows up in dmesg as well as the system logs (see below). I believe this is due to the second storage controller and the fact that multipath is not configured on the machine. (NOTE: this server module has three disks assigned to it: sda, sdb and sdc, so sdd, sde and sdf must be the same disks seen via the second storage controller.)

Indeed, this means that "something" is trying to access the disks through the actual devices sdx. These messages are greatly reduced when multipath is installed and used.
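
A quick way to confirm that is to compare the SCSI WWIDs of the devices; if sdd reports the same ID as sda, it is the same disk seen through the second controller. The scsi_id path and options differ between udev versions (older releases want "scsi_id -g -u -s /block/sda" instead), so adjust as needed:

Code:
for d in sda sdb sdc sdd sde sdf; do
    echo -n "$d: "
    /lib/udev/scsi_id -g -u -d "/dev/$d"
done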
 
Indeed, this means that "something" is trying to access the disks through the actual devices sdx. These messages are greatly reduced when multipath is installed and used.

DRD, do you have any experience configuring multipath on this server with Proxmox VE? If so, are you willing to share your experiences?
 
DRD, do you have any experience configuring multipath on this server with Proxmox VE? If so, are you willing to share your experiences?
I had the chance to try out a demo system (I no longer have access to it, and I don't have a production system yet either). I posted my experiences in these two posts from the end of April:

* http://forum.proxmox.com/threads/27...ached-shared-storage-system?p=21265#post21265
* http://forum.proxmox.com/threads/27...ached-shared-storage-system?p=21310#post21310

In short:

I used the procedure on Intel's support website for SLES 11 (http://download.intel.com/support/motherboards/server/sb/sles11_mpio_setup_bkm_v2.pdf) as a basis, as well as their MPIO configuration file sample for SUSE 11.

One difference: instead of installing the multipath tools from RPM, I did:

Code:
apt-get install multipath-tools multipath-tools-boot

As part of this it tried to run update-initramfs, which failed with the suggestion to run it with the -t option. I manually ran:

Code:
update-initramfs -c -t -k 2.6.18-2-pve
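
To double-check that the multipath pieces actually ended up in the new initramfs, something along these lines works (the image name matches the kernel above):

Code:
zcat /boot/initrd.img-2.6.18-2-pve | cpio -t | grep -i multipath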

In /boot/grub/device.map I put (depending on the name you chose, here "system"):

Code:
(hd0)   /dev/mapper/system

In /etc/lvm/lvm.conf, I changed:

Code:
filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]
types = [ "device-mapper", 1 ]
preferred_names = [ "^/dev/mpath/", "^/dev/mapper/", "^/dev/disk/by-id/", "^/dev/[hs]d", "^/dev/dm" ]

I also changed /etc/fstab to use the /dev/mapper entries instead of /dev/sda. I used the names with the IDs in them, not the human-readable names.

GRUB and LVM turned out to correctly use the multipath devices after the reboot.
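
A quick way to verify the result after the reboot (the "system" name comes from the multipath alias configured earlier):

Code:
multipath -ll          # each LUN should appear once, with both controller paths listed under it
ls -l /dev/mapper/     # the aliased devices and their -partN entries should show up here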
 
A quick question:
In /boot/grub/device.map I put (depending on the name you chose, here "system"):
Code:
(hd0)   /dev/mapper/system

How are we meant to set the device name? Via the multipaths section in the /etc/multipath.conf configuration file, like this?

Code:
multipaths {
        multipath {
                wwid            222ef0001555ab385
                alias           system
        }
}
Will this work for booting, or do you set the device name somewhere else?
 
How are we meant to set the device name? Via the multipaths section in the /etc/multipath.conf configuration file, like this?
<snip>
Will this work for booting, or do you set the device name somewhere else?
That's where I set it, and that is apparently sufficient for booting.
 
That's where I set it, and that is apparently sufficient for booting.

Great, that worked out for me too for booting. During the boot process both device aliases are created.

I had to use the /dev/mapper/system-part1 alias in GRUB's menu.lst, though, to specify the root file system with the kernel parameter (and, analogously, the /dev/mapper/database-part1 alias in /etc/fstab) to have the system boot successfully. When using the /dev/disk/by-id/... symlink with GRUB instead (as proposed in Intel's setup instructions for SLES and RHEL), the boot process failed as soon as Debian tried to mount the system partition:

mount: Mounting /dev/disk/by-id/... on / failed: Device or resource busy

This happens because the device mapper starts beforehand; then three devices exist that point to the physical device with this ID, and Debian obviously tries to mount the first one it finds, which is the "wrong" one and cannot be claimed again.
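
To make that concrete, the relevant lines look roughly like this (the kernel version and the mount point for the "database" LUN are only examples; adjust to your own layout):

Code:
# /boot/grub/menu.lst
root    (hd0,0)
kernel  /vmlinuz-2.6.18-2-pve root=/dev/mapper/system-part1 ro
initrd  /initrd.img-2.6.18-2-pve

# /etc/fstab
/dev/mapper/database-part1  /var/lib/vz  ext3  defaults  0  1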

Thanks again for your help!
 
Hi everybody, we're also testing a modular server with the following specs:

One storage controller
One computing module with a Xeon 5620 CPU
2x 300 GB Seagate Savvio 10K RPM SAS drives in RAID 1

My pveperf results are below:

Code:
CPU BOGOMIPS: 38308.29
REGEX/SECOND: 871689
HD SIZE: 23.62 GB (/dev/pve/root)
BUFFERED READS: 94.08 MB/sec
AVERAGE SEEK TIME: 5.68 ms
FSYNCS/SECOND: 761.66
DNS EXT: 14.89 ms
DNS INT: 4.41 ms

Could you please share your results with your configurations?

Regards
Gokalp
 
Hi,

I have been trying to follow your instructions on multipath on Debian Lenny and the Intel Modular Server, and I accidentally rebooted before I finished the configuration. I am now unable to boot up the system. I am hoping someone can help me fix this problem. I have tried

Right now I get these errors when booting:

/dev/pve/data: clean <--- not an error

fsck.ext3: Device or Resource busy while trying to open /dev/sda1

fsck.ext3: Device or Resource busy while trying to open /dev/sdc1



Here is my /boot/grub/menu.lst
Code:
root    (hd0,0)
kernel /vmlinuz-2.6.32.7-pve root=/dev/mapper/pve-root ro
initrd    /initrd.img-2.6.32-7-amd64

Here is my /etc/fstab (I had to type all the info in by hand, so I skipped the UUIDs):


Code:
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID="number string" /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID="other number string" /srv ext3 defaults 0 2




 
