Intel Modular Server System MFSYS25 Installation Experiences

Discussion in 'Proxmox VE 1.x: Installation and configuration' started by Petrus4, Feb 21, 2010.

  1. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    I am starting a new thread to document installing Proxmox VE on an Intel Modular Server System MFSYS25. The specs of the server can be found here.

    So far I have tried to install with the latest proxmox-ve_1.5-4627-3.iso, but this failed. Here is the debug screen, which will hopefully help track down why it failed: debug_screen..jpg

    Thanks to Mike (mjoconr), who has already been testing the MFSYS25 system, I got the idea to use the proxmox-ve_1.4-4390.iso; this worked with no problems. I then upgraded 1.4 to 1.5.
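
    For anyone repeating this, the upgrade itself was just the standard package update (a sketch; it assumes the pve repository entry that the installer sets up in /etc/apt/sources.list):

    Code:
    apt-get update
    apt-get dist-upgrade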

    Here is the uname -a and pveversion -v output:

    Code:
    prox-flex1:~# uname -a
    Linux prox-flex1 2.6.24-8-pve #1 SMP PREEMPT Fri Oct 16 11:17:55 CEST 2009 x86_64 GNU/Linux
    prox-flex1:~# pveversion -v
    pve-manager: 1.5-7 (pve-manager/1.5/4660)
    running kernel: 2.6.24-8-pve
    pve-kernel-2.6.24-8-pve: 2.6.24-16
    qemu-server: 1.1-11
    pve-firmware: 1.0-3
    libpve-storage-perl: 1.0-10
    vncterm: 0.9-2
    vzctl: 3.0.23-1pve8
    vzdump: 1.2-5
    vzprocps: 2.0.11-1dso2
    vzquota: 3.0.11-1
    
    I have also attached the dmesg log for those of you who may be interested.

    I found two sections in the dmesg that seem to contain errors. I will research these, but if anyone knows a quick fix, please post here. :D

    Code:
    sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
    sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
    end_request: I/O error, dev sdb, sector 0
    sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
    sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
    end_request: I/O error, dev sdb, sector 0
    sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
    sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
    end_request: I/O error, dev sdb, sector 0
    sd 0:0:1:0: [sdb] Device not ready: Sense Key : Not Ready [current]
    sd 0:0:1:0: [sdb] Device not ready: Add. Sense: Logical unit not ready, manual intervention required
    end_request: I/O error, dev sdb, sector 0
    Monitor-Mwait will be used to enter C-1 state
    Monitor-Mwait will be used to enter C-3 state
    ACPI: CPU0 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU0] (supports 8 throttling states)
    ACPI: CPU1 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU1] (supports 8 throttling states)
    ACPI: CPU2 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU2] (supports 8 throttling states)
    ACPI: CPU3 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU3] (supports 8 throttling states)
    ACPI: CPU4 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU4] (supports 8 throttling states)
    ACPI: CPU5 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU5] (supports 8 throttling states)
    ACPI: CPU6 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU6] (supports 8 throttling states)
    ACPI: CPU7 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU7] (supports 8 throttling states)
    ACPI: CPU8 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU8] (supports 8 throttling states)
    ACPI: CPU9 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPU9] (supports 8 throttling states)
    ACPI: CPU10 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUA] (supports 8 throttling states)
    ACPI: CPU11 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUB] (supports 8 throttling states)
    ACPI: CPU12 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUC] (supports 8 throttling states)
    ACPI: CPU13 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUD] (supports 8 throttling states)
    ACPI: CPU14 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUE] (supports 8 throttling states)
    ACPI: CPU15 (power states: C1[C1] C2[C3])
    ACPI: Processor [CPUF] (supports 8 throttling states)
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    ACPI Exception (processor_core-0824): AE_NOT_FOUND, Processor Device is not present [20070126]
    
    OK, that's all for now. I hope to post back with more findings soon.
     


  2. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,103
    Likes Received:
    55
    Hopefully I can join in sharing experiences here soon; I am still waiting for the test server (delivery is scheduled for the end of this month).

    Just a quick note on your pveversion -v results:

    You did not install one of the new virtual packages:

    Code:
    apt-get install proxmox-ve-2.6.24
    or
    Code:
    apt-get install proxmox-ve-2.6.18
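
    To verify afterwards, something like this should list the installed meta package (a generic dpkg check, nothing Proxmox-specific):

    Code:
    dpkg -l 'proxmox-ve-*' | grep ^ii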
     
  3. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0

    Yes, I look forward to you guys getting a flex server too and getting Proxmox to work optimally on this system.

    I installed the new virtual package: proxmox-ve-2.6.24.

    Do you know what the issue is with the 1.5 ISO and why it is not installing?
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,103
    Likes Received:
    55
    Not yet, but I will test soon.
     
  5. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    Hi Tom,

    I see you have been testing the flex server and have added install info to the wiki! Congratulations on finally getting the server!

    I have been testing as well, with good results! Currently I have one storage pool both for the VE installs and for the KVM guest storage. I am using 8 disks in the storage pool. This is probably not the best setup, but it is currently just a test.

    Did you ever get the 1.5 ISO to work on this server, or did you also have to go with the 1.4 ISO and then upgrade?

    I am using the 2.6.24 kernel, but I see you installed with 2.6.18. Are there any advantages to the 2.6.18 install?

    Also, would you mind posting some of your performance tests here so we can compare?

    Intel has recommended redundant storage controllers. I have heard that if your storage controller goes down, it could take days to rebuild a new one. Do you have two storage controllers on your system? If so, how did you configure the VE to make use of this redundancy?
     
    #5 Petrus4, Mar 30, 2010
    Last edited: Mar 30, 2010
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,103
    Likes Received:
    55
    hi,

    We installed 1.5 from the ISO without any trouble (the 1.5 ISO uses 2.6.18). We have only one storage controller.
    If you need more redundancy, think about getting a second IMS; it is cheaper than putting all the redundant parts into one IMS (just add up the prices of all the spare parts and compare), and you are more flexible. The only downside is that you need double the rack space, which may or may not matter depending on your server/hosting environment.

    Which performance numbers do you want to compare?
     
  7. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    Perhaps you used a newer version of the 1.5 ISO? I could not get it to work at all.

    What is the advantage of 2.6.18 over 2.6.24?



    Our reseller recommended a second storage controller after Intel put out an advisory that all new and high-availability systems should have one. The price (roughly 1,800 USD) is still much less than a whole new separate server, even if you add a second switch.

    Which spare parts are you adding up that would equal the price of a new modular server?

    I still need to figure out whether I have to configure anything extra in Proxmox VE to allow failover to the redundant controller in case of a storage controller failure. Any ideas?


    Tom, could you recommend a series of tests? I have only run ...
    Also, do you have any recommendations for utilities to test with inside a Windows environment? I am trying to see whether the virtio storage drivers are better than the IDE ones, and I am also comparing the virtio NIC with the e1000 NIC.

    It would be great to have something to compare against, to see whether I have configured everything correctly.

    Thanks!
     
  8. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    11,103
    Likes Received:
    55
    I used the current stable ISO. 2.6.18 is the only stable OpenVZ kernel and has quite good driver support (it is more or less RHEL 5.4 based); development of OpenVZ for 2.6.24 has been frozen by the OpenVZ team.

    Not really. A basic box includes the backplane for the disks, a management module, a switch, a storage controller, and the power supplies; if you buy all of this separately, it is more expensive. Keep in mind that even with a second storage controller, you still have only one backplane and one management card.
    So a second box is always the better solution here for eliminating single points of failure.

    We have no second storage controller here, so we have not tested this, but I assume it will not work out of the box. If you use a second IMS, you can think about replicating the storage via DRBD (only for KVM VMs). There is currently only a 1 Gbit network in the IMS; the upcoming version will have 10 Gbit.
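
    For illustration, a minimal DRBD resource definition for such a two-box setup could look like this (a sketch only; the hostnames, IPs, and backing disk are made up, and the syntax is DRBD 8.x):

    Code:
    resource r0 {
        protocol C;
        # one stanza per IMS box; names must match the output of uname -n
        on ims1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on ims2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }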

    Nothing IMS-specific; just follow the basic recommendations.
     
  9. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    Hi Tom,

    I just got some more time to work on the modular server and did a complete re-installation.

    What I discovered was this: the problem with the 1.5 ISO not installing was due to having two storage controllers. If the installation LUN was set to use storage controller 2, the 1.5 ISO would fail; when I switched to storage controller 1, the installation would succeed. When I used the 1.4 ISO, it installed with either storage controller 1 or 2 (although I have not recently confirmed this). It may be that the 2.6.24 kernel supports dual storage controllers and 2.6.18 does not?

    Another side effect of having two storage controllers is that the following shows up in dmesg as well as in the system logs (see below). I believe this is due to the second storage controller and the fact that multipath is not configured on the machine. (Note: this server module has three disks assigned to it, sda, sdb, and sdc, so sdd, sde, and sdf must be the same disks seen via the second storage controller.)
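
    If it helps anyone checking the same thing, I believe the duplicate paths can be confirmed by comparing the SCSI ID of each device, along these lines (an untested sketch; the -s syntax is the one used by the Lenny-era scsi_id):

    Code:
    # print the SCSI ID for each disk; matching IDs mean the same LUN seen over two paths
    for d in sda sdb sdc sdd sde sdf; do
        echo -n "$d: "
        /sbin/scsi_id -g -u -s /block/$d
    done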

     
  10. drd

    drd Member

    Joined:
    Apr 6, 2010
    Messages:
    31
    Likes Received:
    0
    Indeed, this means that "something" is trying to access the disks through the raw sdX devices. These messages are greatly reduced once multipath is installed and used.
     
  11. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    drd, do you have any experience configuring multipath on this server under Proxmox VE? If so, are you willing to share your experiences?
     
  12. drd

    drd Member

    Joined:
    Apr 6, 2010
    Messages:
    31
    Likes Received:
    0
    I had the chance to try out a demo system (I no longer have access to it, and I don't have a production system yet). I posted my experiences in these two posts from the end of April:

    * http://forum.proxmox.com/threads/27...ached-shared-storage-system?p=21265#post21265
    * http://forum.proxmox.com/threads/27...ached-shared-storage-system?p=21310#post21310

    In short:

    I used the procedure from Intel's support website for SLES 11 (http://download.intel.com/support/motherboards/server/sb/sles11_mpio_setup_bkm_v2.pdf) as a basis, as well as their sample MPIO configuration file for SUSE 11.

    One difference: instead of installing the multipath tools from RPM, I ran:

    Code:
    apt-get install multipath-tools multipath-tools-boot
    As part of this, it tried to run update-initramfs, which failed with the suggestion to run it with the -t option. I manually ran:

    Code:
    update-initramfs -c -t -k 2.6.18-2-pve
    In /boot/grub/device.map I put the following (depending on the alias you chose; here it is "system"):

    Code:
    (hd0)   /dev/mapper/system
    In /etc/lvm/lvm.conf, I changed:

    Code:
    filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]
    types = [ "device-mapper", 1 ]
    preferred_names = [ "^/dev/mpath/", "^/dev/mapper/", "^/dev/disk/by-id/", "^/dev/[hs]d", "^/dev/dm" ]
    I also changed /etc/fstab to use the /dev/mapper entries instead of /dev/sda. I used the names with the IDs in them, not the human-readable names.
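
    As an illustration, the entries end up in this shape (the WWID and mount point here are made up; take the real names from /dev/mapper once multipath is running):

    Code:
    # multipath device by WWID instead of /dev/sda1 (example WWID, yours will differ)
    /dev/mapper/222ef0001555ab385-part1  /boot  ext3  defaults  0  1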

    GRUB and LVM turned out to use the multipath devices correctly after reboot.
     
  13. bittner

    bittner New Member

    Joined:
    Jul 28, 2010
    Messages:
    7
    Likes Received:
    0
    A quick question:
    How do you set the device name? Via the multipath section in the /etc/multipath.conf configuration file, like this?

    Code:
    multipaths {
            multipath {
                    wwid            222ef0001555ab385
                    alias           system
            }
    }
    Will this work for booting, or do you set the device name somewhere else?
     
  14. drd

    drd Member

    Joined:
    Apr 6, 2010
    Messages:
    31
    Likes Received:
    0
    That's where I set it, and that is apparently sufficient for booting.
     
  15. bittner

    bittner New Member

    Joined:
    Jul 28, 2010
    Messages:
    7
    Likes Received:
    0
    Great, that worked for booting for me too. During the boot process, both device aliases are created.

    I had to use the /dev/mapper/system-part1 alias in GRUB's menu.lst, though, to specify the root file system with the kernel parameter (and, analogously, the /dev/mapper/database-part1 alias in /etc/fstab) to have the system boot successfully. When using the /dev/disk/by-id/... symlink with GRUB instead (as proposed in Intel's setup instructions for SLES and RHEL), the boot process failed as soon as Debian tried to mount the system partition.

    This happens because the device mapper starts beforehand; then three devices exist that point to the physical device with this ID, and Debian apparently tries to mount the first one it finds, which is the "wrong" one and cannot be claimed again.
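
    For reference, the working menu.lst entry looked essentially like this (a sketch; the kernel version is the 2.6.18-2-pve one mentioned above, so adjust it to whatever is installed):

    Code:
    root   (hd0,0)
    kernel /vmlinuz-2.6.18-2-pve root=/dev/mapper/system-part1 ro
    initrd /initrd.img-2.6.18-2-pve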

    Thanks again for your help!
     
  16. drd

    drd Member

    Joined:
    Apr 6, 2010
    Messages:
    31
    Likes Received:
    0
    And thank you for your feedback and your own observations.
     
  17. bittner

    bittner New Member

    Joined:
    Jul 28, 2010
    Messages:
    7
    Likes Received:
    0
  18. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    #18 Petrus4, Aug 16, 2010
    Last edited: Aug 16, 2010
  19. gcakici

    gcakici New Member

    Joined:
    Sep 26, 2009
    Messages:
    15
    Likes Received:
    0
    Hi everybody, we're also testing a modular server with the following specs:

    One storage controller
    One compute module with a Xeon 5620 CPU
    2x 300 GB Seagate Savvio 10K RPM SAS drives in RAID 1

    My pveperf result is below:

    Code:
    CPU BOGOMIPS:      38308.29
    REGEX/SECOND:      871689
    HD SIZE:           23.62 GB (/dev/pve/root)
    BUFFERED READS:    94.08 MB/sec
    AVERAGE SEEK TIME: 5.68 ms
    FSYNCS/SECOND:     761.66
    DNS EXT:           14.89 ms
    DNS INT:           4.41 ms
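
    By the way, pveperf tests the root volume by default; it also accepts a path, so the VM storage can be measured separately:

    Code:
    pveperf /var/lib/vz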

    Could you please share your results with your configurations?

    Regards
    Gokalp
     
  20. Petrus4

    Petrus4 Member

    Joined:
    Feb 18, 2009
    Messages:
    249
    Likes Received:
    0
    Hi,

    I have been trying to follow your instructions from Multipath on Debian Lenny and Intel Modular Server, and I accidentally rebooted before I finished the configuration. I am now unable to boot the system. I am hoping someone can help me fix this problem. I have tried ...

    Right now I get these errors when booting:

    Code:
    /dev/pve/data: clean   <--- not an error
    fsck.ext3: Device or resource busy while trying to open /dev/sda1
    fsck.ext3: Device or resource busy while trying to open /dev/sdc1



    Here is my /boot/grub/menu.lst:
    Code:
    
    root    (hd0,0)
    kernel /vmlinuz-2.6.32.7-pve root=/dev/mapper/pve-root ro
    initrd    /initrd.img-2.6.32-7-amd64
    
    Here is my /etc/fstab (I had to type all of this in by hand, so I skipped the UUIDs):


    Code:
    
    /dev/pve/root / ext3 errors=remount-ro 0 1
    /dev/pve/data /var/lib/vz ext3 defaults 0 1
    UUID="number string" /boot ext3 defaults 0 1
    /dev/pve/swap none swap sw 0 0
    proc /proc proc defaults 0 0
    UUID="other number string" /srv ext3 defaults 0 2
    




     
