KVM 1.1 and new Kernel

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jul 25, 2012.

  1. martin

    martin Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    627
    Likes Received:
    297
    We just moved a bunch of new packages to our stable repository, including latest stable OpenVZ Kernel (042stab057.1), stable KVM 1.1.1, new cluster package, bug fixes and a lot of code cleanups.

    Additionally, we added the first packages to support two distributed storage technologies: Ceph (client) and Sheepdog. Both technologies look promising, but note that neither is ready for production use yet.

    Important note for HA setups:
    the redhat-cluster-pve package ships new defaults and you need to accept them - answer with Y when prompted. As soon as the installation is finished, you need to enable fencing again; see http://pve.proxmox.com/wiki/Fencing#Enable_fencing_on_all_nodes

    After the upgrade (aptitude update && aptitude full-upgrade) and a reboot, your 'pveversion -v' should look like this:
    Code:
    pveversion -v
    
    
    pve-manager: 2.1-12 (pve-manager/2.1/be112d89)
    running kernel: 2.6.32-13-pve
    proxmox-ve-2.6.32: 2.1-72
    pve-kernel-2.6.32-13-pve: 2.6.32-72
    lvm2: 2.02.95-1pve2
    clvm: 2.02.95-1pve2
    corosync-pve: 1.4.3-1
    openais-pve: 1.1.4-2
    libqb: 0.10.1-2
    redhat-cluster-pve: 3.1.92-2
    resource-agents-pve: 3.9.2-3
    fence-agents-pve: 3.1.8-1
    pve-cluster: 1.0-27
    qemu-server: 2.0-45
    pve-firmware: 1.0-17
    libpve-common-perl: 1.0-28
    libpve-access-control: 1.0-24
    libpve-storage-perl: 2.0-26
    vncterm: 1.0-2
    vzctl: 3.0.30-2pve5
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 1.1-6
    ksm-control-daemon: 1.1-1
    
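    The upgrade steps described above can be sketched as a command sequence (a sketch only; the fencing step assumes fence_tool as per the linked wiki page):

```shell
# Update package lists and apply the full upgrade; answer "Y" when
# asked to accept the new redhat-cluster-pve defaults:
aptitude update && aptitude full-upgrade

# For HA setups, re-enable fencing on every node afterwards (see the
# Fencing wiki page linked above), e.g. by rejoining the fence domain:
fence_tool join

# Reboot into the new kernel, then verify the package versions:
pveversion -v
```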
     
  2. bd5hty

    bd5hty Member

    Joined:
    Dec 27, 2011
    Messages:
    99
    Likes Received:
    0
    Great! Thanks to the Proxmox team for their hard work!
     
  3. Chris Rivera

    Chris Rivera Member

    Joined:
    Jul 25, 2012
    Messages:
    197
    Likes Received:
    0
  4. SectorNine50

    SectorNine50 New Member

    Joined:
    Jul 13, 2012
    Messages:
    9
    Likes Received:
    0
    As a heads up to those with Windows 2003 VMs: it appears that the update causes Windows Activation to think that the "hardware changed significantly." I had to call Microsoft to get a new activation key; not a big deal, but good to know beforehand.
     
  5. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,338
    Likes Received:
    375
    Only if you come from a quite old KVM - it should not happen if you upgrade from 1.0.

    What CPU type do you use for your Windows guest? Maybe you should choose one that does not change with every upgrade of KVM.
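    One way to do that, sketched against a hypothetical VM config (the cpu option and VMID 101 are assumptions; check that your qemu-server version supports it):

```
# /etc/pve/qemu-server/101.conf (excerpt; VMID 101 is illustrative)
# "host" passes the host CPU model through unchanged; a fixed named
# model likewise avoids the default QEMU CPU, whose details can
# change between KVM releases.
cpu: host
```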
     
  6. m.ardito

    m.ardito Active Member

    Joined:
    Feb 17, 2010
    Messages:
    1,473
    Likes Received:
    12
    @martin there is a copy/paste orphan on the news page: a "Proxmox VE 2.0 final release!" that should be removed...

    @tom <<Maybe you should choose one which is not changing by every upgrade of KVM.>> Which ones? Or rather, which CPU types are likely to change with KVM?

    Marco
     
  7. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,338
    Likes Received:
    375
    which news page?

    e.g. 'host'
     
  8. m.ardito

    m.ardito Active Member

    Joined:
    Feb 17, 2010
    Messages:
    1,473
    Likes Received:
    12
  9. rahman

    rahman Member

    Joined:
    Nov 1, 2010
    Messages:
    58
    Likes Received:
    0
    Hi,

    The latest update seems to have broken our cluster setup. On all nodes cman stops working. I tried to start it manually on each node:

    root@kvm45:~# /etc/init.d/cman start
    Starting cluster:
    Checking if cluster has been disabled at boot... [ OK ]
    Checking Network Manager... [ OK ]
    Global setup... [ OK ]
    Loading kernel modules... [ OK ]
    Mounting configfs... [ OK ]
    Starting cman... [ OK ]
    Waiting for quorum... [ OK ]
    Starting fenced... [ OK ]
    Starting dlm_controld... [ OK ]
    Unfencing self... [ OK ]

    But it stops again within a few seconds.

    root@kvm45:~# pveversion -v
    pve-manager: 2.1-12 (pve-manager/2.1/be112d89)
    running kernel: 2.6.32-13-pve
    proxmox-ve-2.6.32: 2.1-72
    pve-kernel-2.6.32-11-pve: 2.6.32-66
    pve-kernel-2.6.32-13-pve: 2.6.32-72
    lvm2: 2.02.95-1pve2
    clvm: 2.02.95-1pve2
    corosync-pve: 1.4.3-1
    openais-pve: 1.1.4-2
    libqb: 0.10.1-2
    redhat-cluster-pve: 3.1.92-2
    resource-agents-pve: 3.9.2-3
    fence-agents-pve: 3.1.8-1
    pve-cluster: 1.0-27
    qemu-server: 2.0-45
    pve-firmware: 1.0-17
    libpve-common-perl: 1.0-28
    libpve-access-control: 1.0-24
    libpve-storage-perl: 2.0-27
    vncterm: 1.0-2
    vzctl: 3.0.30-2pve5
    vzprocps: 2.0.11-2
    vzquota: 3.0.12-3
    pve-qemu-kvm: 1.1-6
    ksm-control-daemon: 1.1-1


    Edit: I can open each node's web admin one by one and start the VMs from there. But in each node's web admin, the other nodes appear offline.

    Edit 2: I get these messages on the nodes:
    Jul 26 13:17:19 corosync [CMAN ] Activity suspended on this node
    Jul 26 13:17:19 corosync [CMAN ] Error reloading the configuration, will retry every second
    Jul 26 13:17:20 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration


    Jul 26 13:17:20 corosync [CMAN ] Can't get updated config version 6: New configuration version has to be newer than current running configuration
    .
    Jul 26 13:17:20 corosync [CMAN ] Activity suspended on this node
    Jul 26 13:17:20 corosync [CMAN ] Error reloading the configuration, will retry every second
    Jul 26 13:17:21 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration


    Jul 26 13:17:21 corosync [CMAN ] Can't get updated config version 6: New configuration version has to be newer than current running configuration
    .
    Jul 26 13:17:21 corosync [CMAN ] Activity suspended on this node
    Jul 26 13:17:21 corosync [CMAN ] Error reloading the configuration, will retry every second



    How can I fix this?
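    For what it's worth, the "New configuration version has to be newer" messages usually mean the config_version attribute in /etc/pve/cluster.conf is not higher than the version corosync is already running. A hedged sketch of the relevant fragment (cluster name and numbers are illustrative):

```
<?xml version="1.0"?>
<!-- /etc/pve/cluster.conf (excerpt): raise config_version above the
     currently running version (6 in the log above), then reload cman. -->
<cluster name="mycluster" config_version="7">
  <!-- ... nodes, fencing, etc. unchanged ... -->
</cluster>
```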
     
    #9 rahman, Jul 26, 2012
    Last edited: Jul 26, 2012
  10. JonB

    JonB Member

    Joined:
    Nov 28, 2011
    Messages:
    30
    Likes Received:
    0
    My cman is running, but the resource group manager is not, and it will not start. See http://forum.proxmox.com/threads/10425-rgmanager-group-join-failed-1-1 for a thread about that problem.
     
    #10 JonB, Jul 26, 2012
    Last edited: Jul 26, 2012
  11. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,338
    Likes Received:
    375
    Please open a new thread for problems.
     
  12. SectorNine50

    SectorNine50 New Member

    Joined:
    Jul 13, 2012
    Messages:
    9
    Likes Received:
    0
    Hi Tom,
    I'm new to Proxmox, so I might've missed the memo on which CPU to use for Windows deployments. I used the default QEMU CPU. However, I was only one version back, so I'm not sure why this happened. It also only happened to my 2003 VM; my 2008 R2 one was fine.
     
  13. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,243
    Likes Received:
    121
  14. kobuki

    kobuki Member

    Joined:
    Dec 30, 2008
    Messages:
    457
    Likes Received:
    21
    Cool, thanks. Time to upgrade my node with my dirty little hack.
     
  15. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,466
    Likes Received:
    94
    Hi Martin,

    Upgraded today from pve-12 to pve-13. Using the instructions given in this thread, everything works like a charm - HA and everything. I have only discovered one annoyance, which I also noticed upgrading from pve-11 to pve-12: pve-headers are not automatically upgraded, which is not so good since I have some kernel modules maintained via DKMS. If you don't remember to upgrade the pve-headers before running aptitude upgrade, these kernel modules will not be compiled for the new kernel.

    Michael.
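    A sketch of the ordering mir describes, so DKMS can rebuild out-of-tree modules for the new kernel (the headers package name is assumed to track the kernel version):

```shell
# Install headers for the incoming kernel first (version illustrative):
aptitude install pve-headers-2.6.32-13-pve

# Then run the usual upgrade; DKMS hooks can now compile modules
# against the new kernel:
aptitude update && aptitude full-upgrade
```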
     
  16. Smanux

    Smanux Member

    Joined:
    Jun 29, 2009
    Messages:
    35
    Likes Received:
    1
    I've got a network issue with the new kernel: after upgrading from 2.6.32-12-pve the network no longer works. The Ethernet controller is:

    Intel Corporation 82579V Gigabit Network Connection (rev 05)
     
  17. jmbeuken

    jmbeuken New Member

    Joined:
    Mar 14, 2011
    Messages:
    8
    Likes Received:
    0
    Hi,

    What is the safest recipe to update a cluster (3 nodes) without interrupting service?

    - Can I update the nodes one after another, after live-migrating the CTs to another node?
    - Must I reboot the updated node?

    current version on the 3 nodes :

    I have already experienced a big problem restarting the cluster when upgrading from 2.0 to 2.1... :confused:

    Thanks for this great software!

    best regards
     
  18. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,338
    Likes Received:
    375
    Do you run HA? CT or KVM? What kind of storage are your disks on?

    Please provide full details about your cluster, and post it in a new thread.
     
  19. nick

    nick Member

    Joined:
    Mar 28, 2007
    Messages:
    364
    Likes Received:
    1
    Hi all,

    I updated a test environment today to kernel 2.6.32-13, and after that all VMs (QEMU) run very slowly and the RAM allocation is wrong: of the 8GB available on the server, only 1.2GB was allocated to the 4 VMs (configured with 1GB each).
    I rebooted back to kernel 2.6.32-11 and now everything is back to normal.

    I changed the grub config file, set kernel 2.6.32-11 as default, and will wait for the next release.

    Is it possible to reinstall kernel 2.6.32-12? It was removed during the update.
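    Pinning an older kernel the way nick describes can be sketched as follows (Proxmox VE 2.x is Debian-based with GRUB 2; the entry index and package name are assumptions):

```shell
# In /etc/default/grub, point GRUB_DEFAULT at the 2.6.32-11 menu
# entry (by index or exact title), then regenerate the config:
update-grub

# A removed kernel can usually be reinstalled by package name:
aptitude install pve-kernel-2.6.32-12-pve
```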
     