KVM 1.1 and new Kernel

martin

Proxmox Staff Member
We just moved a bunch of new packages to our stable repository, including the latest stable OpenVZ kernel (042stab057.1), stable KVM 1.1.1, a new cluster package, bug fixes, and a lot of code cleanups.

Additionally, we added the first packages to support two distributed storage technologies: Ceph (client) and Sheepdog. Both technologies look great, but note that they are not yet ready for production use.

Important Note for HA setups:
The redhat-cluster-pve package provides new default configuration files and you need to accept the new ones (answer with Y here). As soon as the installation is finished, you need to enable fencing again; see http://pve.proxmox.com/wiki/Fencing#Enable_fencing_on_all_nodes
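A minimal sketch of the steps from the linked wiki page (the file name and variable below are assumptions based on that page; verify them against the wiki before applying):

Code:
# sketch only -- follow the wiki link above for the authoritative steps
# 1) in /etc/default/redhat-cluster-pve, set: FENCE_JOIN="yes"
# 2) then restart the cluster manager and join the fence domain:
/etc/init.d/cman restart
fence_tool join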

After the upgrade (aptitude update && aptitude full-upgrade) and a reboot, your 'pveversion -v' output should look like this:
Code:
pveversion -v


pve-manager: 2.1-12 (pve-manager/2.1/be112d89)
running kernel: 2.6.32-13-pve
proxmox-ve-2.6.32: 2.1-72
pve-kernel-2.6.32-13-pve: 2.6.32-72
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-45
pve-firmware: 1.0-17
libpve-common-perl: 1.0-28
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-26
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-6
ksm-control-daemon: 1.1-1
 
As a heads-up to those with Windows 2003 VMs: it appears that the update causes Windows Activation to think that the hardware changed significantly. I had to call Microsoft to get a new activation key; not a big deal, but good to know beforehand.
 
That only happens if you come from a quite old KVM version; it should not happen if you upgrade from 1.0.

What CPU type do you use for your Windows guest? Maybe you should choose one that does not change with every KVM upgrade.
 
@martin there is a copy/paste orphan on the news page; a "Proxmox VE 2.0 final release!" line should be removed...

@tom <<Maybe you should choose one that does not change with every KVM upgrade.>> Which ones, or which ones are likely to change with KVM?

Marco
 
@martin there is a copy/paste orphan on the news page; a "Proxmox VE 2.0 final release!" line should be removed...

which news page?

@tom <<Maybe you should choose one that does not change with every KVM upgrade.>> Which ones, or which ones are likely to change with KVM?

Marco

e.g. 'host'
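For instance, instead of 'host' you can pin a fixed CPU model; a minimal sketch using qm (the VMID 101 and the model kvm64 are just placeholders, and the -cpu option name is assumed from the qemu-server 'cpu' config key):

Code:
# pin VM 101 (placeholder VMID) to a fixed CPU model such as kvm64,
# so the virtual CPU does not change when QEMU/KVM is upgraded
qm set 101 -cpu kvm64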
 
Hi,

The latest update seems to have broken our cluster setup. cman stops working on all nodes. I manually tried to start it on each node:

root@kvm45:~# /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Unfencing self... [ OK ]

But the cluster services stop again after a few seconds.

root@kvm45:~# pveversion -v
pve-manager: 2.1-12 (pve-manager/2.1/be112d89)
running kernel: 2.6.32-13-pve
proxmox-ve-2.6.32: 2.1-72
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-13-pve: 2.6.32-72
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-45
pve-firmware: 1.0-17
libpve-common-perl: 1.0-28
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-27
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-6
ksm-control-daemon: 1.1-1


Edit: I can open the web interface of each node individually and start the VMs from there, but in each node's web interface the other nodes appear offline.

Edit2: I get these messages on the nodes:
Jul 26 13:17:19 corosync [CMAN ] Activity suspended on this node
Jul 26 13:17:19 corosync [CMAN ] Error reloading the configuration, will retry every second
Jul 26 13:17:20 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Jul 26 13:17:20 corosync [CMAN ] Can't get updated config version 6: New configuration version has to be newer than current running configuration
Jul 26 13:17:20 corosync [CMAN ] Activity suspended on this node
Jul 26 13:17:20 corosync [CMAN ] Error reloading the configuration, will retry every second
Jul 26 13:17:21 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Jul 26 13:17:21 corosync [CMAN ] Can't get updated config version 6: New configuration version has to be newer than current running configuration
Jul 26 13:17:21 corosync [CMAN ] Activity suspended on this node
Jul 26 13:17:21 corosync [CMAN ] Error reloading the configuration, will retry every second



How can I fix this?
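For context, those messages are about the cluster configuration version; a minimal sketch for comparing the running version with the one on disk (assuming the standard PVE 2.x location of cluster.conf):

Code:
# configuration version that cman/corosync is currently running
cman_tool version
# configuration version stored in the cluster configuration file
grep config_version /etc/pve/cluster.conf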
 
Hi,

The latest update seems to have broken our cluster setup. cman stops working on all nodes. I manually tried to start it on each node:

root@kvm45:~# /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Unfencing self... [ OK ]

But the cluster services stop again after a few seconds.
My cman is running, but the resource group manager is not, and it will not start. See http://forum.proxmox.com/threads/10425-rgmanager-group-join-failed-1-1 for a thread about that problem.
 
Please open a new thread for problems.
 
That only happens if you come from a quite old KVM version; it should not happen if you upgrade from 1.0.

What CPU type do you use for your Windows guest? Maybe you should choose one that does not change with every KVM upgrade.

Hi Tom,
I'm new to Proxmox, so I might have missed the memo on which CPU type to use for Windows deployments; I used the default QEMU CPU. However, I was only one version back, so I'm not sure why this happened. It only happened to my 2003 VM; my 2008 R2 one was fine.
 
Hi Martin,

I upgraded today from pve-12 to pve-13. Using the instructions given in this thread, everything works like a charm, HA included. I have only discovered one annoyance, which I also noticed when upgrading from pve-11 to pve-12: the pve-headers are not automatically upgraded, which is a problem since I have some kernel modules maintained via DKMS. If you don't remember to upgrade the pve-headers before running aptitude upgrade, those kernel modules will not be compiled for the new kernel.

Michael.
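A minimal sketch of that extra step (the versioned package name below is an assumption based on the pve-headers-<kernel> naming scheme):

Code:
# install the headers matching the new kernel so DKMS can rebuild
# out-of-tree modules against it before you boot into it
aptitude install pve-headers-2.6.32-13-pve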
 
I've got a network issue with the new kernel: after upgrading from 2.6.32-12-pve, the network no longer works. The Ethernet controller is:

Intel Corporation 82579V Gigabit Network Connection (rev 05)
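A generic way to check which driver, if any, bound to that NIC under the new kernel (nothing below is specific to this card apart from the e1000e guess, which is the driver that normally handles the 82579V):

Code:
# show the Ethernet controller and the kernel driver in use
lspci -nnk | grep -A3 -i ethernet
# check the kernel log for driver messages
dmesg | grep -i e1000e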
 
Hi,

What is the safest way to update a cluster (3 nodes) without interrupting service?

- Can I update the nodes one after another, after live-migrating the CTs to another node?
- Must I reboot the updated node?

Current version on the 3 nodes:

pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-15
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

I have already experienced a big problem with the cluster restarting when upgrading from 2.0 to 2.1... :confused:

Thanks for this great software!

best regards
 
Do you run HA? CT or KVM? What kind of storage are your disks on?

Please provide full details about your cluster, and post it in a new thread.
 
Hi all,

I updated a test environment today to kernel 2.6.32-13, and after that all VMs (QEMU) run very slowly and the RAM allocation is wrong: of the 8 GB available on the server, only 1.2 GB was allocated to the 4 VMs (each configured with 1 GB).
I rebooted back to kernel 2.6.32-11 and now everything is back to normal.

I changed the GRUB config file to set kernel 2.6.32-11 as the default and will wait for the next release.

Is it possible to reinstall kernel 2.6.32-12? It was removed during the update.
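For anyone doing the same, a minimal sketch of pinning an older kernel as the GRUB 2 default (the menu entry title below is a placeholder; take the exact string from /boot/grub/grub.cfg):

Code:
# in /etc/default/grub, point GRUB_DEFAULT at the wanted entry, e.g.:
# GRUB_DEFAULT="Proxmox VE, with Linux 2.6.32-11-pve"
update-grub   # regenerate grub.cfg with the new default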
 
