New Kernel and bug fixes

martin
Proxmox Staff Member
We just moved a bunch of new packages to our stable repository, including the latest stable OpenVZ kernel (042stab059.7), bug fixes, and a lot of code cleanups.

Note: KVM guests need to be stopped and started again to apply the new settings.
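For example, from the host shell (100 is a placeholder VMID):
Code:
qm stop 100
qm start 100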

Release notes

- libpve-common-perl_1.0-30_all.deb
  • fix regex for network devices (support more than 10 devices)
- libpve-storage-perl_2.0-30_all.deb
  • add volume_resize functions
- pve-kernel-2.6.32-14-pve_2.6.32-73_amd64.deb
  • update to vzkernel-2.6.32-042stab059.7.src.rpm
- pve-manager_2.1-14_all.deb
  • fix startup ordering
  • update Danish translation
- pve-qemu-kvm_1.1-7_amd64.deb
  • Enable VeNCrypt PLAIN authentication
- qemu-server_2.0-48_amd64.deb
  • add size hint to drive options
  • new 'qm resize' command (see the example after this list)
  • implement virtio-scsi-pci controller
- redhat-cluster-pve_3.1.92-3_amd64.deb
  • fix bug #238: do not stop cman when not quorate
  • fix bug #112: correctly determine Linux distribution
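For example, to grow a disk with the new command (100, virtio0 and the size are placeholders; check 'qm help resize' for the exact syntax on your version):
Code:
# grow the virtio0 disk of VM 100 to 32G:
qm resize 100 virtio0 32G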
After upgrading and rebooting (aptitude update && aptitude full-upgrade), 'pveversion -v' should look like this:
Code:
pveversion -v

pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-73
pve-kernel-2.6.32-14-pve: 2.6.32-73
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-48
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-7
ksm-control-daemon: 1.1-1
 
virtio-scsi will work only with recent Linux kernels (3.4.x, as far as I remember) or Windows.

See this example of a disk in one of my VMID.conf files (it's a Windows box, using the latest virtio drivers from the Fedora project):

Code:
scsihw: virtio-scsi-pci
scsi0: local:203/vm-203-disk-4.raw
 
Create a SCSI disk, then adapt the config file via the CLI. GUI integration will follow later.
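For example, assuming VMID 203 and the disk from the config above (on PVE 2.x the VM config files live under /etc/pve/qemu-server/):
Code:
# open the VM config and add or adjust the two lines from the example above:
nano /etc/pve/qemu-server/203.conf
#   scsihw: virtio-scsi-pci
#   scsi0: local:203/vm-203-disk-4.raw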
 
I see it requires at least kernel 3.4, so I will run some unscientific tests on Ubuntu 12.10 - the latest beta comes with kernel 3.5.
 
Doesn't look promising. Running iozone on a device using virtio-scsi-pci crashes the VM. To make sure it wasn't just bad luck, I ran the same test three times; the VM crashed every time.
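For anyone wanting to reproduce this, something along these lines inside the guest triggers it (the exact iozone flags here are a guess; any sustained run seems to do it):
Code:
# automatic mode over a 1 GB test file on the virtio-scsi disk:
iozone -a -s 1g -f /mnt/scsitest/iozone.tmp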
 
Tried Fedora 17 instead, but this is even worse since Fedora won't even boot if I apply the option 'scsihw: virtio-scsi-pci'. Tried both with local storage and iSCSI; same result: no boot.
 
Installed Debian Wheezy with kernel 3.4.4 from experimental; it boots, but running iozone crashes the VM.

If others can verify my findings, I think the new interface should be categorized as severely broken and unfit for any usage whatsoever. Just my two cents.
 
Running experimental kernels never leads to a stable system. virtio-scsi-pci is new, so there could be some issues. I primarily tested on Windows guests and did not see a crash.
 
So what you are saying is basically: don't use this feature?
If using kernel 3.4 is considered running experimental kernels, and the requirement for this feature is kernel 3.4 or higher, then I see no use case for it. Though Fedora 17 is an official release with kernel 3.4, and it won't even boot if you activate this feature, so IMHO a feature which, even if we lower the bar, cannot be considered anything more than alpha level has been introduced into stable Proxmox.
 
I just installed a Win7 and a Fedora 17 VM using virtio-scsi as the boot disk. Both run without any problems. It seems you are doing something wrong?
 
I think I have found the reason: IOMMU. Either my chipset does not support IOMMU, or the support on my motherboard is broken. I do pass the kernel option 'amd_iommu=on', and when it is set I do not see the message 'kernel: AMD-Vi disabled by default: pass amd_iommu=on to enable'.
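For anyone checking their own hardware, these generic commands show whether the IOMMU actually came up:
Code:
# is amd_iommu=on really on the kernel command line?
cat /proc/cmdline
# did the kernel detect and enable AMD-Vi?
dmesg | grep -i -e AMD-Vi -e IOMMU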
 
Thanks for this.
On the off chance there might be a problem with the packages instead of behind the keyboard:
I wanted to upgrade a Debian box with the PVE 2.1 packages installed, from -13 to -14.
'aptitude update && aptitude full-upgrade' failed with:
Code:
dpkg: dependency problems prevent configuration of vzctl:
 vzctl depends on pve-cluster; however:
  Package pve-cluster is not installed.
Perhaps my system was already wonky, perhaps it was something else.

In the end, this got it upgraded & working again (note: I'm obviously not running a cluster - just a single hobby server):
Code:
# try to remove possibly outdated cluster config stuff
aptitude purge clvm ceph-common redhat-cluster-pve pve-cluster
aptitude remove proxmox-virtual-environment
aptitude install pve-kernel-2.6.32-14-pve
# reboot with only our kernel:
shutdown -r now
# make sure pve-cluster installs and configures:
aptitude install pve-cluster
# this time, config worked!
# now install the rest:
aptitude install proxmox-virtual-environment
# ... and reboot to test:
shutdown -r now

Now:
Code:
pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-73
pve-kernel-2.6.32-14-pve: 2.6.32-73
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-48
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-7
ksm-control-daemon: 1.1-1

Thanks.
 
