Proxmox VE 3.3 released!

Kernel 3.10 is available as a technology preview (in all repositories)
 
dpkg -s pve-qemu-kvm
Package: pve-qemu-kvm
Status: install ok installed
Priority: extra
Section: admin
Installed-Size: 12729
Maintainer: Proxmox Support Team <support@proxmox.com>
Architecture: amd64
Version: 2.1-5
 
Edit /etc/apt/sources.list.d/ceph.list and make sure it points to firefly:

deb http://ceph.com/debian-firefly wheezy main


Then:
# apt-get update
# apt-get dist-upgrade

You'll have Proxmox and Ceph updated at the same time.

After each node upgrade, check that Ceph is in sync with the other nodes (# ceph health).
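Put together, the per-node sequence described above can be sketched as follows (a sketch only: the repository line and file path are taken from the post above; run this on one node at a time):

```shell
# Point the Ceph repository at firefly (overwrites the old emperor line).
echo "deb http://ceph.com/debian-firefly wheezy main" > /etc/apt/sources.list.d/ceph.list

# Upgrade Proxmox and Ceph together.
apt-get update
apt-get dist-upgrade

# Before moving on to the next node, confirm the cluster is back in sync.
ceph health
```
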

Hello, I have 3 nodes with Ceph server version 0.72.2 (emperor). Can I upgrade to firefly?

If I understand correctly, on node1 I change:
deb http://ceph.com/debian-emperor wheezy main
to
deb http://ceph.com/debian-firefly wheezy main
then
# apt-get update
# apt-get dist-upgrade
then reboot node1, and repeat the same procedure on node2 and node3?

Does this mean that after the upgrade is complete on node1, node2 and node3 will still run Ceph emperor while node1 runs Ceph firefly?

Won't there be sync problems between the different versions?

version:
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-31-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Thank you
 
Hello, I have 3 nodes with Ceph server version 0.72.2 (emperor). Can I upgrade to firefly? [...] Won't there be sync problems between the different versions?
I suggest that you shut down all your VMs, then do the upgrade. Ceph will take care of rebalancing after the upgrade. You will not lose any data even if you upgrade one node at a time.
 
On 17th Sep 2014, downloaded the latest v3.3-2 ISO from http://www.proxmox.com/images/download/pve/iso/proxmox-ve_3.3-a06c9f73-2.iso and the MD5 is
b2531905a538bf01eebc25ee41aba0dc which matches the one at:
http://www.proxmox.com/downloads/item/proxmox-ve-3-3-iso-installer

The file proxmox/packages/pve-manager_3.3-1_amd64.deb in the ISO image (extracted using 7-Zip and UltraISO) seems corrupted, as its file details are:
MD5: F07FFFAF54B3EB8D66134B414F077F64
File Size: 3,873,914 bytes
File Date: 12th Sep 2014

The same file when taken from the repo has different details:
MD5: 36C9163EEFEEE9956FC647F98C7EA052
Location: http://download.proxmox.com/debian/...tion/binary-amd64/pve-manager_3.3-1_amd64.deb
File Size: 3,873,948 bytes
File Date: 15th Sep 2014

On an apt-get update && apt-get dist-upgrade, the file from the repo did not get installed, since the version number in the repo is the same. The copy on the CD could not be opened in 7-Zip, but the one from the web repo could.
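When comparing copies by hand like this, the check reduces to recomputing the digest and comparing it against the trusted one. A minimal, generic sketch (the file and checksum here are stand-ins, not the real .deb):

```shell
# Stand-in file so the sketch is self-contained; substitute the real .deb.
printf 'example payload' > /tmp/sample.deb

# The published checksum you trust (here simply computed from the stand-in).
expected=$(md5sum /tmp/sample.deb | awk '{print $1}')

# Recompute the digest of the copy you extracted and compare.
actual=$(md5sum /tmp/sample.deb | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "OK: checksum matches"
else
    echo "MISMATCH: got $actual, expected $expected"
fi
```

md5sum also has a built-in check mode (md5sum -c) when you have a checksum file in the usual "digest  filename" format.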
 
The file proxmox/packages/pve-manager_3.3-1_amd64.deb in the ISO image (extracted using 7-Zip and UltraISO) seems corrupted as it's file details are:
MD5: F07FFFAF54B3EB8D66134B414F077F64

Our CD does not have any errors. I just verified that with:

# mkdir testmount
# md5sum proxmox-ve_3.3-a06c9f73-2.iso
b2531905a538bf01eebc25ee41aba0dc proxmox-ve_3.3-a06c9f73-2.iso
# mount -o loop proxmox-ve_3.3-a06c9f73-2.iso testmount/
# md5sum testmount/proxmox/packages/pve-manager_3.3-1_amd64.deb
36c9163eefeee9956fc647f98c7ea052 testmount/proxmox/packages/pve-manager_3.3-1_amd64.deb

Note: Maybe UltraISO has problems with compressed CDs
 
Hello, I have 3 nodes with Ceph server version 0.72.2 (emperor). Can I upgrade to firefly? [...] Won't there be sync problems between the different versions?

The normal upgrade procedure for Ceph applies. First of all, you should upgrade Ceph before upgrading Proxmox (as Proxmox suggests). Then you upgrade the ceph packages on all nodes at the same time (the ceph daemons will continue running the old version until restarted!). Then you can restart the daemons one by one, in this order: monA, monB, monC, [...], osd0, osd1, osd2, ...

Please be aware that an OSD restart takes about 20-40 seconds to reinitialize everything before Ceph goes back to HEALTH_OK.
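The restart order described above might look like this as a loop. A sketch only: it assumes sysvinit-style `service ceph restart <id>` control as on wheezy, and the mon/osd ids are example values to be adjusted to your cluster:

```shell
# Monitors first, then OSDs; wait for HEALTH_OK between restarts.
for daemon in mon.0 mon.1 mon.2 osd.0 osd.1 osd.2; do
    service ceph restart "$daemon"
    # An OSD needs roughly 20-40 seconds to reinitialize.
    until ceph health | grep -q HEALTH_OK; do
        sleep 5
    done
done
```
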
 
Maybe UltraISO has problems with compressed CDs
Yes, and the same is true for 7-Zip, but only when extracting such compressed images from the CD image under 32-bit WinXP; the files downloaded from the web extract fine.
This is probably why the CD installed okay anyway.
Does anyone have experience extracting it with AnyISO on 32-bit WinXP?
 
First of all: thanks to the Proxmox Team for such great features!

Unfortunately I got problems concerning my LVM setup.

I installed Proxmox (3.1) on top of a fresh Debian installation (following the very good wiki guide) a few weeks ago. I set up a bunch of HDDs as my system drives with the default volumes and added some data LVs on my HW RAID. Everything worked fine until today's dist-upgrade. I decided to reboot the node after the upgrade and ended up with a reset LVM configuration: none of my data LVs are found during boot, while the default LVs pve and swap (which I assume are the default LVs) are still there.

So, is there a way to get my LVM settings back from before the upgrade?

By the way, I'm no pro, but also not an absolute beginner at administering Linux systems. My knowledge of Proxmox and LVM is base level, though, so it would be great if any advice took that into account...
 
The upgrade does not touch the LVM settings.

It is never the first thing I think of. :)

It didn't take me long to find out that LVM (and mdadm) cannot restore the RAID to what I had set up as long as the hard disks are not recognized. The HDDs connected to the motherboard are recognized; the HDDs on the RAID controller (an Adaptec 5805ZQ) are not.

After spending a lot of time searching the internet for a solution and following a bunch of useless hints, I have to assume that one of the changes made to the system lately (the kernel?) is responsible for the kernel losing its ability to recognize the RAID controller. Rebooting into one of the older kernels didn't solve the problem. I even downloaded the source of an Adaptec driver (aacraid-1.2.1-40700) and would give it a try; I just don't know how to compile the driver, and I don't even know if I'm on the right track anyway.

I got stuck once again... :(
 
I know you guys love to help noobs like me (which I can see from all the helpful answers in this forum), but sometimes it is better to be able to help yourself, for the immense learning effect. I finally got my RAID controller running.

First of all: it actually was the kernel upgrade that caused the problem. The new kernel came with a newer aacraid driver; modinfo aacraid gave version 1.2-1 [something]. It seems that driver version doesn't support the 5805ZQ anymore.

Anyway, Adaptec still offers the latest driver version supporting my RAID controller, both as plain source and as a DKMS version. Since I couldn't find the pve kernel source, I decided to use the DKMS version. For those wondering how to compile and integrate such drivers, the following is a short, explanation-free list of the commands I used to get the driver compiled and running with kernel version 2.6.32-33-pve:

$> apt-get install rpm dkms build-essential
$> apt-get install pve-headers-2.6.32-33-pve

$> mkdir -p /tmp/aacraid && cd /tmp/aacraid
[Download raid Driver aacraid-dkms-1.1.7-29100.tgz to tmp folder]


$> tar xvzf aacraid-dkms-1.1.7-29100.tgz
[you should now find two .rpm files in this folder]


$> rpm2cpio aacraid-1.1.7.29100-dkms.noarch.rpm | (cd / ; cpio -idmu )
$> dkms add -m aacraid -v 1.1.7.29100
$> dkms build -m aacraid -v 1.1.7.29100
$> dkms install -m aacraid -v 1.1.7.29100

$> modinfo aacraid | grep -i version
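After the dkms install step, a quick sanity check can confirm the module was built for the running kernel (generic dkms/modinfo usage, nothing Proxmox-specific):

```shell
# Was the module built and installed for this kernel?
dkms status | grep aacraid

# Does the available driver now report the expected (older) version?
modinfo aacraid | grep -i '^version'

# Which kernel is running (should match the pve-headers you installed)?
uname -r
```
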

It worked for me. Even though I had to do a lot of research to get this far, I wouldn't have had any idea what to do or how without your help. Thanks a lot for that, guys!
 
Thanks for the really wonderful new version, it works like a charm!

Regards from Gleisdorf/Steiermark
 
The ACL system does not yet allow editing existing permissions from within the GUI. Also, roles assigned to PAM groups in the GUI are not inherited by the users belonging to the group unless the users already exist (kernel 138 / PVE 3.3-2). Roles assigned directly to users are obeyed. The /pools/ prefix for the path in permissions needs to be documented somewhere. Wildcard prefixes in the permission path, such as /vms/10*, would be useful.

The rules for Linux PAM users are obeyed only if the user and group exist on the system prior to assignment:
Code:
# Add a Linux User
useradd survive

# Assign a Password to the user
passwd survive

# Add a Linux Group
groupadd Watchman

# Assign the user to the group
usermod -a -G Watchman survive
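On the Proxmox side, the matching PVE user/group and a direct role assignment (which, as noted above, is obeyed) can then be created with pveum. A sketch only: the /vms/100 path and PVEVMUser role are hypothetical examples, and the pveum syntax should be verified against `pveum help` on your PVE 3.3 install:

```shell
# Register the existing Linux user in the pam realm and create a PVE group.
pveum useradd survive@pam
pveum groupadd Watchman

# Assign a role directly to the user; /vms/100 and PVEVMUser are examples.
pveum aclmod /vms/100 -user survive@pam -role PVEVMUser
```
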
 
I am currently on 3.2, but when I do apt-get update and apt-get dist-upgrade it says there are 0 packages to upgrade?

EDIT: I had to update my sources.list file using the link on the first page; now updated.
 