Proxmox VE 3.0 released!

martin

Proxmox Staff Member
We just released Proxmox VE 3.0. It's based on the great Debian 7.0 release (Wheezy) and introduces a powerful new feature set:

VM Templates and Clones
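For example, an existing VM can be turned into a template and then cloned from the command line (a minimal sketch; the VMIDs 100 and 101 and the clone name are placeholders):
Code:
# Convert VM 100 into a template (irreversible for the source VM)
qm template 100
# Create VM 101 as a clone of the template
qm clone 100 101 --name cloned-vm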

Under the hood, many improvements and optimizations have been made; the most important is the replacement of Apache2 with our own event-driven API server.

A big thank-you to our active community for all the feedback, testing, bug reports, and patch submissions.

Release notes
See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.0

Download
http://www.proxmox.com/downloads

Upgrade from 2.3 to 3.0
http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0

Install Proxmox VE on Debian Wheezy

http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Congratulations! Just upgraded from 2.3 without problems using the script.
My router KVM seems snappier, but that may just be wishful thinking ;)
 
I just upgraded my 2.3 as well. No complications except a package blocking the upgrade; I removed the package and everything started up smoothly.

Good job guys!
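In case anyone hits a similar block, a dry run shows which package is in the way before removing it (a sketch; the package name below is a placeholder):
Code:
# Simulate the upgrade to see which packages would block it
apt-get -s dist-upgrade
# Remove the offending package (name is a placeholder)
apt-get remove some-blocking-package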
 
Be careful when using the upgrade script, and read the manual before running it!
Code:
./pve-upgrade-2.3-to-3.0 --purge

This command removed the mdadm package, and I had some trouble restoring LVM with a PV on software RAID... It's my fault! I did not read the manual before running the script. No complaints about the upgrade script.
 
I have a problem: after upgrading my two-node cluster from 2.3 to 3.0, I get the alert "/dev/disk/by-uuid/ does not exist". Can somebody help?
I updated with ./pve-upgrade-2.3-to-3.0. The same problem on both nodes.
 
I have a problem: after upgrading my two-node cluster from 2.3 to 3.0, I get the alert "/dev/disk/by-uuid/ does not exist". Can somebody help?

If you ran
Code:
./pve-upgrade-2.3-to-3.0 --purge
you lost mdadm and mdadm.conf.
Check it and re-install if needed.
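A quick way to do that check (a sketch; dpkg, apt-get, and mdadm --detail --scan are the standard Debian tools, but review the regenerated lines before trusting them):
Code:
# Check whether mdadm is still installed
dpkg -l mdadm
# Re-install it if the upgrade removed it
apt-get install mdadm
# Regenerate the array definitions and append them to the config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf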
 
No, I ran it without --purge.
And I'm not using RAID; it's 3 disks in LVM.
 
I'll check this in the evening. But the same error on both nodes after the upgrade, heh. I don't believe in such coincidences.

Please post the output of:
Code:
pvdisplay
vgdisplay
lvdisplay
 
OK, I've checked this.

My LVM is OK and the UUIDs are correct, but somehow the device nodes under /dev/mapper are not created at boot. I can create them manually with
Code:
vgchange -ay
After that, the volume nodes appear in /dev/mapper and I get the message that the 3 logical volumes in volume group "pve" are now active.
Then I just type
Code:
exit
and the boot process continues normally.

But how can this be fixed? Does anybody know?
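For anyone who lands at the same (initramfs) prompt, the workaround above in one place (only vgchange -ay and exit come from this thread; the comments are added for clarity):
Code:
# At the (initramfs) prompt: activate all LVM volume groups manually
vgchange -ay
# Leave the emergency shell so the boot process continues
exit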
 
Loving the new Proxmox 3.0. Just need the billing guys to catch up ;)

Maybe you can help me out? HostBill is waiting for more demand before upgrading their module for 3.0 compatibility. If you run both HostBill and Proxmox, maybe you could send them a quick support ticket?
 
I did not solve the problem with the hang in the initramfs; my cluster is experimental, not production, so I just reinstalled everything.
I think it happened because I had added an HDD to the server and expanded the local storage.
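One plausible cause (an assumption, not something confirmed in this thread) is an initramfs that predates the disk/LVM change and therefore cannot find the volumes at boot. On Debian, rebuilding it is the usual first step:
Code:
# Rebuild the initramfs for all installed kernels so boot-time
# device detection matches the current disk and LVM layout
update-initramfs -u -k all
# Refresh the boot loader configuration as well
update-grub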
 
Just a quick question - does Proxmox 3.0 still use kernel 2.6?

I updated two Proxmox 2.3 servers to 3.0 today; one of them runs kernel 3.2 (3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2), the other one runs 2.6 (2.6.32-20-pve).
Both work fine... but which one is the right one?
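For reference, a quick way to see which kernel each node actually booted and which Proxmox kernels are installed (uname and dpkg are standard tools; grepping for pve-kernel is an assumption based on the 2.6.32-20-pve version string above):
Code:
# Show the kernel this node is currently running
uname -r
# List installed Proxmox kernel packages
dpkg -l | grep pve-kernel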
 
