Proxmox VE 3.4 released!

martin (Proxmox Staff Member):
We just released Proxmox VE 3.4! The new version is centered around ZFS, including a new ISO installer that supports all ZFS RAID levels. Small but quite useful additions include hotplug, pending changes, start/stop/migrate of all VMs, and network disconnect (unplugging a virtual NIC).

Check out our short video tutorial - What's new in Proxmox VE 3.4!

A big thank-you to our active community for all the feedback, testing, bug reports and patch submissions.

Note: please do NOT upgrade via the GUI; use the CLI instead (see this forum post for an explanation).
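For reference, a CLI upgrade on an existing 3.x node is a sketch along these lines (the standard Debian apt workflow; run as root on the node):

```shell
# Upgrade from the command line rather than the GUI,
# as the announcement recommends.
apt-get update
apt-get dist-upgrade
```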

Release notes
http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.4

Video Tutorials
http://www.proxmox.com/training/video-tutorials

Download
http://www.proxmox.com/downloads/category/iso-images-pve

Package Repositories
http://pve.proxmox.com/wiki/Package_repositories
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Disconnecting a NIC, hotplug, pending changes: wonderful!

Great job!

Many thanks to the Proxmox team,

Christophe.
 
Great job indeed! I've played with ZFS in the Proxmox RC version and it is an AMAZING filesystem (we will see about the performance).
I just can't watch the video since I don't use the proprietary Flash plugin (I'm on GNU/Linux). Would you mind providing a download link for the video (so I can play it with VLC) and/or a WebM version with HTML5 support (no Flash required)?
 
ZFS is an additional local filesystem, so the answer is no (nothing is replicated or shared outside the local host).
 
Thanks for the work!

But I have a problem:

aptitude update
aptitude full-upgrade
...
Setting up numactl (2.0.8~rc4-1) ...
Setting up parted (3.2-6~bpo70+1) ...
Setting up pve-kernel-2.6.32-37-pve (2.6.32-147) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve
Error! Your kernel headers for kernel 2.6.32-37-pve cannot be found.
Please install the linux-headers-2.6.32-37-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
Error! Your kernel headers for kernel 2.6.32-37-pve cannot be found.
Please install the linux-headers-2.6.32-37-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve
update-initramfs: Generating /boot/initrd.img-2.6.32-37-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve

And now it hangs...
 
After rebooting, I ran "dpkg --configure -a":

Setting up libev-perl (4.11-2) ...
Setting up pve-qemu-kvm (2.1-12) ...
Setting up pve-kernel-2.6.32-37-pve (2.6.32-147) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve
Error! Your kernel headers for kernel 2.6.32-37-pve cannot be found.
Please install the linux-headers-2.6.32-37-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
Error! Your kernel headers for kernel 2.6.32-37-pve cannot be found.
Please install the linux-headers-2.6.32-37-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve
update-initramfs: Generating /boot/initrd.img-2.6.32-37-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 2.6.32-37-pve /boot/vmlinuz-2.6.32-37-pve
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.10.0-5-pve
Found initrd image: /boot/initrd.img-3.10.0-5-pve
Found linux image: /boot/vmlinuz-3.10.0-4-pve
Found initrd image: /boot/initrd.img-3.10.0-4-pve
Found linux image: /boot/vmlinuz-2.6.32-37-pve
Found initrd image: /boot/initrd.img-2.6.32-37-pve
Found linux image: /boot/vmlinuz-2.6.32-34-pve
Found initrd image: /boot/initrd.img-2.6.32-34-pve
Found linux image: /boot/vmlinuz-2.6.32-32-pve
Found initrd image: /boot/initrd.img-2.6.32-32-pve
Found linux image: /boot/vmlinuz-2.6.32-31-pve
Found initrd image: /boot/initrd.img-2.6.32-31-pve
Found linux image: /boot/vmlinuz-2.6.32-28-pve
Found initrd image: /boot/initrd.img-2.6.32-28-pve
Found linux image: /boot/vmlinuz-2.6.32-27-pve
Found initrd image: /boot/initrd.img-2.6.32-27-pve
Found linux image: /boot/vmlinuz-2.6.32-26-pve
Found initrd image: /boot/initrd.img-2.6.32-26-pve
Found linux image: /boot/vmlinuz-2.6.32-25-pve
Found initrd image: /boot/initrd.img-2.6.32-25-pve
Found linux image: /boot/vmlinuz-2.6.32-23-pve
Found initrd image: /boot/initrd.img-2.6.32-23-pve
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin
done
Setting up pve-firewall (1.0-18) ...
Installing new version of config file /etc/init.d/pve-firewall ...
Restarting PVE firewall logger: pvefw-logger.
Restarting Proxmox VE firewall: pve-firewall.
Setting up qemu-server (3.3-20) ...
Setting up pve-manager (3.4-1) ...
Installing new version of config file /etc/init.d/pve-manager ...
Installing new version of config file /etc/init.d/pvedaemon ...
Installing new version of config file /etc/init.d/pveproxy ...
Installing new version of config file /etc/init.d/spiceproxy ...
Installing new version of config file /etc/init.d/pvestatd ...
Restarting PVE Daemon: pvedaemon.
Restarting PVE API Proxy Server: pveproxy.
Restarting PVE SPICE Proxy Server: spiceproxy.
Restarting PVE Status Daemon: pvestatd.
Setting up proxmox-ve-2.6.32 (3.3-147) ...
installing proxmox release key: OK

What is the problem?
 
After reboot, i do a "dpkg --configure -a" [...] What is the Problem?

You have a custom installation; we do not use DKMS, so we do not know what exactly you did on your system.

Reading your logs: you do not have the kernel headers for kernel 2.6.32-37-pve installed. Install them; maybe this fixes your issue.
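As a sketch, the headers package name can be derived from the kernel version shown in the DKMS error. Note the package prefix here is an assumption: depending on the repository it may be linux-headers-<version> or pve-headers-<version>, so check what your repository actually provides.

```shell
# Derive the headers package name for the kernel named in the DKMS
# error. The "pve-headers-" prefix is an assumption; your repository
# may ship it as "linux-headers-<version>" instead.
KVER="2.6.32-37-pve"
PKG="pve-headers-${KVER}"
echo "$PKG"
# then, as root: apt-get install "$PKG"
```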
 
Kernel pve-kernel-2.6.32-37-pve breaks quorum. Booting the older kernel pve-kernel-2.6.32-34-pve immediately restores quorum.

Setup:
2 pve nodes
1 quorum disk

When I boot the new kernel, everything on the console and in syslog is as expected. The quorum disk is attached, quorum information is exchanged and communication is occurring.

Things break when the new kernel writes 'Waiting for quorum' to the console. The message says the connection times out, and after this point the bridge (vmbr0), where quorum traffic is supposed to flow, is suddenly unavailable. I cannot ping any remote host on this bridge, and I am not even able to ping my own address on it. All other bridges work normally. For the record: pve-firewall is not in use, and it makes no difference whether I configure it to start at boot or not.

Net config:
# network interface settings
auto lo
iface lo inet loopback


auto eth1
iface eth1 inet manual


auto eth2
iface eth2 inet manual


auto bond0
iface bond0 inet manual
slaves eth1 eth2
bond_miimon 100
bond_mode 802.3ad
bond-xmit-hash-policy layer3+4
bond_lacp_rate fast


auto vmbr0
iface vmbr0 inet static
address 192.168.2.9
netmask 255.255.255.0
gateway 192.168.2.1
bridge_ports bond0
bridge_stp off
bridge_fd 0

.....
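A diagnostic sketch for the failure described above, using standard bridge-utils/iproute2 commands (their availability on the node is assumed); the goal is to see whether the bridge, the bond, or its LACP slaves dropped out when vmbr0 stopped passing traffic:

```shell
# Is bond0 still enslaved to the bridge?
brctl show vmbr0
# Are both LACP (802.3ad) slaves still up? Check per-slave MII status.
cat /proc/net/bonding/bond0
# Does the bridge interface still carry its address and stay UP?
ip addr show vmbr0
```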
 
Congrats to the PVE team! Great work (as usual). ZFS integration in the pve-kernel is a dream come true...
 
Excellent job, Proxmox team!!!! I think by the end of 2015 Proxmox will be one of the mainstream virtualization solutions the world over.
 
Ahhh... I had installed ZFSonLinux according to the instructions at http://pve.proxmox.com/mediawiki/index.php?title=ZFS&oldid=6332. But ZFS is now included, so what do I do now?
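One way to see what the earlier manual ZFSonLinux install left behind is to list the related packages before reconciling them with the ZFS support that now ships with the pve kernel (a sketch; package names vary between the zfsonlinux.org and pve packaging):

```shell
# List installed ZFS/SPL packages left over from the manual install.
dpkg -l | grep -Ei 'zfs|spl' || echo "no zfs/spl packages found"
```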

Hi.
We are in the same boat here.
We hang at the same point.
We investigated by running the same command we saw, with -v (update-grub), and it only appears to pick up block devices, not partitions or mount points. Kind of weird, but we are not sure what the expected outcome would be on another box.

We've tried apt-get install --reinstall zfs-dkms after the error, and then it goes back to grub and freaks out. We also tried running dpkg --configure -a.

We then tried to format our root and go with a clean 3.4 install, but we can't get the box to boot it. We use a Dell R900, and it used to boot fine. The 3.4 ISO we are using installed fine, too... but we never get into grub.
 
Thanks for that release, but I have two questions:

- does the new ISO support UEFI boot?
- is it possible to boot and install with the 3.10 kernel from the ISO (asking for HP DL160 Gen9 servers)?
 
Thanks for that release, but I have two questions:

- does the new ISO support UEFI boot?
- is it possible to boot and install with the 3.10 kernel from the ISO (asking for HP DL160 Gen9 servers)?

The ISO supports EFI boot; test it and give feedback. The ISO does not contain a 3.10 kernel, but why does 2.6.32 not work on Gen9 servers? What is the problem?
 
Thanks, Tom - we'll test EFI boot.

We did not try PVE 3.4 on Gen9 servers yet, but with PVE 3.3 and the 2.6.32 kernel the "HP Smart Array P840/4G FIO Controller" was not recognized, so we had to boot a CentOS 7 live CD, set up Debian 7 with debootstrap, and install PVE + the 3.10 kernel in a chroot - which then came up without any problems.
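The workaround described above can be sketched roughly as follows; the target path and mirror URL are placeholders, and the exact PVE package names are deliberately not shown since they depend on the repository in use:

```shell
# Rough sketch of the live-CD workaround: bootstrap Debian 7 (wheezy)
# into a target directory, then chroot in. /mnt/target and the mirror
# URL are placeholders for this example.
debootstrap wheezy /mnt/target http://ftp.debian.org/debian
chroot /mnt/target /bin/bash
# inside the chroot: add the PVE package repository, then install the
# proxmox-ve packages together with the 3.10 pve-kernel
```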

It would be nice if this procedure were not needed on every new PVE installation, as we will only use Gen9 servers from now on ;)
 
ZFS is an additional local filesystem, so the answer is no (nothing is replicated or shared outside the local host).

Hello,
I have read it, but is it possible, with 2 nodes, to have:
- ZFS RAID 1 for the OS (Debian / Proxmox)
- ZFS RAID 1 for the VMs, with DRBD?

Thank you for this very good job :p
 