Proxmox VE 5.0 beta1 released!

VMs do not start if a network rate limit is set (e.g. 12.5).
RTNETLINK answers: Invalid argument
We have an error talking to the kernel
command '/sbin/tc filter add dev tap301i0 parent ffff: prio 50 basic police rate 13107200bps burst 1048576b mtu 64kb drop flowid :1' failed: exit code 2

PVE 5.0.6 4.10.5-1
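For reference, the 13107200bps figure in the failing command is just the configured MB/s limit converted to bytes per second (in tc syntax, "bps" means bytes per second, not bits). A quick sketch of that conversion, assuming PVE multiplies the configured value by 1024 * 1024:

```python
# The GUI "rate" value is in MB/s; tc expects bytes per second
# ("bps" in tc syntax means bytes/s, not bits/s).
def rate_to_tc_bps(rate_mb_per_s: float) -> int:
    """Convert a network rate limit in MB/s to tc's bytes/s value."""
    return int(rate_mb_per_s * 1024 * 1024)

print(rate_to_tc_bps(12.5))  # 13107200, matching the failing tc command
```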
 

could you open a bug report at bugzilla.proxmox.com for easier tracking? thanks
 
I can see that you did an update on the beta today. Great work!

Do you have a list of the fixes/bugs that have been ironed out with this release?
 
Can't start any VMs with net rate limiting on after I just updated.
Can't even turn rate limiting off; 'qm'-based commands give a
segmentation fault.
Basically Proxmox is non-functional.

Jesus...
 
Hello,

I will probably ask a very noob question:
If I go with this 5.0 beta 1, how will the upgrade happen when the stable 5.0 release is issued?
Will I need to reinstall, or will it be possible to just run an apt-get upgrade?

In other words, starting with this beta, will I end up in the same state after the stable release as if I had installed the stable release directly?

Thank you very much..
 
Hi Guys,

I've just run an upgrade from 4.4 to 5.0-6/669657df using the apt dist-upgrade method.

Everything seems to have run properly; however, LXC containers won't come up. I am getting a "cpu id '16' is out of range" error in the front end, and I am seeing the following in /var/log/pve/tasks/active for every LXC container I try to start:

Code:
UPID:bruce:00001E25:0001D077:58ED81E3:vzstart:100:root@pam: 1 58ED81E3 cpu id '16' is out of range
UPID:bruce:000017BC:0000ED3A:58ED7F9D:vzstart:100:root@pam: 1 58ED7F9D cpu id '16' is out of range
UPID:bruce:00001658:00009BC9:58ED7ECD:vzstart:100:root@pam: 1 58ED7ECD cpu id '16' is out of range
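(Aside: these task-log lines follow PVE's UPID scheme. Assuming the usual `UPID:node:pid:pstart:starttime:type:vmid:user` layout with hex-encoded numeric fields, a small sketch to decode one:)

```python
# Decode a PVE task id (UPID); numeric fields are hex-encoded.
# Layout assumed: UPID:node:pid:pstart:starttime:type:vmid:user
def parse_upid(upid: str) -> dict:
    parts = upid.split(":")
    return {
        "node": parts[1],
        "pid": int(parts[2], 16),
        "pstart": int(parts[3], 16),
        "starttime": int(parts[4], 16),  # Unix epoch seconds
        "type": parts[5],
        "vmid": parts[6],
        "user": parts[7],
    }

info = parse_upid("UPID:bruce:00001E25:0001D077:58ED81E3:vzstart:100:root@pam:")
print(info["type"], info["vmid"])  # vzstart 100
```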

Code:
root@bruce:~# pveversion -v
proxmox-ve: 5.0-5 (running kernel: 4.10.5-1-pve)
pve-manager: 5.0-6 (running version: 5.0-6/669657df)
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.10.5-1-pve: 4.10.5-5
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.44-1-pve: 4.4.44-84
libpve-http-server-perl: 2.0-1
lvm2: 2.02.168-pve1
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-4
qemu-server: 5.0-2
pve-firmware: 2.0-2
libpve-common-perl: 5.0-7
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-3
libpve-storage-perl: 5.0-3
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.9.0-1~rc3
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.7-500
lxcfs: 2.0.6-pve500
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
openvswitch-switch: 2.7.0-2
root@bruce:~#

I am also getting the following errors in /var/log/syslog

Code:
Apr 12 12:58:36 bruce pvestatd[4676]: lxc cpuset rebalance error: cpu id '16' is out of range
Apr 12 12:58:46 bruce pvestatd[4676]: lxc cpuset rebalance error: cpu id '16' is out of range
Apr 12 12:58:56 bruce pvestatd[4676]: lxc cpuset rebalance error: cpu id '16' is out of range
Apr 12 12:59:06 bruce pvestatd[4676]: lxc cpuset rebalance error: cpu id '16' is out of range
Apr 12 12:59:17 bruce pvestatd[4676]: lxc cpuset rebalance error: cpu id '16' is out of range

CPU ID 16 seems to exist as well

Code:
root@bruce:/etc# cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
processor : 4
processor : 6
processor : 8
processor : 10
processor : 12
processor : 14
processor : 16
processor : 18
processor : 20
processor : 22
processor : 24
processor : 26
processor : 28
processor : 30
root@bruce:/etc#
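For what it's worth, the processor ids above are non-contiguous (0, 1, 4, 6, … 30): there are 16 online CPUs, but the highest id is 30. A validation that checks an id against the CPU *count* rather than the set of online ids would reject id 16 in exactly this way. A hypothetical sketch of that failure mode (the actual pve-container code may differ):

```python
# Online CPU ids from /proc/cpuinfo above: 16 CPUs, but ids up to 30.
online = [0, 1, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]

def naive_is_valid(cpu_id: int) -> bool:
    # Buggy: assumes ids are contiguous 0..N-1, so "16 < 16" fails
    return 0 <= cpu_id < len(online)

def correct_is_valid(cpu_id: int) -> bool:
    # Checks membership in the actual online set instead
    return cpu_id in online

print(naive_is_valid(16), correct_is_valid(16))  # False True
```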
 
I will probably ask a very noob question:
If I go with this 5.0 beta 1, how will the upgrade happen when the stable 5.0 release is issued?
Will I need to reinstall, or will it be possible to just run an apt-get upgrade?

yes, the Beta will transform into the stable release over time, and regular upgrades mean you end up with the stable release once it has been released.
 
Hi Guys,

I've just run an upgrade from 4.4 to 5.0-6/669657df using the apt dist-upgrade method. [...] LXC containers won't come up, I am getting a "cpu id '16' is out of range" error [...]

could you open a new thread for this issue, and include "pct config 100" and "pct cpusets" output? thanks
 
I can see that you did an update on the beta today. Great work!

Do you have a list of the fixes/bugs that have been ironed out with this release?

you can check out the changelogs of the individual packages, but the main points are:
  • upgrade kernel to 4.10.5
  • upgrade qemu to 2.9-rc3
  • fix VMA bug (backup/restore)
  • fix start on boot bug
  • fix VM-internal reboot bug
  • various smaller fixes
 
You appear to have inherited a bug from the bonding driver. I'm trying to configure an MTU of 9000. Here's the kernel log:

Apr 11 19:05:05 kr20s3101 kernel: [ 6.748620] i40e 0000:01:00.0 eno1: changing MTU from 1500 to 9000
Apr 11 19:05:05 kr20s3101 kernel: [ 6.754990] i40e 0000:01:00.0 eno1: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None
Apr 11 19:05:05 kr20s3101 kernel: [ 6.827903] i40e 0000:01:00.1 eno2: changing MTU from 1500 to 9000
Apr 11 19:05:06 kr20s3101 kernel: [ 6.834179] i40e 0000:01:00.1 eno2: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None
Apr 11 19:05:06 kr20s3101 kernel: [ 6.899423] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Apr 11 19:05:06 kr20s3101 kernel: [ 6.902213] bond0: Setting xmit hash policy to layer3+4 (1)
Apr 11 19:05:06 kr20s3101 kernel: [ 6.902260] bond0: Setting MII monitoring interval to 100
Apr 11 19:05:06 kr20s3101 kernel: [ 6.911643] bond0: Adding slave eno1
Apr 11 19:05:06 kr20s3101 kernel: [ 6.911687] i40e 0000:01:00.0 eno1: changing MTU from 9000 to 1500
Apr 11 19:05:06 kr20s3101 kernel: [ 6.911700] i40e 0000:01:00.0 eno1: already using mac address 24:6e:96:00:dd:80
Apr 11 19:05:06 kr20s3101 kernel: [ 6.916891] bond0: Enslaving eno1 as a backup interface with an up link
Apr 11 19:05:06 kr20s3101 kernel: [ 6.924796] bond0: Adding slave eno2
Apr 11 19:05:06 kr20s3101 kernel: [ 6.924816] i40e 0000:01:00.1 eno2: changing MTU from 9000 to 1500
Apr 11 19:05:06 kr20s3101 kernel: [ 6.924829] i40e 0000:01:00.1 eno2: set new mac address 24:6e:96:00:dd:80
Apr 11 19:05:06 kr20s3101 kernel: [ 6.935510] bond0: Enslaving eno2 as a backup interface with an up link

After the boot completes, all of the interface MTUs are throttled down to 1500, as a result of the bonding bug. When attempting to manually set MTU, I see the following in the log:

Apr 11 19:13:28 kr20s3101 kernel: [ 508.793746] bond0: Invalid MTU 9000 requested, hw max 1500

Searching for that error led me to this post:

https://bugzilla.kernel.org/show_bug.cgi?id=194763

Seems that the fix has been queued for 4.10-stable. I don't know if that's available yet, but you'll want to get this incorporated sooner rather than later. I imagine it affects a large number of users.
 
You appear to have inherited a bug from the bonding driver. [...] bond0: Invalid MTU 9000 requested, hw max 1500 [...] Seems that the fix has been queued for 4.10-stable.

yes, it is on our radar (also affects some of the intel and IB NICs, and used to affect OVS as well): https://bugzilla.proxmox.com/show_bug.cgi?id=1343
 
We are proud to announce the release of the first beta of our Proxmox VE 5.x family - based on the great Debian Stretch.

With the first beta we invite you to test your hardware and your upgrade path. The underlying Debian Stretch is already in a good shape and the 4.10 kernel performs outstandingly well. The 4.10 kernel for example allows running a Windows 2016 Hyper-V as a guest OS (nested virtualization).

This beta release already provides packages for Ceph Luminous v12.0.0 (dev), the basis for the next long-term Ceph release.

What's next?
In the coming weeks we will integrate step by step new features into the beta, and we will fix all release critical bugs.

Download
https://www.proxmox.com/en/downloads

Alternate ISO download:
http://download.proxmox.com/iso/

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade a current beta installation to the stable 5.0 release via apt?
A: Yes, upgrading from the beta to the stable installation will be possible via apt.

Q: Can I install Proxmox VE 5.0 beta on top of Debian Stretch?
A: Yes, see Install Proxmox VE on Debian Stretch

Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 beta with apt dist-upgrade?
A: Yes, you can.

Q: Which repository can I use for Proxmox VE 5.0 beta?
A: deb http://download.proxmox.com/debian/pve stretch pvetest

Q: When do you expect the stable Proxmox VE release?
A: The final Proxmox VE 5.0 will be available as soon as Debian Stretch is stable, and all release critical bugs are fixed (May 2017 or later).

Q: Where can I get more information about feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

Please help us reach the final release date by testing this beta and by providing feedback.

A big Thank-you to our active community for all feedback, testing, bug reporting and patch submissions.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader

Any plans to fix the issue where the server has multiple NICs? Even when it gets an IP address from the gateway's DHCP server, it cannot ping the gateway after the installation completes. In my case only eno3 works.
 
yes, the Beta will transform into the stable release over time, and regular upgrades mean you end up with the stable release once it has been released.
Thank you for your quick reply...
Ok, I'll go with the 5.0 beta..
Do the apt sources files need to be changed to a stable repo?
I also see that some people wrote about a "dist-upgrade"?
In my case, will it be necessary to run a dist-upgrade, or will a regular apt-get upgrade be enough?

Thank you again...
And thanks for the great job...
 
Thank you for your quick reply...
Ok, I'll go with the 5.0 beta..
Do the apt sources files need to be changed to a stable repo?
Hi,
yes - of course.
I also see that some people wrote about a "dist-upgrade"?
In my case, will it be necessary to run a dist-upgrade, or will a regular apt-get upgrade be enough?
Proxmox is a rolling release - always use dist-upgrade!

Udo
 
could you give the exact error message? it seems like the new beta installer is less forgiving when handling old cruft on the disks, we'll probably add extra cleanup steps for some of the more common scenarios..

I have the same issue. I got it working on my Samsung 960 (NVMe) after a few installation tries, fiddling with BIOS settings. After that successful installation, it now gives the same error message again when I try to reinstall, no matter whether I use a SATA SSD or my Samsung 960. I tried to format the hard drive for a super clean reinstall, but nothing helps.

The error message is:

Warning: Failed to connect to lvmetad. Falling back to device scanning method.
Can't open /dev/nvme0n1p3 exclusively. Mounted filesystem?
 

Virtual Environment 4.3-12/6894c9d9

Registered repository version 5


dpkg: dependency problems prevent configuration of proxmox-ve:
proxmox-ve depends on pve-manager; however:
Package pve-manager is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-firewall
pve-manager
proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)
Failed to perform requested operation on package. Trying to recover:
Setting up pve-firewall (3.0-1) ...
Job for pve-firewall.service failed. See 'systemctl status pve-firewall.service' and 'journalctl -xn' for
dpkg: error processing package pve-firewall (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on pve-firewall; however:
Package pve-firewall is not configured yet.

dpkg: error processing package pve-manager (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
proxmox-ve depends on pve-manager; however:
Package pve-manager is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-firewall
pve-manager
proxmox-ve

Current status: 20 updates [-1].
 
Virtual Environment 4.3-12/6894c9d9 [...]
dpkg: dependency problems prevent configuration of proxmox-ve:
[...]
Errors were encountered while processing:
pve-firewall
pve-manager
proxmox-ve

please open a new thread and include the commands you ran and the system log from around the update
 
