Proxmox VE 5.0 beta2 released!

martin

Proxmox Staff Member
Apr 28, 2005
We are proud to announce the release of the second beta of our Proxmox VE 5.x family. Based on the feedback from the first beta two months ago, we have made many improvements to the ISO installer and, of course, in almost all other areas.

What's next?
In the coming weeks we will integrate new features into the beta step by step, and we will fix all release-critical bugs.

Download
https://www.proxmox.com/en/downloads

Alternate ISO download:
http://download.proxmox.com/iso/

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade a current beta installation to the stable 5.0 release via apt?
A: Yes, upgrading from the beta to the stable release will be possible via apt.

Q: Can I upgrade a current beta installation to latest beta via apt?
A: Yes, upgrading from the beta is possible via apt; see Update_a_running_Proxmox_Virtual_Environment_5.x_beta_to_latest_5.x_beta

Q: Which repository can I use for Proxmox VE 5.0 beta?
A: deb http://download.proxmox.com/debian/pve stretch pvetest
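For reference, that repository line goes into an apt sources file, e.g. (the filename below is just a suggestion, not mandated):

```
# /etc/apt/sources.list.d/pvetest.list  (filename is a suggestion)
deb http://download.proxmox.com/debian/pve stretch pvetest
```

After adding it, run apt update to refresh the package lists.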

Q: Can I install Proxmox VE 5.0 beta on top of Debian Stretch?
A: Yes, see Install_Proxmox_VE_on_Debian_Stretch

Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 beta with apt dist-upgrade?
A: Yes, you can. See Upgrade_from_4.x_to_5.0
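The wiki article is the authoritative reference; as a rough sketch, the core of the upgrade is switching the repositories from jessie to stretch and then dist-upgrading. The sed below is demonstrated on a throwaway temp copy; on a real host the target would be /etc/apt/sources.list and the Proxmox repo list:

```shell
#!/bin/sh
# Sketch of the jessie -> stretch repo switch preceding the dist-upgrade
# (illustrated on a temp copy; on a real host edit /etc/apt/sources.list
# and your Proxmox repository list instead).
SRC=$(mktemp)
printf 'deb http://ftp.debian.org/debian jessie main contrib\n' > "$SRC"
sed -i 's/jessie/stretch/g' "$SRC"   # flip the distribution name
cat "$SRC"
rm -f "$SRC"
# then, on the real host: apt update && apt dist-upgrade
```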

Q: When do you expect the stable Proxmox VE release?
A: The final Proxmox VE 5.0 will be available as soon as Debian Stretch is stable, and all release critical bugs are fixed (May 2017 or later).

Q: Where can I get more information about feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

Please help us reach the final release date by testing this beta and providing feedback.

A big thank-you to our active community for all the feedback, testing, bug reporting, and patch submissions.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 

vooze

Member
May 11, 2017
Can't wait for final 5.0 :) Thank you for your hard work. I'm trying my best to push Proxmox at work ;)
 

micro

Active Member
Nov 28, 2014
The disk cleanup during install now correctly removes the old data. No more need to manually run dd if=/dev/zero of=/old/disk before reinstalling. Thank you guys ;-)
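For reference, the manual wipe used to look something like the following. It is simulated on a temp file here; on real hardware the of= target would be the old disk device (e.g. /dev/sdX) and the command is destructive:

```shell
#!/bin/sh
# Simulation of the old manual disk wipe (on real hardware the target
# would be e.g. of=/dev/sdX and this destroys all data on it!).
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null  # zero out 4 MiB
stat -c %s "$IMG"   # prints 4194304
rm -f "$IMG"
```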
 
Apr 19, 2017
Any chance this release includes updated mpt3sas drivers? (e.g. 15.00.00.00)

EDIT: FYI -- I loaded it up quickly to take a peek -- it looks like it includes 14.101.00.00. Any chance the included mpt3sas driver could be bumped to the latest prior to the final release?
 

fabian

Proxmox Staff Member
Jan 7, 2016
Any chance this release includes updated mpt3sas drivers? (e.g. 15.00.00.00)

EDIT: FYI -- I loaded it up quickly to take a peek -- it looks like it includes 14.101.00.00. Any chance the included mpt3sas driver could be bumped to the latest prior to the final release?

That's the version included in kernel 4.10. Any particular reason why you need a newer one?
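To check which mpt3sas version a given kernel ships, something like this works on the host (the output naturally depends on the installed kernel):

```shell
# Print the mpt3sas driver version provided by the running kernel, or a
# note if the module isn't available (e.g. when run inside a container).
modinfo mpt3sas 2>/dev/null | awk '/^version:/ {print $2; found=1}
                                   END {if (!found) print "module not present"}'
```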
 

GadgetPig

Member
Apr 26, 2016
Just wanted to report a possible Microsoft Edge browser bug with Proxmox: when using the Edge browser (latest Creators Update) to upload an ISO file, the upload percentage does not update; it stays at 0% (even though the upload continues in the background). It works fine in other browsers. Thanks!
 

LDN

New Member
May 28, 2017
I installed beta 2 on a server that runs perfectly with version 4.4, and I notice very high memory consumption (no VM started, fresh install).

"KVM" uses 9 GB of RAM :eek:
 

fabian

Proxmox Staff Member
Jan 7, 2016
I installed beta 2 on a server that runs perfectly with version 4.4, and I notice very high memory consumption (no VM started, fresh install).

"KVM" uses 9 GB of RAM :eek:

If you have a kvm process, that means a VM is running.
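A quick way to verify this on the host (qm is the PVE CLI; pgrep is shown as a plain-Linux fallback):

```shell
# List kvm processes (each running VM has one); prints a note when none.
pgrep -a kvm 2>/dev/null || echo "no kvm processes running"
# qm list   # on a Proxmox host: shows VMIDs with their status
```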
 

sofakng

New Member
Dec 26, 2015
I've just installed Proxmox v5.0 beta 2 and I'm getting a few errors/warnings on boot:

Code:
BERT: Can't request iomem region <...-...>.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Error: Driver 'pcspkr' is already registered, aborting...
[FAILED] Failed to start Set console font and keymap.

The BERT error appears to require a kernel patch for my Supermicro X9SCM motherboard.

The lvmetad warning seems safe to ignore, but can it be disabled by modifying /etc/lvm/lvm.conf?

All of the messages appear to be harmless since everything is working, but I'd prefer to have no errors/warnings during boot if possible :)
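Regarding the lvmetad question: disabling it via lvm.conf should work; a sketch of the relevant setting (the corresponding systemd units would also need to be stopped/disabled):

```
# /etc/lvm/lvm.conf -- sketch, set in the existing "global" section
global {
    use_lvmetad = 0
}
# additionally:
#   systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service
```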
 

fireon

Famous Member
Oct 25, 2010
It is better if the final release takes more time: more testing means a more stable release :) So no stress...
 

sofakng

New Member
Dec 26, 2015
I'm trying to use the Ubuntu 17.04 LXC template, but when creating the container it fails with "unsupported Ubuntu version '17.04'"?
 

sofakng

New Member
Dec 26, 2015
Got it. Thanks!

Any idea when the new pve-container will be pushed to the stable/subscription repo?
 

Instigater

Member
Aug 27, 2015
Probably not related to Proxmox but to the kernel: the LVM disk paths start to flap when I try to delete a VM disk. The setup is a Proxmox blade with FCoE to a 3PAR 8200, using 4 paths over FCoE.

Syslog:
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.470852] sd 1:0:0:0: alua: port group 01 state A preferred supports tolusnA
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.471026] sd 1:0:0:0: alua: port group 01 state A preferred supports tolusnA
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.474640] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.476590] device-mapper: multipath: Failing path 8:112.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.476612] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.478688] device-mapper: multipath: Failing path 8:80.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.478700] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.480765] device-mapper: multipath: Failing path 8:48.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.480785] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:42 prox-prod-dc1-srv06 kernel: [ 298.482813] device-mapper: multipath: Failing path 8:16.
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: sdb: mark as failed
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 3
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: sdf: mark as failed
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 2
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: sdd: mark as failed
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 1
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: sdh: mark as failed
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: Entering recovery mode: max_retries=18
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 0
Jun 5 15:43:43 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: Entering recovery mode: max_retries=18
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: 8:16: reinstated
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: queue_if_no_path enabled
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: Recovered to normal mode
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 1
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: 8:48: reinstated
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 2
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: 8:80: reinstated
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 3
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.464705] device-mapper: multipath: Reinstating path 8:16.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.465296] device-mapper: multipath: Reinstating path 8:48.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.466107] device-mapper: multipath: Reinstating path 8:80.
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: 8:112: reinstated
Jun 5 15:43:44 prox-prod-dc1-srv06 multipathd[780]: proxmox.SSD.dedup.0: remaining active paths: 4
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.467298] device-mapper: multipath: Reinstating path 8:112.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.474961] sd 1:0:0:0: alua: port group 01 state A preferred supports tolusnA
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.475266] sd 1:0:0:0: alua: port group 01 state A preferred supports tolusnA
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.478751] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.480794] device-mapper: multipath: Failing path 8:112.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.480816] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.482894] device-mapper: multipath: Failing path 8:80.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.482907] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.484971] device-mapper: multipath: Failing path 8:48.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.484981] blk_cloned_rq_check_limits: over max size limit.
Jun 5 15:43:44 prox-prod-dc1-srv06 kernel: [ 300.487087] device-mapper: multipath: Failing path 8:16.
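For what it's worth, this "blk_cloned_rq_check_limits: over max size limit" pattern has been reported when the dm device advertises a larger max_sectors_kb than its underlying paths accept. Comparing the values is a first diagnostic step (the device names below are taken from this setup; substitute your own):

```shell
#!/bin/sh
# Compare max request sizes between the multipath device and its paths
# (names from the log above; prints "n/a" where a device is absent).
for d in dm-2 sdb sdd sdf sdh; do
  v=$(cat "/sys/block/$d/queue/max_sectors_kb" 2>/dev/null || echo "n/a")
  printf '%s: %s\n' "$d" "$v"
done
# If dm-2 reports a larger value than the sd* paths, lowering it, e.g.
#   echo 512 > /sys/block/dm-2/queue/max_sectors_kb
# has been reported to stop the path failures.
```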

/etc/multipath.conf
defaults {
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    uid_attribute        ID_SERIAL
    rr_min_io            100
    failback             immediate
    no_path_retry        queue
    user_friendly_names  yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(td|hd)[a-z]"
    devnode "^dcssblk[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
    device {
        vendor  "DGC"
        product "LUNZ"
    }
    device {
        vendor  "EMC"
        product "LUNZ"
    }
    device {
        vendor  "IBM"
        product "Universal Xport"
    }
    device {
        vendor  "IBM"
        product "S/390.*"
    }
    device {
        vendor  "DELL"
        product "Universal Xport"
    }
    device {
        vendor  "SGI"
        product "Universal Xport"
    }
    device {
        vendor  "STK"
        product "Universal Xport"
    }
    device {
        vendor  "SUN"
        product "Universal Xport"
    }
    device {
        vendor  "(NETAPP|LSI|ENGENIO)"
        product "Universal Xport"
    }
}
blacklist_exceptions {
    wwid "360002ac0000000000000004c0001a365"
}
multipaths {
    multipath {
        wwid  "360002ac0000000000000004c0001a365"
        alias proxmox.SSD.dedup.0
    }
}

Multipath and LVM data:
root@prox-prod-dc1-srv06:~# multipath -ll
mpathc (360002ac000000000000000330001a365) dm-3 3PARdata,VV
size=2.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:8 sdc 8:32 active ready running
|- 2:0:0:8 sdg 8:96 active ready running
|- 1:0:1:8 sde 8:64 active ready running
`- 2:0:1:8 sdi 8:128 active ready running
proxmox.SSD.dedup.0 (360002ac0000000000000004c0001a365) dm-2 3PARdata,VV
size=2.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:0 sdb 8:16 active ready running
|- 2:0:0:0 sdf 8:80 active ready running
|- 1:0:1:0 sdd 8:48 active ready running
`- 2:0:1:0 sdh 8:112 active ready running


root@prox-prod-dc1-srv06:~# lvdisplay
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID xBqciY-pilY-6U9o-1GMZ-5K1I-iTqa-qQ0Q6Z
LV Write Access read/write
LV Creation host, time proxmox, 2017-06-05 11:32:35 +0300
LV Status available
# open 2
LV Size 32,00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID x2LyLp-PeEU-9l2c-wjz3-6INf-CVfM-z7b9ko
LV Write Access read/write
LV Creation host, time proxmox, 2017-06-05 11:32:35 +0300
LV Status available
# open 1
LV Size 96,00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID byyl5P-la75-13Q9-OZn6-HWKc-hqEv-qjznJy
LV Write Access read/write
LV Creation host, time proxmox, 2017-06-05 11:32:36 +0300
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 2
LV Size 1,50 TiB
Allocated pool data 1,00%
Allocated metadata 0,91%
Current LE 392303
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-1
LV Name vm-100-disk-1
VG Name pve
LV UUID DVZsPB-xPOG-Jk0t-fA9h-MKKJ-Txoh-v4LOcd
LV Write Access read/write
LV Creation host, time prox-prod-dc1-srv06, 2017-06-05 14:35:23 +0300
LV Pool name data
LV Status available
# open 0
LV Size 32,00 GiB
Mapped size 47,66%
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9

--- Logical volume ---
LV Path /dev/proxmox.SSD.dedup/vm-100-disk-1
LV Name vm-100-disk-1
VG Name proxmox.SSD.dedup
LV UUID zLyXZw-F1Jz-m6X8-bYWg-xpOi-fB9Q-CYJQZx
LV Write Access read/write
LV Creation host, time prox-prod-dc1-srv06, 2017-06-05 14:46:06 +0300
LV Status available
# open 0
LV Size 1,99 TiB
Current LE 522240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10



root@prox-prod-dc1-srv06:~# vgdisplay
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 13
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 1,64 TiB
PE Size 4,00 MiB
Total PE 429166
Alloc PE / Size 425119 / 1,62 TiB
Free PE / Size 4047 / 15,81 GiB
VG UUID GTE8Bt-1jxB-MDkd-OAy2-VVtQ-ecH4-e5o311

--- Volume group ---
VG Name proxmox.SSD.dedup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2,00 TiB
PE Size 4,00 MiB
Total PE 524284
Alloc PE / Size 522240 / 1,99 TiB
Free PE / Size 2044 / 7,98 GiB
VG UUID 9PmpRW-nkVi-SS35-3s2Z-UPni-qoC6-3Wx4zK



root@prox-prod-dc1-srv06:~# pvdisplay
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size 1,64 TiB / not usable 3,19 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 429166
Free PE 4047
Allocated PE 425119
PV UUID DSNcsy-gxx7-5brF-Cd88-W2bg-rUPo-cQJWIi

--- Physical volume ---
PV Name /dev/mapper/proxmox.SSD.dedup.0
VG Name proxmox.SSD.dedup
PV Size 2,00 TiB / not usable 0
Allocatable yes
PE Size 4,00 MiB
Total PE 524284
Free PE 2044
Allocated PE 522240
PV UUID 5eZlEx-PoaR-xLGX-FQc5-5a5i-5MKV-5Xdl8x
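The "duplicate PVs" warnings in the lvdisplay/vgdisplay/pvdisplay output above come from LVM scanning both the multipath device and its component sd* path devices. A hedged lvm.conf filter sketch (the accept patterns depend on your local disks; here /dev/sda carries the pve VG):

```
# /etc/lvm/lvm.conf -- "devices" section sketch: scan multipath maps and
# the local boot disk, reject the raw sd* path devices so each PV is
# only seen once
devices {
    filter = [ "a|^/dev/mapper/|", "a|^/dev/sda|", "r|.*|" ]
}
```

After changing the filter, run pvscan --cache as the warnings themselves suggest.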
 

alexjhart

Member
May 5, 2016
As reported on Ceph tracker (#15569):

Ceph 12.0.2-pve1 has redundant logrotate scripts, which result in this daily email:
/etc/cron.daily/logrotate:
error: ceph-common:1 duplicate log entry for /var/log/ceph/ceph.audit.log

# dpkg -S ceph | grep logrotate
ceph-common: /etc/logrotate.d/ceph-common
ceph-base: /etc/logrotate.d/ceph

# diff -u /etc/logrotate.d/ceph-common /etc/logrotate.d/ceph
--- /etc/logrotate.d/ceph-common 2016-12-09 12:09:00.000000000 -0800
+++ /etc/logrotate.d/ceph 2017-04-26 01:20:20.000000000 -0700
@@ -4,7 +4,7 @@
compress
sharedscripts
postrotate
- killall -q -1 ceph-mon ceph-mds ceph-osd ceph-fuse radosgw || true
+ killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw || true
endscript
missingok
notifempty
 

fabian

Proxmox Staff Member
Jan 7, 2016
As reported on Ceph tracker (#15569):

Ceph 12.0.2-pve1 has redundant logrotate scripts, which result in this daily email:

Our Ceph packages don't have redundant logrotate scripts ;) The ones in Debian and the ones provided by the Ceph project (which ours are based on) use different Debian packaging - one has the logrotate in ceph-common, the other in ceph-base, and neither cares about the others' quirks. Since our pve-qemu-kvm package depends on ceph-common, you initially get the ceph-common package from Debian Stretch installed. When you then install the Ceph packages from our repository (e.g., via "pveceph install"), the old logrotate script is not cleaned up because it is a conffile. We'll include some kind of cleanup in our next Luminous packages (based on 12.0.3).
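Until the packaged cleanup lands, the duplicate can be confirmed (and the stale conffile removed manually after checking ownership with dpkg -S). A simulation in a temp dir; on a real host the directory would be /etc/logrotate.d:

```shell
#!/bin/sh
# Simulate the duplicate-logrotate situation and detect it; on a real
# host the directory is /etc/logrotate.d, and the stale file would be
# removed manually after confirming ownership with dpkg -S.
D=$(mktemp -d)
printf '/var/log/ceph/ceph.audit.log {\n  daily\n}\n' > "$D/ceph"
printf '/var/log/ceph/ceph.audit.log {\n  daily\n}\n' > "$D/ceph-common"
grep -l 'ceph.audit.log' "$D"/* | wc -l   # 2 => duplicate entries exist
rm -rf "$D"
```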
 
