Proxmox VE 5.0 beta1 released!

Thanks for providing the Luminous repository.
I had installed Luminous 12.0.1 from the Ceph repositories on my PVE 4.4 cluster before I upgraded one machine to the PVE 5.0 beta.
Unfortunately the OSDs did not restart after the reboot, so I decided to downgrade to the PVE Luminous repository (12.0.0) that fabian provided the link for.
Now the mon as well as the OSDs come up, but they can't join the Ceph cluster due to some mismatch.
Is there a git repository with the patches you applied to Luminous 12.0.0 when creating the ceph-luminous repository, so that I can patch and recompile Luminous 12.0.1?
 
The console disconnects every few seconds. Other things seem to have worked fine, like restoring a VM from backup. For the console I have tested with Firefox and Chrome; the longest connection I had was about 10 seconds. It disconnects and reconnects about 3 times, then fails with code 1006. This is on a local network and there does not seem to be any network congestion.

NOTE: it was the dang cookie. Sorry for the confusion; it was very late. I ended up installing 4.4 and still had the issue before I remembered to wipe the auth cookie.
 
Thanks for providing the Luminous repository. [...] Is there a git repository with the patches you applied to Luminous 12.0.0 when creating the ceph-luminous repository, so that I can patch and recompile Luminous 12.0.1?

Sorry, it seems that was not mirrored. The git repository should be up soon at https://git.proxmox.com/?p=ceph.git;a=tree
Updated packages for 12.0.1 are also available via the APT repositories now.
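For reference, once it is mirrored, the packaging (including the applied patches) should be fetchable the usual way for Proxmox repositories. A minimal sketch, assuming the standard git.proxmox.com clone URL pattern and that the Proxmox-applied patches live in a patches/ directory as in other Proxmox packaging repos:
Code:
git clone git://git.proxmox.com/git/ceph.git
cd ceph
ls patches/   # Proxmox-applied patches (directory name is an assumption)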

Could you open a new thread with the exact error messages you encounter? Thanks!
 
Hi.

I have installed the 5.0 beta on a spare server. Love it. :)

One thing: LXC containers do not autostart even though I have set onboot=1, both in the GUI and on the command line using pct.
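For reference, the flag was set on the CLI like this (the VMIDs are just examples):
Code:
pct set 101 -onboot 1   # LXC container
qm set 100 -onboot 1    # KVM guest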

I had an issue with zfs.conf being ignored, but updating the initramfs fixed that.
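For anyone else hitting the zfs.conf issue: module options in /etc/modprobe.d/zfs.conf are read from the initramfs at boot, so they only take effect after rebuilding it. A minimal sketch using the standard Debian tooling:
Code:
update-initramfs -u -k all   # bake the updated zfs.conf into all installed initramfs images
reboot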

EDIT: KVM VMs do not autostart either. The task log shows this:
Code:
Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
unknown VM type ''
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
unknown VM type ''
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
unknown VM type ''
TASK OK
 
Hi All,
I tried to install Proxmox VE 5.0 beta1 on a PowerEdge FC430 with a 32 GB SD card. The error during installation is shown below.
Proxmox VE 4.4 gives no error and installs fine!

[screenshots attached: proxmox2.jpg, proxmox.jpg]

Any ideas?

Best Regards
Roberto
 
Apart from the Proxmox repo, there are two more repos in sources.list.

Do we leave them as they are, or do they also need to point to stretch?

AFAIK yes, they need to be changed from "jessie" to "stretch".
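For the two Debian repos, the result would look something like the following; the mirror URLs are the stock Debian ones and shown only as an assumption about the original file. Note the security suite is named stretch/updates:
Code:
deb http://ftp.debian.org/debian stretch main contrib
deb http://security.debian.org stretch/updates main contrib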

I recommend doing the upgrade over SSH (e.g. with PuTTY) from a remote PC instead of the shell console within the web GUI. This way you can scroll back and review the upgrade session.

Before changing the repository, make sure to run "apt-get update && apt-get dist-upgrade" to bring Proxmox fully up to date FIRST, and then reboot. Then back up your VMs somewhere, and after changing the repository, run "apt-get update && apt-get dist-upgrade" again to upgrade to the Proxmox 5 beta. During the upgrade it will open a text file describing the changes; press "q" to exit it, and answer a couple of prompts about keeping your existing config files or accepting the maintainer's version. The whole sequence is sketched below.
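A condensed sketch of that sequence (the sed call assumes the repo names only appear in the stock .list files; double-check any custom entries by hand):
Code:
# 1) bring the 4.4 install fully up to date, then reboot
apt-get update && apt-get dist-upgrade
reboot
# 2) back up your VMs, then switch every repo from jessie to stretch
sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# 3) upgrade to the Proxmox 5 beta
apt-get update && apt-get dist-upgrade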

I did everything listed above and everything upgraded fine. The only kink: I noticed during the upgrade that it wanted to delete various directories but couldn't because they weren't empty. The process kept going, though, and everything turned out OK.

One minor issue I had: we use OpenDNS Family on our router, and for some reason it wouldn't resolve download.proxmox.com. I had to add Google DNS 8.8.8.8 and Level 3 DNS 4.2.2.2 in the Proxmox DNS settings before it resolved properly.

Good luck!
 
I tried to install Proxmox VE 5.0 beta1 on a PowerEdge FC430 with a 32 GB SD card. [...] Any ideas?

Did you try to install Proxmox from a USB stick or flash card? If so, it may not work. YMMV, but it's safest to install from a CD.
For some reason, installing from a USB flash drive works on some computers but not on others (an HP Z200 workstation, for example).
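If you do go the USB route, the Proxmox ISO needs to be written raw; tools like UNetbootin are known not to work. A sketch (the ISO filename and target device are examples, and the target device will be wiped):
Code:
dd if=proxmox-ve_5.0-beta1.iso of=/dev/sdX bs=1M conv=fdatasync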
 
I tried to install Proxmox VE 5.0 beta1 on a PowerEdge FC430 with a 32 GB SD card. The error during installation is shown in the screenshots above. [...] Any ideas?

Use a clean disk (e.g., remove any existing VGs and PVs, mdraid or zpool labels, and clean the partition table afterwards). This will be handled better in the next iteration of the installer.
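A sketch of what cleaning a disk can look like in practice; the device and partition names are examples, and every command here irreversibly destroys data on the target:
Code:
vgremove -f pve                      # drop a leftover LVM volume group, if present
pvremove -ff /dev/sdX3               # drop the old LVM physical volume
mdadm --zero-superblock /dev/sdX     # clear mdraid metadata, if any
zpool labelclear -f /dev/sdX         # clear ZFS labels, if any
wipefs -a /dev/sdX                   # remove remaining filesystem signatures
sgdisk --zap-all /dev/sdX            # wipe the GPT and MBR partition tables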
 
The installer fails on creating swap when using ZFS RAIDZ1 and then aborts. I'm using the 4.4 installer and doing a dist-upgrade to stretch instead; that works fine.
 
Could you post the complete error message please? The image is cut off. Feel free to open a new thread. If you boot in debug mode, you should be able to retrieve the whole installer log from /tmp/.

I just hit the same error, installing the PVE 5.0 beta over a previous 4.4 install, so not on a clean disk. I have the full screenshot. It was on a Dell PE R630, with /dev/sda being an SSD.
 

Attachment: installation_error_sda3_pve5beta.png
It seems to me that the BETA status was assigned a bit early; it's more like an ALPHA. I installed it on my home server, and when I try to install openmediavault in a KVM machine, the server reboots hard, as if someone pressed the reset button. There is nothing in the logs. On 4.4, everything runs fine.

In 5.0, during boot, it complains:
Code:
[    1.093094] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
[    1.093132] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT0._GTF] (Node ffff8cd8400cc988), AE_NOT_FOUND (20160930/psparse-543)
[    1.101338] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
[    1.101375] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff8cd8400cc7f8), AE_NOT_FOUND (20160930/psparse-543)
[    1.162970] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
[    1.163007] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT0._GTF] (Node ffff8cd8400cc988), AE_NOT_FOUND (20160930/psparse-543)
[    1.173535] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
[    1.173571] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff8cd8400cc7f8), AE_NOT_FOUND (20160930/psparse-543)
[   12.319188] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x0000000000000400-0x000000000000047F (\PMIO) (20160930/utaddress-247)
[   12.319193] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[   12.319195] ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)
[   12.319197] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[   12.319198] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)
[   12.319200] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[   12.319201] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)

And in 4.4 there are no such messages. I'm at a loss.
 
Installing on a Supermicro X11-ssi-ln4f motherboard, using the ISO mounted through the IPMI interface. Installation fails at 100%. The detailed log shows:
Code:
Errors were encountered while processing:
postfix
bsd-mailx
pve-manager
proxmox-ve
command 'chroot /target dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxinstall line 385

This happened first on a ZFS 2-drive mirror and then with installation on a single drive with ext4.
 
It seems to me that the BETA status was assigned a bit early; it's more like an ALPHA. I installed it on my home server, and when I try to install openmediavault in a KVM machine, the server reboots hard, as if someone pressed the reset button. There is nothing in the logs. On 4.4, everything runs fine.

At what point? If this error is reproducible, please open a new thread or bug report and we can figure out steps to debug it.

In 5.0, during boot, it complains with the ACPI errors and warnings quoted above; in 4.4 there are no such messages.

AFAICT, this is just the newer kernel making more noise about (potentially) buggy BIOS implementations. I get similar messages on my Skylake workstation, with no ill effects.
 
Installing on a Supermicro X11-ssi-ln4f motherboard, using the ISO mounted through the IPMI interface. Installation fails at 100%. [error log quoted above]

Please try booting in debug mode and open a new thread with the contents of /tmp/install.log after the installation has failed.
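One way to get the log off the box from the debug shell, assuming a USB stick shows up as /dev/sdb1 (the device name is an example):
Code:
mount /dev/sdb1 /mnt
cp /tmp/install.log /mnt/
umount /mnt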
 
The installer fails on creating swap when using ZFS RAIDZ1 and then aborts. I'm using the 4.4 installer and doing a dist-upgrade to stretch instead; that works fine.

Could you please open a new thread with the complete error message from the debug log?
 
Hi, I just installed Proxmox 5 beta 1 for the first time. I tried a few virtualization products, including VMware, and none of them work on the new AMD CPU (Ryzen); booting Linux on that CPU requires a recent kernel. Now my test lab is up and running.
 
I have just built a 4-node cluster on the 5.0 beta, with Ceph (Luminous), and I'm having live migration issues.

Initially, I was able to live migrate a VM from Node1 to Node2, and all went well...

When attempting to move it back from Node2 to Node1, I'm getting the following output:
Code:
Apr 10 13:42:31 starting migration of VM 100 to node 'Node1' (10.0.0.11)
Apr 10 13:42:31 copying disk images
Apr 10 13:42:31 starting VM 100 on remote node 'Node1'
Apr 10 13:42:33 start remote tunnel
Apr 10 13:42:33 starting online/live migration on unix:/run/qemu-server/100.migrate
Apr 10 13:42:33 migrate_set_speed: 8589934592
Apr 10 13:42:33 migrate_set_downtime: 0.1
Apr 10 13:42:33 set migration_caps
Apr 10 13:42:33 set cachesize: 107374182
Apr 10 13:42:33 start migrate command to unix:/run/qemu-server/100.migrate
channel 2: open failed: administratively prohibited: open failed

Apr 10 13:42:35 migration status error: failed
Apr 10 13:42:35 ERROR: online migrate failure - aborting
Apr 10 13:42:35 aborting phase 2 - cleanup resources
Apr 10 13:42:35 migrate_cancel
Apr 10 13:42:37 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

----
UPDATE:

These were fresh ISO installs. After updating the systems (apt-get update / apt-get dist-upgrade), the problem is resolved; SSHD was among the packages updated during the refresh.

Post-update, everything is now working as expected.
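In case it helps anyone who hits the same thing before updating: "administratively prohibited: open failed" is the message sshd returns when it refuses a forwarded channel, and PVE tunnels live migration traffic over SSH. A sketch of what to check and do on the nodes (default config path assumed):
Code:
grep -i AllowTcpForwarding /etc/ssh/sshd_config   # should be 'yes' or absent (default is yes)
apt-get update && apt-get dist-upgrade            # pull in the updated openssh packages
systemctl restart ssh                             # Debian's sshd unit name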
 
I just upgraded to Proxmox 5 from 4.4. "Start/Shutdown all" outputs this: pastebin.com/2RW3tkB5 (can't do bulk actions; tried a reboot, no luck).
pveversion -v output: pastebin.com/exXDBDVZ

I followed the exact upgrade guide on the wiki.
 
I just upgraded to Proxmox 5 from 4.4. "Start/Shutdown all" outputs this: pastebin.com/2RW3tkB5 (can't do bulk actions; tried a reboot, no luck). [...]

Known bug; it will be fixed with the next round of updates (probably later today).
 
