Proxmox VE 5.0 beta1 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Mar 22, 2017.

  1. speleo

    speleo New Member

    Joined:
    Apr 2, 2017
    Messages:
    1
    Likes Received:
    0
    Thanks for providing the Luminous repository.
    I had installed Luminous 12.0.1 from the upstream Ceph repositories on my PVE 4.4 cluster before I upgraded one machine to PVE 5.0 beta.
    Unfortunately the OSDs did not restart after the reboot, so I decided to downgrade to the PVE Luminous repository (12.0.0) that fabian provided the link for.
    Now the mon as well as the OSDs come up, but they can't join the Ceph cluster due to some version mismatch.
    Is there a git repository with the patches you applied to Luminous 12.0.0 when creating the ceph-luminous repository, so that I can patch and recompile Luminous 12.0.1 myself?
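
    For anyone else stuck in the same half-upgraded state, here is a minimal sketch of forcing packages back to a specific repository build with apt (the version string 12.0.0-pve1 is hypothetical; use whatever "apt-cache policy" actually lists on your system):

    Code:
    # show which versions/repositories apt knows about for a package
    apt-cache policy ceph-osd
    # explicitly install the repository's build (hypothetical version string)
    apt-get update
    apt-get install ceph-osd=12.0.0-pve1 ceph-mon=12.0.0-pve1 ceph-common=12.0.0-pve1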
     
  2. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    22
    Likes Received:
    4
    The console disconnects every few seconds. Other things seem to have worked fine, like restoring a VM from backup. For the console I have tested with Firefox and Chrome; the longest connection I had was about 10 seconds. It disconnects and reconnects about 3 times, then fails with code 1006. This is on a local network and there does not seem to be any network congestion.

    NOTE: it was the dang cookie. Sorry for the confusion, it was very late. I ended up installing 4.4 and was still having the issue before I remembered to wipe the auth cookie.
     
    #42 zedicus, Apr 3, 2017
    Last edited: Apr 3, 2017
  3. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    sorry, it seems that was not mirrored yet. the git repository should be up soon at https://git.proxmox.com/?p=ceph.git;a=tree
    updated packages for 12.0.1 are also available via APT repositories now.
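
    once it is mirrored, cloning should work along these lines (assuming the usual clone path layout on git.proxmox.com):

    Code:
    git clone git://git.proxmox.com/git/ceph.git
    cd ceph
    git log --oneline    # the Proxmox-specific patches sit on top of the upstream history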

    could you open a new thread with the exact error messages you encounter? thanks!
     
  4. Bards

    Bards New Member

    Joined:
    Apr 4, 2017
    Messages:
    1
    Likes Received:
    0
    Hi.

    I have installed the 5.0 beta on a spare server. Love it. :)

    One thing: LXC containers do not autostart even though I have set onboot=1, both in the GUI and on the command line using pct (exact commands below).
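
    For reference, this is how I set the flag (VMIDs 100 and 101 are just examples from my setup):

    Code:
    # container
    pct set 101 --onboot 1
    # KVM VM
    qm set 100 --onboot 1
    # verify the flag landed in the configs
    grep onboot /etc/pve/lxc/101.conf /etc/pve/qemu-server/100.conf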

    I had an issue with zfs.conf being ignored, but updating the initramfs fixed that.

    EDIT: KVM VMs do not autostart either. The task log shows this:

    Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
    Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
    Can't call method "has_lock" on an undefined value at /usr/share/perl5/PVE/API2/Nodes.pm line 1300.
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
    unknown VM type ''
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
    unknown VM type ''
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1382.
    Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1386.
    Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1392.
    unknown VM type ''
    TASK OK
     
    #44 Bards, Apr 4, 2017
    Last edited: Apr 4, 2017
  5. Roberto Legname

    Roberto Legname New Member

    Joined:
    Apr 4, 2017
    Messages:
    2
    Likes Received:
    0
    Hi All,
    I tried to install Proxmox VE 5.0 beta1 on a PowerEdge FC430 with a 32 GB SD card. The error during installation is shown in the attached screenshots.
    Proxmox VE 4.4 gives no error and installs fine!

    [screenshots attached: proxmox2.jpg, proxmox.jpg]

    Any ideas?

    Best Regards
    Roberto
     
  6. GadgetPig

    GadgetPig Member

    Joined:
    Apr 26, 2016
    Messages:
    138
    Likes Received:
    19
    AFAIK yes, they need to be changed from "jessie" to "stretch".

    I recommend doing the upgrade over SSH (e.g. PuTTY from a remote PC) instead of the shell console within the web GUI. This way you can scroll back and review the session/upgrade process.

    Before changing the repository, run "apt-get update && apt-get dist-upgrade" to bring Proxmox 4.x fully up to date FIRST, then reboot. Then back up your VMs somewhere, and after changing the repository, run "apt-get update && apt-get dist-upgrade" again to upgrade to the Proxmox 5 beta (see the sketch below). During the upgrade it will open a text file describing the changes; press "Q" to exit it, and answer a couple of prompts about keeping existing config files or accepting the maintainer's version.
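
    A minimal sketch of the repository switch (assuming the stock repository files; adjust the paths if your .list files are named differently):

    Code:
    # point all jessie entries at stretch
    sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
    # then pull in the 5.0 beta packages
    apt-get update && apt-get dist-upgrade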

    I did everything listed above and everything upgraded fine. The only kink: during the upgrade it wanted to delete various directories but couldn't because they weren't empty. The process kept going, though, and everything turned out OK.

    One minor issue I had: we use OpenDNS Family on our router, and for some reason it wouldn't resolve download.proxmox.com. I had to add Google DNS 8.8.8.8 and Level 3 DNS 4.2.2.2 in the Proxmox DNS settings before it resolved properly.
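
    The resolvers end up in the node's /etc/resolv.conf; mine looks roughly like this after the change (set via the node's DNS tab in the GUI):

    Code:
    nameserver 8.8.8.8
    nameserver 4.2.2.2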

    Good luck!
     
  7. GadgetPig

    GadgetPig Member

    Joined:
    Apr 26, 2016
    Messages:
    138
    Likes Received:
    19
    Did you try to install Proxmox from a USB stick or flash card? If so, it may not work; YMMV, but it is best to install from a CD.
    For some reason, installing from a USB flash drive works on some computers but not on others (an HP Z200 workstation, for example).
     
    Brononius likes this.
  8. morph027

    morph027 Active Member

    Joined:
    Mar 22, 2013
    Messages:
    424
    Likes Received:
    52
  9. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    use a clean disk (e.g., remove any existing VGs and PVs, mdraid or zpool labels, and clean the partition table afterwards; a rough sketch below). this will be handled better in the next iteration of the installer.
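
    something along these lines wipes the relevant metadata (DANGER: this irrevocably destroys all data on the disk; /dev/sdX is a placeholder for your install disk):

    Code:
    # remove any mdraid superblock (if the disk was a raid member)
    mdadm --zero-superblock /dev/sdX
    # clear zfs, lvm and filesystem signatures
    wipefs --all /dev/sdX
    # zap the partition table (both GPT and MBR structures)
    sgdisk --zap-all /dev/sdX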
     
  10. efeu

    efeu Member
    Proxmox Subscriber

    Joined:
    Nov 6, 2015
    Messages:
    74
    Likes Received:
    6
    The installer fails on creating swap when using ZFS RAIDZ1 and then aborts. I'm using the 4.4 installer and doing a dist-upgrade to Stretch instead; this works fine.
     
  11. alain

    alain Member

    Joined:
    May 17, 2009
    Messages:
    208
    Likes Received:
    0
    I just hit the same error, installing PVE 5.0 beta over a previous 4.4 install, so not on a clean disk. I have the full screenshot. It was on a Dell PE R630, /dev/sda being an SSD.
     

    Attached Files:

  12. kolombo

    kolombo New Member

    Joined:
    Feb 18, 2013
    Messages:
    8
    Likes Received:
    0
    It seems to me that the BETA label was assigned a bit early; it's more like an ALPHA. I installed it on my home server, and when I try to install openmediavault in a KVM machine, the server reboots hard, as if someone had pressed reset. There is nothing in the logs. On 4.4, everything runs fine.

    In 5.0, it complains during boot:
    Code:
    [    1.093094] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
    [    1.093132] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT0._GTF] (Node ffff8cd8400cc988), AE_NOT_FOUND (20160930/psparse-543)
    [    1.101338] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
    [    1.101375] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff8cd8400cc7f8), AE_NOT_FOUND (20160930/psparse-543)
    [    1.162970] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
    [    1.163007] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT0._GTF] (Node ffff8cd8400cc988), AE_NOT_FOUND (20160930/psparse-543)
    [    1.173535] ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20160930/psargs-359)
    [    1.173571] ACPI Error: Method parse/execution failed [\_SB.PCI0.SAT0.SPT5._GTF] (Node ffff8cd8400cc7f8), AE_NOT_FOUND (20160930/psparse-543)
    [   12.319188] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x0000000000000400-0x000000000000047F (\PMIO) (20160930/utaddress-247)
    [   12.319193] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [   12.319195] ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)
    [   12.319197] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [   12.319198] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)
    [   12.319200] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
    [   12.319201] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GPIO) (20160930/utaddress-247)
    

    And in 4.4 there are no such messages. I'm a bit discouraged.
     
  13. gaz

    gaz New Member

    Joined:
    Aug 20, 2016
    Messages:
    14
    Likes Received:
    1
    Installing on a Supermicro X11-ssi-ln4f motherboard, using the ISO mounted through the IPMI interface. Installation fails at 100%. The detail log shows:

    Errors were encountered while processing:
    postfix
    bsd-mailx
    pve-manager
    proxmox-ve
    command 'chroot /target dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxinstall line 385

    This happened first on a ZFS 2-drive mirror and then with installation on a single drive with ext4.
     
  14. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    at what point? if this error is reproducible, please open a new thread or bug report and we can figure out steps to debug this.

    AFAICT, this is just the newer kernel being noisier about (potentially) buggy BIOS implementations. I get similar messages on my Skylake workstation, with no ill effects.
     
  15. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    please try booting in debug mode and open a new thread with the contents of /tmp/install.log after the installation has failed.
     
  16. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    could you please open a new thread with the complete error message from the debug log?
     
  17. Nicerocko

    Nicerocko New Member

    Joined:
    Apr 10, 2017
    Messages:
    1
    Likes Received:
    0
    Hi, I just installed Proxmox 5 beta1 for the first time. I tried a few virtualization products, including VMware, and none of them work on the new AMD Ryzen CPUs; to boot Linux on those CPUs you need a recent kernel. Now my test lab is up and running.
     
  18. Alan Robertson

    Alan Robertson New Member

    Joined:
    Apr 10, 2017
    Messages:
    3
    Likes Received:
    0
    Have just built a 4-node cluster on the 5.0 beta, with Ceph (Luminous)... I'm having live migration issues.

    Initially, I was able to live migrate a VM from Node1 to Node2, and all went well...

    When attempting to move it back from Node2 to Node1, I'm getting the following output:

    Apr 10 13:42:31 starting migration of VM 100 to node 'Node1' (10.0.0.11)
    Apr 10 13:42:31 copying disk images
    Apr 10 13:42:31 starting VM 100 on remote node 'Node1'
    Apr 10 13:42:33 start remote tunnel
    Apr 10 13:42:33 starting online/live migration on unix:/run/qemu-server/100.migrate
    Apr 10 13:42:33 migrate_set_speed: 8589934592
    Apr 10 13:42:33 migrate_set_downtime: 0.1
    Apr 10 13:42:33 set migration_caps
    Apr 10 13:42:33 set cachesize: 107374182
    Apr 10 13:42:33 start migrate command to unix:/run/qemu-server/100.migrate
    channel 2: open failed: administratively prohibited: open failed

    Apr 10 13:42:35 migration status error: failed
    Apr 10 13:42:35 ERROR: online migrate failure - aborting
    Apr 10 13:42:35 aborting phase 2 - cleanup resources
    Apr 10 13:42:35 migrate_cancel
    Apr 10 13:42:37 ERROR: migration finished with problems (duration 00:00:06)
    TASK ERROR: migration problems

    ----
    UPDATE:

    These were fresh ISO installs. After bringing the systems fully up to date with apt-get, the problem is resolved... sshd was among the packages updated during the refresh.

    Post update, everything is now working as expected.
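
    For anyone hitting the same "administratively prohibited" tunnel error on fresh beta installs: the fix was simply updating every node (a sketch, assuming the standard beta repositories are already configured):

    Code:
    # run on every node in the cluster
    apt-get update && apt-get dist-upgrade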
     
    #58 Alan Robertson, Apr 10, 2017
    Last edited: Apr 10, 2017
  19. TheXO

    TheXO New Member

    Joined:
    Apr 11, 2017
    Messages:
    2
    Likes Received:
    0
    I just upgraded to Proxmox 5 from 4.4. "Start all"/"Shutdown all" output this: pastebin.com/2RW3tkB5 (I can't do bulk actions; I tried rebooting, no luck).
    pveversion -v output: pastebin.com/exXDBDVZ

    I followed the exact update/upgrade guide on the wiki.
     
  20. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    known bug, will be fixed with the next round of updates (probably later today)
     