Proxmox VE 6.0 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jul 16, 2019.

  1. Richard Peters

    Richard Peters New Member
    Proxmox Subscriber

    Joined:
    Dec 24, 2016
    Messages:
    13
    Likes Received:
    0
    I upgraded 2 boxes. Both upgrades seemed to go smoothly, but on one machine I am unable to access the shell or the console for any of the VMs on it. I get the following error:

    /root/.ssh/config line 1: Bad SSH2 cipher spec 'blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc'.

    The /root/.ssh/config file on both machines is identical:
    Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

    I can access the machine remotely by ssh and I can also ssh into each of the VMs. I just can't access the shell or the consoles from the GUI.

    Any help would be appreciated.
     
  2. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,296
    Likes Received:
    188
    Most of it happens in access control, but as the keys live on the clustered configuration filesystem (pmxcfs) and the rotation must be locked (so that multiple nodes don't try to rotate at the same time), that one is involved too. The access control code path only gets triggered on logins, but we wanted rotations to happen even if no login occurred, so the pvestatd daemon also calls the rotation method if the key is older than 24 hours. So that one is involved as well, but only indirectly (and it would work without it, too).
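    Roughly, the pattern looks like this (an illustrative shell sketch only, not the actual PVE code; the lock file path is made up, while /etc/pve/priv/authkey.key is where PVE keeps its ticket-signing key):

    Code:
    # illustrative sketch -- NOT the real implementation
    (
        flock -n 9 || exit 0          # another node is rotating: skip
        KEY=/etc/pve/priv/authkey.key
        # rotate only if the key is older than 24h (1440 minutes)
        if [ -n "$(find "$KEY" -mmin +1440 2>/dev/null)" ]; then
            openssl genrsa -out "$KEY.tmp" 2048 && mv "$KEY.tmp" "$KEY"
        fi
    ) 9>/var/lock/authkey-rotate.lock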
     
  3. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,296
    Likes Received:
    188
    Did you use our pve5to6 checklist script? It normally warns about lines like that. (If not, let this be a reminder for your remaining nodes and for others: please use it!)

    You can just delete that line.
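    For example (a hedged sketch; back up the file first, and the sed pattern assumes the line starts with "Ciphers"):

    Code:
    # run the upgrade checklist (on the 5.4 node, before upgrading)
    pve5to6
    # back up, then comment out the offending Ciphers line
    cp /root/.ssh/config /root/.ssh/config.bak
    sed -i '/^Ciphers /s/^/#/' /root/.ssh/config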
     
  4. Stefano Giunchi

    Proxmox Subscriber

    Joined:
    Jan 17, 2016
    Messages:
    46
    Likes Received:
    2
    I see that corosync v3 no longer supports multicast. AFAIK unicast, with corosync v2, was suggested only for a maximum of 4 nodes. Is that still true with v3? What are the new limits?
     
    bfwdd likes this.
  5. stark

    stark New Member

    Joined:
    Feb 23, 2019
    Messages:
    5
    Likes Received:
    2
    Anyone else using Intel X520 10Gbit cards? I'm in the middle of upgrading my three nodes to v6 and while everything appears to be going very, very well, checking kernel logs reveals:
    Code:
    ixgbe 0000:03:00.0: Warning firmware error detected FWSW: 0x00000000
    It doesn't appear to cause any issues, but the error appears every second and makes syslog unreadable.
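    As a stopgap until there is a real fix, an rsyslog filter can drop the message so syslog stays readable (a hedged sketch; the config filename is arbitrary):

    Code:
    # /etc/rsyslog.d/10-drop-ixgbe-fw.conf
    :msg, contains, "Warning firmware error detected FWSW" stop

    then: systemctl restart rsyslog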
     
  6. ShinigamiLY

    ShinigamiLY New Member

    Joined:
    Jul 16, 2019
    Messages:
    3
    Likes Received:
    0
    Any way to check if TRIM is working?
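    (One general way to check on the host, not Proxmox-specific, would be something like:)

    Code:
    # does the device advertise discard support at all?
    lsblk --discard     # non-zero DISC-GRAN/DISC-MAX means TRIM is supported
    # try a manual trim on all mounted filesystems, verbosely
    fstrim -av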
     
  7. Richard Peters

    Richard Peters New Member
    Proxmox Subscriber

    Joined:
    Dec 24, 2016
    Messages:
    13
    Likes Received:
    0
    So I had run pve5to6 on one of the machines. They have the same configuration. One works, one doesn't. I deleted the line as you suggested and I still have the same problem.
     
  8. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,296
    Likes Received:
    188
    Depending on the switches used (latency and packets per second matter much more than bandwidth), one should be able to run about 16 to 20 nodes with commodity hardware. Very fast (dedicated) switches and NICs plus dedicated CPU processing power can help to achieve more, but at that point it may be easier to have fewer but "bigger" nodes.

    Also, kronosnet has some plans to (re-)integrate multicast, but there is no clear timeline; if that happens, PVE will try hard to support both.
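    For reference, corosync 3 on PVE 6 uses the kronosnet transport; the totem section of /etc/pve/corosync.conf looks roughly like this (a trimmed example with a made-up cluster name):

    Code:
    totem {
        cluster_name: examplecluster
        config_version: 3
        interface {
            linknumber: 0
        }
        ip_version: ipv4
        secauth: on
        transport: knet    # kronosnet -- unicast only, for now
        version: 2
    }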
     
  9. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    1,296
    Likes Received:
    188
    OK, I have to say I misread your post and thought you had issues with the host shell, not the VM/CT console/shell. Firewall issue? Does the VM/CT still run?
    You should probably open a new thread about this; it's easier to help there, with less "noise" here.
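    A quick first check could be (100 and 101 are example VMIDs):

    Code:
    # are the guests actually running?
    qm status 100       # for a VM
    pct status 101      # for a container
    # is the console/web proxy on the node healthy?
    systemctl status pveproxy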
     
  10. Richard Peters

    Richard Peters New Member
    Proxmox Subscriber

    Joined:
    Dec 24, 2016
    Messages:
    13
    Likes Received:
    0
    I neglected to mention that the 2 machines are in a cluster. When I deleted the contents of /root/.ssh/config on both machines everything worked. Thanks for your help.
     
  11. IxsharpxI

    IxsharpxI New Member

    Joined:
    Jun 18, 2019
    Messages:
    2
    Likes Received:
    0
    I get this at the bottom of apt update:

    E: The repository 'https://enterprise.proxmox.com/debian/pve buster Release' does not have a Release file.
    N: Updating from such a repository can't be done securely, and is therefore disabled by default.
    N: See apt-secure(8) manpage for repository creation and user configuration details.

    Is this what I'm supposed to see?
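    (This happens when the enterprise repository is configured without a valid subscription. A common fix, sketched here, is to switch to the less-tested no-subscription repository:)

    Code:
    # disable the enterprise repo (it needs a valid subscription)
    sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
    # add the no-subscription repo instead
    echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list
    apt update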
     
  12. txsastre

    txsastre New Member

    Joined:
    Jan 6, 2015
    Messages:
    23
    Likes Received:
    2
    Does it have a multi-datacenter GUI?
     
    hacman likes this.
  13. Compizfox

    Compizfox New Member

    Joined:
    Apr 15, 2017
    Messages:
    9
    Likes Received:
    0
    I'm trying to upgrade a Proxmox 5.4 box to 6.0. pve5to6 did not report any problems.

    My apt is stuck on:

    Code:
    Setting up lxc-pve (3.1.0-61)
    It has been running for over an hour now and doesn't seem to get anywhere. Any advice?
     
  14. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,642
    Likes Received:
    420
  15. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,642
    Likes Received:
    420
    Not in this release.
     
    hacman likes this.
  16. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    check with "ps faxl" or similar where the configure call (it is one of the children of the running apt process) is blocking..
     
  17. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    This and similar issues are the main reason we switched to a non-Grub UEFI bootloader setup with 6.0. The Grub ZFS implementation is nearly unmaintained and severely behind the actual ZFS on Linux code base (it is basically a read-only parser for the on-disk ZFS data structures from a couple of years ago), with some as-yet-unfixed but hard/impossible to reproduce bugs that can lead to unbootable pools. All the writes from the dist-upgrade probably made some on-disk structure line up in exactly one of the ways that Grub chokes on. You can try to randomly copy/move/... the kernel and initrd files in /boot around, in the hope that they get rewritten in a way that Grub "likes" again.

    But the sensible way forward, if you still have free space (or even ESP partitions, e.g. if the server was set up with 5.4) on your vdev disks, is to use "pve-efiboot-tool" to opt into the new bootloader setup. If that is not an option, you likely need to set up some sort of extra boot device that is not on ZFS, or move to the new bootloader setup via backup - reinstall - restore. We tried hard to investigate and fix these issues within Grub (I lost track of the number of hours I spent digging through Grub debug output via a serial console sometime last year, and can personally attest that there are many, many more fun ways to spend your time ;)), but in the end it is sometimes easier to cut your losses and start from scratch. As an intermediate solution / quick fix to get your system booted again, consider moving or copying your /boot partition to some external medium like a high-quality USB disk, or a spare disk if you have one.
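    Opting in looks roughly like this (a hedged sketch; /dev/sdX2 stands for a suitable ESP partition on one of your vdev disks):

    Code:
    # format the ESP and register it with the new bootloader tooling
    pve-efiboot-tool format /dev/sdX2
    pve-efiboot-tool init /dev/sdX2
    # copy the current kernels/initrds onto all registered ESPs
    pve-efiboot-tool refresh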
     
  18. noname

    noname New Member

    Joined:
    May 14, 2014
    Messages:
    14
    Likes Received:
    0
    What great news, thanks to the whole Proxmox team! Question: is ZFS root on NVMe disks ready for production use?
     
  19. Onigami

    Onigami New Member

    Joined:
    Jan 14, 2017
    Messages:
    6
    Likes Received:
    1
    Hello guys! I just installed Proxmox 6 on a four-disk RAIDZ2 and everything went fine!

    So glad to finally be able to install Proxmox with UEFI and ZFS!
     
  20. Jabes

    Jabes New Member

    Joined:
    Jul 11, 2019
    Messages:
    2
    Likes Received:
    1
    I’m using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!
     
    RokaKen likes this.