Proxmox VE 4.0 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Oct 6, 2015.

  1. screenie

    screenie Member

    Joined:
    Jul 21, 2009
    Messages:
    146
    Likes Received:
    0
We have a couple of multi-node clusters running the latest 3.4 without any issues and tried to re-install one 4-node cluster of them with PVE 4.
The base install was straightforward, but we ran into issues with quorum when creating the cluster - all nodes were set up identically, but one node couldn't join the cluster successfully - it hung at 'waiting for quorum...'. The other nodes added it, but the node itself did nothing and syslog showed:
Tried several times to delete and re-add the node, but no luck, so I installed the node from scratch, and when adding it I had the same issue again - with the -force option it was able to join.
While testing, I rebooted another node and after that containers could not be started on this node - also found quorum messages in syslog:
and this message again:
After rebooting the node again it had no quorum issue.
The same happened again on another node after a reboot - rebooting again and the quorum issue is gone.
It seems clustering/quorum is not as reliable as in 3.4, where I never saw this issue on any node.
Has anything changed, or any idea what could cause this issue?
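
For reference, this is roughly the sequence used to create the cluster and add the nodes (just a sketch - the cluster name and IP below are placeholders, not our real ones):
Code:
# on the first node
pvecm create testcluster

# on each additional node, pointing at the first node's IP
pvecm add 192.0.2.10

# the problem node would only join with the force flag
pvecm add 192.0.2.10 -force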

Also, not having the live migration feature implemented for containers made me decide to go back to 3.4.
It seems LXC has to improve its tools to be useful; stopping containers to be able to move them is a no-go for us - we will stick with OpenVZ for the moment.
     
  2. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,323
    Likes Received:
    135
Maybe you have a multicast problem.

What is the result of

# pvecm status

?
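
If you want to rule out a multicast problem, one quick check (just a sketch - the omping package needs to be installed, and the hostnames are placeholders for your own cluster nodes) is to run omping on all nodes at the same time and see whether the multicast packets arrive:
Code:
# run simultaneously on every cluster node, listing all node hostnames
omping node1 node2 node3 node4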
     
  3. screenie

    screenie Member

    Joined:
    Jul 21, 2009
    Messages:
    146
    Likes Received:
    0
This is how it looked on the remaining PVE 4 nodes:
And the 3.4 cluster is also running cluster sync via multicast, where I never had a problem before - nothing has changed on the infrastructure.
     
  4. adoII

    adoII Member

    Joined:
    Jan 28, 2010
    Messages:
    124
    Likes Received:
    0
Hi Spirit,
did you get a chance to write down how you did your upgrade?

I am also looking for the best method to upgrade a 5-node cluster from Proxmox 3 to Proxmox 4 on the fly.
     
  5. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,323
    Likes Received:
    135

    Hi,

I have finished migrating a small 5-node cluster from Proxmox 3 to Proxmox 4,
using QEMU live migration.



Here is the howto:

    Code:
requirements:
-------------
external storage (nfs, ceph).
I haven't tested with clvm + iscsi, or local ceph (which should work)


1) Upgrade a first node to proxmox 4.0 and recreate cluster
------------------------------------------------------------
Have an empty node,
then upgrade it to proxmox 4.0, following the current wiki


# apt-get update && apt-get dist-upgrade
# apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
# apt-get update
# apt-get install pve-kernel-4.2.2-1-pve
# apt-get dist-upgrade

reboot

# apt-get install proxmox-ve
# apt-get remove pve-kernel-2.6.32-41-pve

# pvecm create <clustername>


2) Upgrade second node
----------------------
# apt-get update && apt-get dist-upgrade
# apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
# apt-get update
# apt-get install pve-kernel-4.2.2-1-pve
# apt-get dist-upgrade


---> no reboot here

# apt-get install proxmox-ve

if apt returns an error like:

"Setting up pve-manager (4.0-48) ...
Failed to get D-Bus connection: Unknown error -1
Failed to get D-Bus connection: Unknown error -1
dpkg: error processing package pve-manager (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of proxmox-ve:
proxmox-ve depends on pve-manager; however:
Package pve-manager is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:"


then:

# touch /proxmox_install_mode
# apt-get install proxmox-ve
# rm /proxmox_install_mode

now the tricky part

mount /etc/pve

# /usr/bin/pmxcfs -l

add the node to the cluster

# pvecm add ipofnode1 -force

stop the old corosync and delete the old config
# killall -9 corosync
# /etc/init.d/pve-cluster stop
# rm /var/lib/pve-cluster/config.db*

start the new corosync and pve-cluster

# corosync
# /etc/init.d/pve-cluster start

verify that you can write in /etc/pve/ and that it is correctly replicated to the other proxmox4 nodes
# touch /etc/pve/test.txt
# rm /etc/pve/test.txt

migrate vms (do it for each vmid)

# qm migrate <vmid> <target_proxmox4_server> -online

(migration must be done with the cli, because pvestatd can't start without systemd, so the gui is not working)
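
optional: if a node hosts many vms, a small shell loop can run the migrate step for all of them
(just a sketch, not part of the original howto - replace pve4node1 with your own upgraded target node; it migrates every vmid that 'qm list' shows on this node)

# for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm migrate $vmid pve4node1 -online; done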
    
    
# reboot node

3) do the same thing for the next node(s)

4) when all nodes are migrated, remove

# rm /etc/pve/cluster.conf
    
    
     
    #85 spirit, Oct 27, 2015
    Last edited: Dec 2, 2015
  6. HBO

    HBO Member

    Joined:
    Dec 15, 2014
    Messages:
    247
    Likes Received:
    7
    ...
     
    #86 HBO, Oct 30, 2015
    Last edited: Nov 2, 2015
  7. p1v12002

    p1v12002 New Member

    Joined:
    Jun 17, 2014
    Messages:
    7
    Likes Received:
    1
Hi everybody,

has anyone encountered problems during installation on an HP ProLiant DL385 G8 server with a P420i RAID array controller?
I'm seeing a "p420i lock up code 0x13" during the installation.
I have tried updating the server's firmware with the latest SPP 2015.10.0, including RAID controller firmware version 6.68, without improvement.
I would like to point out that the previous Proxmox VE 3.4 works fine.

Thanks and regards
Paolo Vola
     
  8. morph027

    morph027 Active Member

    Joined:
    Mar 22, 2013
    Messages:
    413
    Likes Received:
    51
As this HP hotfix is from 2013, I guess it's included in your firmware... But here (Gen9, but basically the same, I guess) the error re-occurs. It looks more related to the controller firmware than to the kernel.
     
  9. p1v12002

    p1v12002 New Member

    Joined:
    Jun 17, 2014
    Messages:
    7
    Likes Received:
    1
Hi all,

I solved the array controller problem with a special patch. Now the installation stops at the first Agreement screen (mouse pointer locked) due to USB keyboard and mouse driver issues.
Has anyone encountered similar problems?
Thanks in advance and regards
paolo
     
  10. SamTzu

    SamTzu Member

    Joined:
    Mar 27, 2009
    Messages:
    356
    Likes Received:
    6
    Docker uses LXC so goodbye old OpenVZ.
     
  11. Phinitris

    Phinitris Member

    Joined:
    Jun 1, 2014
    Messages:
    83
    Likes Received:
    11
Hey,
I don't think that OpenVZ will die. Especially for hosting, I think it's better than LXC because it supports CPULIMIT, IOLIMIT and IOPSLIMIT.
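
For context, these are the kind of per-container limits meant here (just a sketch with vzctl on an OpenVZ node - the container ID 101 and the values are only examples):
Code:
# cap CT 101 at 100% of one CPU, 10 MB/s of disk I/O and 300 IOPS
vzctl set 101 --cpulimit 100 --save
vzctl set 101 --iolimit 10M --save
vzctl set 101 --iopslimit 300 --save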

Currently running a quite stable Proxmox OpenVZ ploop cluster with integration for Graphite/Grafana, traffic accounting/calculation, rsync backups,
Duo Security authentication and much more. Can't complain.

    http://prntscr.com/94p6hv

I hope that OpenVZ will release Virtuozzo 7 (3.1 kernel).

    Best regards,
    Phinitris
     
  12. yield

    yield New Member

    Joined:
    Nov 19, 2015
    Messages:
    1
    Likes Received:
    0
How about VDI on Proxmox 4.0?
     
  13. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all cluster configuration such as users, permissions, etc.?
I can't understand your strategy with this release. Think of people having clusters with dozens of nodes... impossible to upgrade.
     
  14. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    999
    Likes Received:
    24
The HA stack is a lot different and a ton of it is new. There honestly was no upgrade path for them. We have tons of PVE 3.4 clusters out in the field, so I feel your pain, but there is nothing that can be done.

Doesn't matter if you have 3 nodes or 16, the upgrade procedure is going to be the same. Is the issue just downtime?
     
  15. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
If I use the procedure provided by spirit, it seems there's no downtime, right?

The problem is that the upgrade needs to be done "by hand"; it's impossible to automate with Ansible, for example.

And during the upgrade, we have 2 clusters, not 1....
     
  16. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    999
    Likes Received:
    24
I wasn't aware of an online procedure; I have been doing all of mine offline. I would never want to automate an upgrade - that sounds like a bad situation waiting to happen.

Not sure what the issue is with having 2 clusters, do 1 at a time?
     
  17. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
Yes, I do 1 cluster at a time, but read the procedure: it's impossible to mix 3.4 & 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node. So during the upgrade, you have 2 clusters instead of one.
When you have thousands of nodes, you can't do it by hand... that's not my case, but it seems that Proxmox is not used on large clusters...
     
  18. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    999
    Likes Received:
    24
Ahhh ok, understood. Yeah, I don't know of anyone running 1000s of nodes, or even 100s for that matter.
     
  19. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,459
    Likes Received:
    310
We always try hard to make updates as easy as possible. But this time, some projects we depend on made totally incompatible changes.
That makes it impossible to provide a fully automatic update.
     
  20. Florent

    Florent Member

    Joined:
    Apr 3, 2012
    Messages:
    91
    Likes Received:
    2
OK, I'm just saying that's unusable in a production environment.

Hi spirit, thank you for your how-to, but I don't think it can work.
When you run "pvecm add ipofnode1 -force" on a non-rebooted node, it will fail because it calls 'systemctl stop pve-cluster' and systemctl does not work yet (system not rebooted):
    Code:
    pvecm add 192.168.0.203 -force
    node test2 already defined
    copy corosync auth key
    stopping pve-cluster service
    Failed to get D-Bus connection: Unknown error -1
    can't stop pve-cluster service
     