Proxmox VE 3.4 Upgrade to 4.x

mr.x

Well-Known Member
Hi all,

I updated my sources.list with the following line:
Code:
deb http://download.proxmox.com/debian jessie pvetest
and did an upgrade.
Unfortunately PVE is now broken. I'm sorry, but I cannot provide any details from pveversion as it is not available anymore :-/

Code:
root@server:/# pve <TAB><TAB>
pvecm  pvesm

Right now I see this error message while running the update process:

Code:
Setting up pve-cluster (4.0-21) ...
Restarting pve cluster filesystem: pve-cluster.
Can't locate PVE/RPCEnvironment.pm in @INC (you may need to install the PVE::RPCEnvironment module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.20.2 /usr/local/share/perl/5.20.2 /usr/lib/x86_64-linux-gnu/perl5/5.20 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.20 /usr/share/perl/5.20 /usr/local/lib/site_perl .) at /usr/share/perl5/PVE/CLI/pvecm.pm line 16.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/CLI/pvecm.pm line 16.
Compilation failed in require at /usr/bin/pvecm line 8.
BEGIN failed--compilation aborted at /usr/bin/pvecm line 8.
dpkg: error processing package pve-cluster (--configure):
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 pve-cluster
E: Sub-process /usr/bin/dpkg returned an error code (1)

Any ideas how to fix this?

Br
Mr.X
 
We will provide an update script quite soon, including a wiki page.

If you want to fix your current broken box, try the following (very short howto, experts only, no guarantee):


  1. remove the Proxmox VE 3.4 packages (apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware)
  2. adapt your sources.list, pointing everything to jessie (see the example below)
  3. apt-get install pve-kernel-4.2.1-1-pve pve-firmware
  4. apt-get dist-upgrade
  5. reboot into the 4.2.1 kernel
  6. apt-get install proxmox-ve
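
For step 2, a jessie sources.list could look something like this (a minimal sketch; the mirror and the Proxmox repository, pvetest in your case, may differ for your setup):

Code:
deb http://ftp.debian.org/debian jessie main contrib
deb http://security.debian.org jessie/updates main contrib
deb http://download.proxmox.com/debian jessie pvetest

After the reboot in step 5, uname -r should report the 4.2.1 pve kernel.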
 
Hi Tom,

thanks for your fast reply!

Will my VMs survive such a procedure?

Br
Mr.X
 
Yes, but you should have valid backups before you start unsupported upgrades anyway.
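
For example with vzdump (VMID 100 and the storage name "backup" are placeholders here):

Code:
# full backup of guest 100 to a configured backup storage
vzdump 100 --compress lzo --storage backup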
 
Hi Tom,

Short feedback: all went fine except for two small steps I needed to execute before your first one:

0.8 dpkg -r vzctl
0.9 dpkg -r pve-cluster

After the upgrade, Proxmox was not started automatically, so I started all the services by hand.
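In case someone runs into the same, starting them by hand on PVE 4.x (systemd) looked roughly like this for me (your list of services may differ):

Code:
systemctl start pve-cluster
systemctl start pvedaemon pveproxy pvestatd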
Thanks again, and have a nice weekend !

Br
Mr.X
 
I like living on the edge, so I think I'll try the same.

Worst case, I can always reinstall over the top. I should have installed v4 to start with, but went with 3.x even though it's just a personal box (i.e. not doing anything critical).
 
Good luck!
 
Seems to be working, other than an issue with LXC, but I think I know why that is.

I think it's because I had a second zpool whose root was mounted locally as a directory; ZFS seems to be automounting the child datasets in the wrong directory, which is tripping up the container creation scripts.
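
If that's really the cause, the mountpoints should show it; something like this (the pool name tank2 and the path are placeholders) is what I'd check and fix:

Code:
# show where ZFS thinks each dataset belongs
zfs list -o name,mountpoint,mounted
# pin the second pool to an explicit path (children inherit it)
zfs set mountpoint=/mnt/tank2 tank2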