Problem with pve-upgrade-1.9-to-2.0

stef1777

Active Member
Jan 31, 2010
Hello!

I tried the upgrade script on a testing 1.9 node.

The script failed because it configured /etc/apt/sources.list with an unavailable repository:

W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/main/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]
W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/contrib/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]
W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/non-free/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]

I manually removed the volatile line and the script then ran fine.
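For anyone hitting the same 404s: the debian-volatile archive was retired, so those lines can simply be deleted. A minimal sketch of the cleanup, shown here on a sample file rather than the live /etc/apt/sources.list (the sample content is an assumption mirroring the errors above):

```shell
# Sample sources.list containing a dead debian-volatile entry
cat > sources.list.sample <<'EOF'
deb http://ftp.debian.org/debian squeeze main contrib
deb http://volatile.debian.org/debian-volatile squeeze/volatile main contrib non-free
EOF

# Drop every line that references the retired volatile archive
sed -i '/volatile/d' sources.list.sample

cat sources.list.sample   # only the regular squeeze line remains
```

On the node itself you would run the same sed against /etc/apt/sources.list (after taking a backup) and then apt-get update.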

It then stopped later on this:

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-7-pve
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
Errors were encountered while processing:
procps
hdparm
E: Sub-process /usr/bin/dpkg returned an error code (1)
minimal upgarde failed

It seems the script answers yes to replacing config files by default, which broke the procps and hdparm configs.

Manually running dpkg --configure solved the problem.
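The usual recovery here is to let dpkg finish its pending configuration and then have apt repair the dependencies. A sketch, written to a script file rather than executed directly since it has to run as root on the broken node:

```shell
# fix-dpkg.sh -- run as root on the node after a failed upgrade step
cat > fix-dpkg.sh <<'EOF'
#!/bin/sh
set -e
# Finish configuring every package dpkg left half-installed
dpkg --configure -a
# Then let apt pull in or fix any broken dependencies
apt-get -f install
EOF

sh -n fix-dpkg.sh   # syntax check only; execute it on the node itself
```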

And it stopped again:

Errors were encountered while processing:
cron
udev
keyboard-configuration
console-setup
ntp
at
nfs-common
openssh-server
bind9
postfix
apache2.2-common
apache2-mpm-prefork
apache2
libapache2-mod-perl2
libapache2-mod-apreq2
libapache2-request-perl
E: Sub-process /usr/bin/dpkg returned an error code (1)
dist-upgrade failed

Upgrading from 1.9 seems very rough...




pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-55+ovzfix-2
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
pve-kernel-2.6.32-7-pve: 2.6.32-55+ovzfix-2
qemu-server: 1.1-32
pve-firmware: 1.0-15
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6
 
Hello!

I tried the upgrade script on a testing 1.9 node.

The script failed because it configured /etc/apt/sources.list with an unavailable repository:

W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/main/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]
W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/contrib/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]
W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/non-free/binary-amd64/Packages 404 Not Found [IP: 2001:858:2:2::2 80]

No, volatile was configured by you or someone else; Proxmox VE never used it. It looks like you have a customized installation. Remove all 1.9 packages manually with apt, do a dist-upgrade and install 2.0, or do a re-install.

pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-55+ovzfix-2
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
pve-kernel-2.6.32-7-pve: 2.6.32-55+ovzfix-2
qemu-server: 1.1-32
pve-firmware: 1.0-15
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6

Just an additional note: your 1.9 is not up to date. A requirement for the upgrade script is that you run the latest 1.9, see the upgrade docs - http://pve.proxmox.com/wiki/Upgrade_from_1.9_to_2.0#Requirements
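For reference, the pre-upgrade requirement boils down to something like this on the 1.9 node (a sketch, written to a file here instead of run, since it needs root and the 1.9 repositories):

```shell
cat > prep-19.sh <<'EOF'
#!/bin/sh
set -e
# Bring the 1.9 node fully up to date before running the upgrade script
apt-get update
apt-get dist-upgrade
# Then verify the installed versions match the latest 1.9 release
pveversion -v
EOF

sh -n prep-19.sh   # syntax check only; run it on the node as root
```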
 
You're right, sources.list was modified by hand. I removed the added lines. The rest is standard, but the upgrade is completely unstable now.

Errors were encountered while processing:
pve-cluster
redhat-cluster-pve
fence-agents-pve
libpve-access-control
clvm
libpve-storage-perl
qemu-server
resource-agents-pve
pve-manager
vzctl
proxmox-ve-2.6.32
E: Sub-process /usr/bin/dpkg returned an error code (1)
minimal upgarde failed

1.9 was updated just before running the upgrade script, to be sure I had the latest 1.9 release.
 
Yes, but I can't find the right way. I tried dpkg --configure on each package manually, and each time I get a dependency problem. A bad loop!

Sure, in real life I would prefer to reinstall from the ISO and restore the saved VMs.
 
You can simply try to remove the conflicting packages. The upgrade script will install them later anyway.
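As a sketch, the removal could look like this (the package names are taken from the error output above; adjust the list to whatever actually fails on your node). It is written to a script here rather than executed, since it must run as root on the node:

```shell
cat > remove-conflicts.sh <<'EOF'
#!/bin/sh
set -e
# Remove the packages blocking dpkg; the 1.9-to-2.0 upgrade script
# reinstalls their 2.0 counterparts afterwards.
apt-get remove pve-cluster redhat-cluster-pve fence-agents-pve \
    libpve-access-control clvm libpve-storage-perl qemu-server \
    resource-agents-pve pve-manager vzctl proxmox-ve-2.6.32
EOF

sh -n remove-conflicts.sh   # syntax check only; run it on the node as root
```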
 
I've found the blocking package. Do you know why I get "Unable to get local IP address"?

Setting up pve-cluster (1.0-26) ...
Starting pve cluster filesystem : pve-cluster[main] crit: Unable to get local IP address
(warning).
invoke-rc.d: initscript pve-cluster, action "start" failed.
dpkg: error processing pve-cluster (--configure):
subprocess installed post-installation script returned error exit status 255
Errors were encountered while processing:
pve-cluster
 
I've found the blocking package. Do you know why I get "Unable to get local IP address"?

You should have an entry in /etc/hosts (for the name used in /etc/hostname).

Note: if you post both files I can tell you how to fix that.
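A minimal example of what the pair of files should look like for this node, since pve-cluster resolves the name from /etc/hostname via /etc/hosts to find its local IP. The IP and domain below are placeholders (assumptions); use the node's real address. Shown on sample files; on the node these would be /etc/hostname and /etc/hosts:

```shell
cat > hostname.sample <<'EOF'
sd-33092
EOF

cat > hosts.sample <<'EOF'
127.0.0.1    localhost
192.0.2.10   sd-33092.example.com sd-33092
EOF

# Quick consistency check: the hostname must appear in hosts
grep -qw "$(cat hostname.sample)" hosts.sample && echo "hostname resolvable"
```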
 
Oops, it seems something emptied /etc/hosts. The file was completely empty. I corrected that.

After that, the upgrade finished fine.
 
Hello, me again!

I finally upgraded 2 testing nodes with success. Apart from some problems during the apt-get dist-upgrade, the rest is fine.

I have a new problem.

sd-33092:~# ./pve-upgrade-1.9-to-2.0 --import
starting import
import file: /etc/pve/user.cfg
unable to open file '/etc/pve/user.cfg' - Permission denied at ./pve-upgrade-1.9-to-2.0 line 125.

The node contains one OpenVZ container. The file /etc/pve/user.cfg doesn't exist; /etc/pve.old-1.9/ does.

I don't know how to bring the container back to life.
 
Here it goes!


sd-33092:~# ls -l /etc/pve
total 1
-r--r----- 1 root www-data 291 Apr 3 14:59 cluster.conf
lr-xr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/sd-33092
lr-xr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/sd-33092/openvz
lr-xr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/sd-33092/qemu-server

sd-33092:~# ls -l /etc/pve.old-1.9/
total 40
-rw-r--r-- 1 root root 1457 Nov 14 13:00 cluster.cfg
drwxr-xr-x 2 root root 4096 Apr 3 14:24 master
-rw-r--r-- 1 root root 26 Nov 14 11:25 pve.cfg
-rw------- 1 root root 887 Nov 14 10:39 pve-root-ca.key
-rw-r--r-- 1 root root 1180 Nov 14 10:39 pve-root-ca.pem
-rw------- 1 root root 3 Nov 14 10:39 pve-root-ca.srl
-rw------- 1 root root 887 Nov 14 10:39 pve-ssl.key
-rw-r--r-- 1 root root 956 Nov 14 10:39 pve-ssl.pem
-rw-r--r-- 1 root root 13 Nov 14 11:25 qemu-server.cfg
-rw-r--r-- 1 root root 134 Nov 14 13:03 storage.cfg

Just so you know: on the 2 upgraded nodes, I had to reinstall GRUB. The servers no longer booted after the upgrade.
 
sd-33092:~# pvecm updatecerts
no quorum - unable to update files

sd-33092:~# ./pve-upgrade-1.9-to-2.0 --import
starting import
import file: /etc/pve/user.cfg
unable to open file '/etc/pve/user.cfg' - Permission denied at ./pve-upgrade-1.9-to-2.0 line 125.

No luck!
 
That still looks like the hostname bug. What is the output of:

# /etc/init.d/pve-cluster stop
# /etc/init.d/pve-cluster start
 
No output message.

/etc/hosts and /etc/hostname are correct. But when I upgraded the server, hosts was empty. I corrected that during the upgrade.
 
You claim you have a single node, but I just saw that you have a cluster configuration file '/etc/pve/cluster.conf'?!
 
