Problem after upgrade from 4.4 to 5.x

JesusM

Member
Nov 27, 2014
Hi,
I followed this guide:

https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0

and when the installation finished, the system wasn't working.
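Roughly, what that page has you do is switch the apt repositories from jessie to stretch and then dist-upgrade (a sketch from memory, check the wiki for the exact steps):

Code:
# switch Debian and PVE repositories from jessie to stretch:
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/pve-enterprise.list
# then upgrade:
apt update && apt dist-upgrade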

To fix it, I had to run this command:

apt install proxmox-ve postfix open-iscsi

Now I can access my-ip:8006,

but I get a network failure error.

I also ran:
apt-get -f -y install

but none of my VMs work. @ramses @fireon @udo

Can anyone help me?


root@server:~# pveversion -v
proxmox-ve: 5.1-38 (running kernel: 4.4.98-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.4.98-5-pve: 4.4.98-105
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.13.13-5-pve: 4.13.13-38
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9
 
Code:
root@server:~# pvereport

==== general system info ====

# hostname
server

# pveversion --verbose
proxmox-ve: 5.1-38 (running kernel: 4.4.98-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.4.98-5-pve: 4.4.98-105
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.13.13-5-pve: 4.13.13-38
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 86
Model name:            Intel(R) Xeon(R) CPU D-1521 @ 2.40GHz
Stepping:              3
CPU MHz:               2702.625
CPU max MHz:           2700.0000
CPU min MHz:           800.0000
BogoMIPS:              4799.95
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts

# pvesm status
Name          Type     Status           Total            Used       Available        %
Backup         nfs     active       524288000        56821248       467466752   10.84%
local          dir     active       435005112        62148044       350753416   14.29%
sata1          dir     active        20026236         5978436        13023852   29.85%
sata2          dir     active        20026236         5978436        13023852   29.85%
ssd2           dir     active        20026236         5978436        13023852   29.85%

# cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/sda1       /       ext3    errors=remount-ro       0       1
/dev/sda2       swap    swap    defaults        0       0
/dev/pve/data   /var/lib/vz     ext3    defaults        1       2
proc            /proc   proc    defaults        0       0
sysfs           /sys    sysfs   defaults        0       0
 
# mount
sysfs on /sys type sysfs (rw,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32920104k,nr_inodes=8230026,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6587220k,mode=755)
/dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,data=ordered)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
 
# iptables-save
# Generated by iptables-save v1.6.0 on Sun Feb 18 16:58:03 2018
*filter
:INPUT ACCEPT [71522:10412442]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [78899:12374040]
COMMIT
# Completed on Sun Feb 18 16:58:03 2018

==== info about disks ====

# lsblk --ascii
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0 447.1G  0 disk
|-sda1         8:1    0  19.5G  0 part /
|-sda2         8:2    0     2G  0 part [SWAP]
|-sda3         8:3    0     1K  0 part
`-sda5         8:5    0 425.6G  0 part
  `-pve-data 251:0    0 421.6G  0 lvm  /var/lib/vz
sdb            8:16   0 447.1G  0 disk
`-sdb1         8:17   0 447.1G  0 part
sdc            8:32   0   1.8T  0 disk
`-sdc1         8:33   0   1.8T  0 part
sdd            8:48   0   1.8T  0 disk
`-sdd1         8:49   0   1.8T  0 part

==== info about volumes ====

# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve -wi-ao---- 421.59g                                                  

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   1   0 wz--n- 425.59g 4.00g

# zpool status
no pools available

# zfs list
no datasets available
 
# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
164.132.170.215 server      server
# The following lines are desirable for IPv6 capable hosts
#(added automatically by netbase upgrade)
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Hi,
the important pvelocalhost entry in /etc/hosts is missing!!

Is the host pingable (perhaps the network devices were renamed?!)?
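For example, a correct entry would look something like this (placeholder values, use your real IP and hostname):

Code:
# format: <IP> <fully qualified name> <short name> pvelocalhost
192.0.2.10 pve1.example.com pve1 pvelocalhost

And because the 4.x-to-5.0 upgrade can rename network interfaces, compare the names the kernel assigned with the ones your network config still refers to:

Code:
ip link                      # actual interface names (e.g. eno1 instead of eth0)
cat /etc/network/interfaces  # names used by your bridge/NIC configuration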

Udo
 
And what do I need to add?
The server is pingable, but the VMs are not.

ip localhost.localdomain server pvelocalhost

Is that correct?

Or this?
ip server.server server pvelocalhost

@udo
 
Hi,
Code:
164.132.170.215 server.server server pvelocalhost
this one, if you use https://server.server:8006 ...

But you wrote that you renamed your PVE host! That's not the best idea...
All configs are stored under the hostname in the /etc/pve virtual filesystem.
And you also created a one-node cluster... why?
I suggest you rename the node back and make a backup of the content below /etc/pve.
After that you can experiment with node renaming again...
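For example (a sketch, the node name is a placeholder):

Code:
# guest configs live under the node's name, so a renamed node "loses" them:
ls /etc/pve/nodes/<nodename>/qemu-server/
# simple backup of the whole /etc/pve tree:
tar czf /root/pve-etc-backup.tgz -C /etc/pve .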

Udo
 
I think I didn't explain it correctly.
I only renamed it in this post; the real name of the server is different, it is censored here.
Normally I use the ip:8006.

ip server pvelocalhost

Now the network works.

But now I have another problem:

TASK ERROR: volume 'ssd2:iso/CentOS-7-x86_64-Minimal-1511.iso' does not exist
TASK ERROR: volume 'sata1:iso/FreeBSD-10.4-RELEASE-amd64-disc1.iso' does not exist

Does it not detect the HDD and SSD mounts? proxmox.PNG

@udo
 

I highly recommend you take a bit more time testing and playing around in a test lab; reading the docs and other forum posts will also help.

If you have a clear error message like "iso ... does not exist", try to understand it and fix it.
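For instance, you can check what Proxmox actually sees on the storage named in the error (a sketch, using the storage name from your messages):

Code:
pvesm status                # is the storage active?
pvesm list ssd2             # does the ISO actually show up on it?
cat /etc/pve/storage.cfg    # which directory does ssd2 point to?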
 
My problem now is that the hard disks and the SSD are empty; there is no data.
The pve/proxmox folders are empty.
My problem is not the ISOs, nor mounting the disks.
It's that the data is not there.

It is only in /var/lib/vz, which is on the SSD the system was installed on.
The two HDDs and the other SSD hold nothing; everything is empty.

I have been reading and looking for information and I have not found anything like this.

@tom @udo
 
You have multiple storages defined, but there is nothing mounted there. Did you not add your disks to your /etc/fstab?
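You can see it in your own pvesm output: sata1, sata2, and ssd2 all report identical size and usage, which is what happens when their directories are just empty folders on the root filesystem. A quick check (a sketch; adjust the paths to your storage.cfg):

Code:
df -h                       # sata1/sata2/ssd2 showing the same numbers as / is the giveaway
cat /etc/pve/storage.cfg    # the directory each storage points to
lsblk -f                    # partitions exist on sdb/sdc/sdd, but no mount points are listed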
 
After the upgrade, the disks are missing from fstab.

See these images.

fdisk -l:
fstab.PNG

Storage in Proxmox:

storage proxmox.PNG
Storage configuration in Proxmox:
storage.PNG

I don't know why they can't be added to fstab.

@dcsapak @udo
 
You have to add your disks to your /etc/fstab.
See 'man fstab' for details.
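For example, the entries could look like this (a sketch: the UUIDs, mount points, and ext4 type are assumptions; check yours with blkid, create the mount point directories first, and make sure they match the paths in /etc/pve/storage.cfg):

Code:
# find the filesystem UUIDs of the unmounted disks:
blkid /dev/sdb1 /dev/sdc1 /dev/sdd1
# then add one line per disk to /etc/fstab, e.g.:
UUID=<uuid-of-sdb1>  /mnt/ssd2   ext4  defaults  0  2
UUID=<uuid-of-sdc1>  /mnt/sata1  ext4  defaults  0  2
UUID=<uuid-of-sdd1>  /mnt/sata2  ext4  defaults  0  2
# and mount everything listed in fstab:
mount -a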