Proxmox 6 to 7 upgrade after a couple years away...

jmaggart

New Member
Jun 13, 2023
I haven't been on this box in a couple of years, as I needed to upgrade the hardware. I finally got around to it, and now I'm trying to make the 6 to 7 upgrade but am hitting a wall while following the manual. I know enough about Linux and Proxmox to be dangerous, and that's about it. I think it has something to do with the kernel...?

I run apt update and everything looks fine, but when I run apt dist-upgrade I get this...

Code:
root@pve:~# apt update
Hit:1 http://ftp.us.debian.org/debian buster InRelease
Hit:2 http://ftp.us.debian.org/debian buster-updates InRelease
Hit:3 http://security.debian.org buster/updates InRelease
Hit:4 http://download.proxmox.com/debian/ceph-octopus buster InRelease
Hit:5 http://download.proxmox.com/debian/pve buster InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
root@pve:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
4 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up pve-kernel-5.4.203-1-pve (5.4.203-1) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.4.203-1-pve /boot/vmlinuz-5.4.203-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.4.203-1-pve /boot/vmlinuz-5.4.203-1-pve
update-initramfs: Generating /boot/initrd.img-5.4.203-1-pve
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 5.4.203-1-pve /boot/vmlinuz-5.4.203-1-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 5.4.203-1-pve /boot/vmlinuz-5.4.203-1-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.4.203-1-pve /boot/vmlinuz-5.4.203-1-pve
/usr/sbin/grub-mkconfig: 38: /etc/default/grub.d/grub.cfg: function: not found
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-5.4.203-1-pve.postinst line 19.
dpkg: error processing package pve-kernel-5.4.203-1-pve (--configure):
 installed pve-kernel-5.4.203-1-pve package post-installation script subprocess returned error exit status 2
Setting up grub-pc (2.06-3~deb10u3) ...
sh: 38: /etc/default/grub.d/grub.cfg: function: not found
sh: 41: /etc/default/grub.d/grub.cfg: save_env: not found
sh: 43: /etc/default/grub.d/grub.cfg: Syntax error: "}" unexpected
Installing for i386-pc platform.
File descriptor 3 (pipe:[121261]) leaked on vgs invocation. Parent PID 73709: grub-install.real
File descriptor 3 (pipe:[121261]) leaked on vgs invocation. Parent PID 73709: grub-install.real
File descriptor 3 (pipe:[121261]) leaked on vgs invocation. Parent PID 73709: grub-install.real
Installation finished. No error reported.
/usr/sbin/grub-mkconfig: 38: /etc/default/grub.d/grub.cfg: function: not found
dpkg: error processing package grub-pc (--configure):
 installed grub-pc package post-installation script subprocess returned error exit status 127
dpkg: dependency problems prevent configuration of pve-kernel-5.4:
 pve-kernel-5.4 depends on pve-kernel-5.4.203-1-pve; however:
  Package pve-kernel-5.4.203-1-pve is not configured yet.

dpkg: error processing package pve-kernel-5.4 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-5.4; however:
  Package pve-kernel-5.4 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-5.4.203-1-pve
 grub-pc
 pve-kernel-5.4
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)


Here is the output of pveversion -v, where I can see that proxmox-ve isn't correctly installed...


Code:
root@pve:~# pveversion -v
proxmox-ve: not correctly installed (running kernel: 5.4.73-1-pve)
pve-manager: 6.4-15 (running version: 6.4-15/af7986e6)
pve-kernel-helper: 6.4-20
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.17-pve1~bpo10
ceph-fuse: 15.2.17-pve1~bpo10
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-5
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.14-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-2
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1
 
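If it helps, my (possibly wrong) reading of that "not correctly installed" line is simply that dpkg has the four packages from the error above stuck half-configured. Something like this should show their state (package names taken from the apt output above):

Bash:
# show the dpkg status of the packages apt complained about
# (the flags column reads 'ii' once a package is fully installed and configured)
dpkg -l proxmox-ve pve-kernel-5.4 pve-kernel-5.4.203-1-pve grub-pc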
Here is the content of /etc/default/grub.d/grub.cfg that grub-mkconfig is choking on:

Bash:
root@pve:~# cat /etc/default/grub.d/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
   font=unicode
else
insmod part_gpt
insmod lvm
insmod ext2
set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
else
  search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
fi
    font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
        set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Proxmox Virtual Environment GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9c25aff1-8eaa-43a9-b169-875f38f9f3e4' {
        load_video
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_gpt
        insmod lvm
        insmod ext2
        set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        else
          search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        fi
        echo    'Loading Linux 5.4.73-1-pve ...'
        linux   /boot/vmlinuz-5.4.73-1-pve root=/dev/mapper/pve-root ro  quiet amd_iommu=on amd_iommu=on
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initrd.img-5.4.73-1-pve
}
submenu 'Advanced options for Proxmox Virtual Environment GNU/Linux' $menuentry_id_option 'gnulinux-advanced-9c25aff1-8eaa-43a9-b169-875f38f9f3e4' {
        menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 5.4.73-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.4.73-1-pve-advanced-9c25aff1-8eaa-43a9-b169-875f38f9f3e4' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                insmod lvm
                insmod ext2
                set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
                if [ x$feature_platform_search_hint = xy ]; then
                  search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
                else
                  search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
                fi
                echo    'Loading Linux 5.4.73-1-pve ...'
                linux   /boot/vmlinuz-5.4.73-1-pve root=/dev/mapper/pve-root ro  quiet amd_iommu=on amd_iommu=on
                echo    'Loading initial ramdisk ...'
                initrd  /boot/initrd.img-5.4.73-1-pve
        }
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
        insmod part_gpt
        insmod lvm
        insmod ext2
        set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        else
          search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        fi
        linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
        insmod part_gpt
        insmod lvm
        insmod ext2
        set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        else
          search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        fi
        linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
menuentry "Memory test (memtest86+, experimental multiboot)" {
        insmod part_gpt
        insmod lvm
        insmod ext2
        set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        else
          search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        fi
        multiboot       /boot/memtest86+_multiboot.bin
}
menuentry "Memory test (memtest86+, serial console 115200, experimental multiboot)" {
        insmod part_gpt
        insmod lvm
        insmod ext2
        set root='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint='lvmid/3cYt8d-xgM0-Qb5j-f9rt-Rvc2-v2fj-g1zymV/jQkUJ0-ivpB-0Uiv-4sYq-zas8-RHh2-qNu8Y4'  9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        else
          search --no-floppy --fs-uuid --set=root 9c25aff1-8eaa-43a9-b169-875f38f9f3e4
        fi
        multiboot       /boot/memtest86+_multiboot.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'System setup' $menuentry_id_option 'uefi-firmware' {
        fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
 
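So that is an entire generated GRUB boot menu sitting in /etc/default/grub.d/. As far as I can tell, grub-mkconfig sources every *.cfg in that directory as plain shell (the same way it sources /etc/default/grub), which lines up with the errors: `function` on line 38, `save_env` on line 41 and the stray `}` on line 43 are GRUB script, not something a POSIX shell understands, so the kernel postinst dies with exit 127. If I understand it right, a file that actually belongs in that directory would only hold variable assignments, for example (file name and contents invented for illustration):

Bash:
# /etc/default/grub.d/iommu.cfg -- hypothetical example, NOT a file from this box
# Snippets in this directory are sourced as shell by grub-mkconfig, so they
# should only contain /etc/default/grub style variable assignments:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"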
And dpkg says that file isn't owned by any package:

Bash:
root@pve:~# dpkg -S /etc/default/grub.d/grub.cfg
dpkg-query: no path found matching pattern /etc/default/grub.d/grub.cfg
 
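Since nothing owns it and it's clearly just a stray copy of a generated boot menu, I moved it out of grub-mkconfig's way and re-ran the upgrade. Roughly this (the destination path is arbitrary, the name below is made up):

Bash:
# park the orphaned file somewhere outside /etc/default/grub.d/
mv /etc/default/grub.d/grub.cfg /root/grub.cfg.stray
# re-run the upgrade so the four half-configured packages get set up again
apt dist-upgrade
# optional sanity check that grub-mkconfig now completes cleanly
update-grub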
That was it! apt dist-upgrade completed successfully.

Here is my second issue. When I set up this node ages ago I evidently added Ceph. I haven't used it and don't plan to, but I've seen people struggle with upgrades after purging it. When I run pve6to7 I get two warnings about Ceph. Are they a concern before upgrading?

Bash:
root@pve:/etc/pve# pve6to7
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages uptodate

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 6.4-1

Checking running kernel version..
PASS: expected running kernel '5.4.73-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

INFO: hyper-converged ceph setup detected!
INFO: getting Ceph status/health information..
WARN: Ceph health reported as 'HEALTH_WARN'.
      Use the PVE dashboard or 'ceph -s' to determine the specific issues and try to resolve them.
INFO: getting Ceph daemon versions..
PASS: single running version detected for daemon type monitor.
PASS: single running version detected for daemon type manager.
SKIP: no running instances detected for daemon type MDS.
SKIP: no running instances detected for daemon type OSD.
PASS: single running overall version detected for all Ceph daemon types.
WARN: 'noout' flag not set - recommended to prevent rebalancing during cluster-wide upgrades.
INFO: checking Ceph config..

= CHECKING CONFIGURED STORAGES =

file /etc/pve/storage.cfg line 30 (section 'local') - unable to parse value of 'prune-backups': invalid format - value without key, but schema does not define a default key

PASS: storage 'Local-Proxmox' enabled and active.
PASS: storage 'LocalVM-spinning' enabled and active.
PASS: storage 'USBStorageISO' enabled and active.
PASS: storage 'local' enabled and active.
PASS: storage 'pve_vm_backup' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'pve' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.68.83' configured and active on single interface.
INFO: Checking backup retention settings..
PASS: no problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking custom roles for pool permissions..
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking storage content type configuration..
PASS: no problems found
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Make sure to change the suite of the Debian security repository from 'buster/updates' to 'bullseye-security' - in /etc/apt/sources.list:6
SKIP: NOTE: Expensive checks, like CT cgroupv2 compat, not performed without '--full' parameter

= SUMMARY =

TOTAL:    28
PASSED:   22
SKIPPED:  4
WARNINGS: 2
FAILURES: 0

ATTENTION: Please check the output for detailed information!

And here is the output of ceph -s...

Bash:
root@pve:/etc/pve# ceph -s
  cluster:
    id:     a7d0473f-8ed9-44d1-bb8a-719a704e10d7
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum pve (age 75m)
    mgr: pve(active, since 74m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
 
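For what it's worth, my reading is that the two warnings correspond to the "insecure global_id reclaim" health warning and the missing 'noout' flag, and with zero OSDs there is nothing that could actually rebalance anyway. If I wanted to quiet them before upgrading, I believe these are the relevant knobs (please correct me if not):

Bash:
# set the noout flag pve6to7 asks for (harmless here, since there are no OSDs)
ceph osd set noout
# stop the monitor from allowing insecure global_id reclaim, clearing that HEALTH_WARN
ceph config set mon auth_allow_insecure_global_id_reclaim false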
So I made a backup of my host and just went for the upgrade. Everything went perfectly. Thanks for the help!
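For completeness, the upgrade itself was just the standard procedure from the 6-to-7 howto: switch the Debian suites to bullseye (including the buster/updates to bullseye-security rename that pve6to7 flagged), make the same buster-to-bullseye change on the pve and ceph-octopus repo lines wherever they live on your system, then update and dist-upgrade. Something along these lines, double-checked against your own sources files:

Bash:
# rewrite the Debian suites in the main sources list
# (the buster/updates substitution has to run before the plain buster one)
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
# repeat the buster -> bullseye edit in any file under /etc/apt/sources.list.d/
# that carries the pve or ceph-octopus repositories
apt update
apt dist-upgrade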
 
