Hi All,
I was following the upgrade procedure, but I think I misunderstood the repository edits needed in /etc/apt/sources.list ... I deleted it after I thought I had the PVE 9 repositories in place.
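For reference, this is roughly what I believe the Debian Trixie / PVE 9 entries should look like (I'm assuming the no-subscription repository here, and these are just the standard Debian/Proxmox mirror hostnames, so please correct me if I've got this wrong):

Code:
# /etc/apt/sources.list -- Debian 13 (Trixie) base repositories
deb http://deb.debian.org/debian trixie main contrib
deb http://deb.debian.org/debian trixie-updates main contrib
deb http://security.debian.org/debian-security trixie-security main contrib

# Proxmox VE 9 no-subscription repository
deb http://download.proxmox.com/debian/pve trixie pve-no-subscription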
Either way, I now have a partially working box: I can ssh in, and the VMs come up and run (phew), but I can't get into the GUI anymore. The SSL warning still comes up, but it never draws the login page, just an all-grey screen.
This is the output I get post-upgrade:
Code:
root@pve:~# pve8to9 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
WARN: updates for the following packages are available:
  librados2, dmeventd, udev, ceph-fuse, libpve-rs-perl, corosync, libknet1t64, libnozzle1t64, rrdcached, librrd8t64, zfs-zed, libzfs6linux, proxmox-websocket-tunnel, libpam-systemd, zfs-initramfs, pve-qemu-kvm, libnvpair3linux, proxmox-mail-forward, python3-ceph-common, librbd1, pve-ha-manager, grub-pc-bin, lxcfs, swtpm-libs, pve-lxc-syscalld, apparmor, libproxmox-backup-qemu0, swtpm-tools, librgw2, libuutil3linux, librrds-perl, librrd8t64, ceph-common, liblvm2cmd2.03, libsystemd0, libnss-systemd, libapparmor1, vncterm, proxmox-grub, swtpm, pve-cluster, librrd8t64, systemd, libproxmox-rs-perl, libudev1, libcrypt-openssl-rsa-perl, proxmox-ve, lxc-pve, grub-efi-amd64, proxmox-backup-file-restore, python3-cephfs, lvm2, pve-esxi-import-tools, libcephfs2, qemu-server, pve-container, proxmox-offline-mirror-helper, grub-efi-amd64-signed, dmsetup, libtpms0, libradosstriper1, proxmox-backup-client, libdevmapper-event1.02.1, libpve-network-api-perl, grub-efi-amd64-bin, grub-efi-amd64-unsigned, grub2-common, proxmox-mini-journalreader, smartmontools, python3-rbd, python3-rgw, libpve-http-server-perl, proxmox-firewall, pve-manager, pve-yew-mobile-gui, libpve-network-perl, grub-common, libsystemd-shared, librados2-perl, systemd-sysv, pve-firewall, python3-ceph-argparse, libpve-u2f-server-perl, libdevmapper1.02.1, spiceterm, zfsutils-linux, libzfs6linux, libzpool6linux, proxmox-termproxy, python3-rados

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 8.4-0

Checking running kernel version..
WARN: unexpected running and installed kernel '6.17.2-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'data2' enabled and active.
PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.
INFO: Check for usage of native GlusterFS storage plugin...
PASS: No GlusterFS storage found.
INFO: Checking whether all external RBD storages have the 'keyring' option configured
SKIP: No RBD storage configured.

= VIRTUAL GUEST CHECKS =

INFO: Checking for running guests..
WARN: 5 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
SKIP: not yet upgraded, no need to check the FUSE library version LXCFS uses
INFO: Checking for VirtIO devices that would change their MTU...
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking VM configurations for outdated machine versions
PASS: All VM machine versions are recent enough

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
PASS: Detected active time synchronisation unit 'chrony.service'
INFO: Checking if the local node's hostname 'pve' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.201.199' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs
PASS: no custom roles defined
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
INFO: Checking if the suite for the Debian security repository is correct..
PASS: found no suite mismatch
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
PASS: bootloader packages installed correctly
INFO: Check for dkms modules...
SKIP: could not get dkms status
INFO: Check for legacy 'filter' or 'group' sections in /etc/pve/notifications.cfg...
INFO: Check for legacy 'notification-policy' or 'notification-target' options in /etc/pve/jobs.cfg...
PASS: No legacy 'notification-policy' or 'notification-target' options found!
INFO: Check for LVM autoactivation settings on LVM and LVM-thin storages...
NOTICE: storage 'data2' has guest volumes with autoactivation enabled
PASS: all guest volumes on storage 'local-lvm' have autoactivation disabled
NOTICE: Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest volumes.
    This system has some volumes that still have autoactivation enabled.
    All volumes with autoactivations reside on local storage, where this normally does not cause any issues.
    You can run the following command to disable autoactivation for existing LVM/LVM-thin guest volumes:
    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation
INFO: Checking lvm config for thin_check_options...
PASS: Check for correct thin_check_options passed
INFO: Check space requirements for RRD migration...
PASS: Enough free disk space for increased RRD metric granularity requirements, which is roughly 8.63 MiB.
INFO: Checking for IPAM DB files that have not yet been migrated.
PASS: No legacy IPAM DB found.
PASS: No legacy MAC DB found.
INFO: Checking if the legacy sysctl file '/etc/sysctl.conf' needs to be migrated to new '/etc/sysctl.d/' path.
PASS: Legacy file '/etc/sysctl.conf' exists but does not contain any settings.
INFO: Checking if matching CPU microcode package is installed.
PASS: Found matching CPU microcode package 'intel-microcode' installed.
SKIP: No containers on node detected.

= SUMMARY =

TOTAL:    44
PASSED:   33
SKIPPED:  6
WARNINGS: 3
FAILURES: 0

ATTENTION: Please check the output for detailed information!
root@pve:~#
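Since the check still lists a lot of pending package updates, my assumption (going by the upgrade guide) is that once the repository file is restored I should re-run the upgrade and then the checker again, along these lines, but I'd appreciate confirmation before I do:

Code:
apt update
apt dist-upgrade
pve8to9 --full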