Issues with cephfs preventing upgrade from 6 to 7

sean.ppaul

New Member
Jun 10, 2022
I'm a noob to this whole virtualization thing. I have a tiny Lenovo m93 box (4 cores, 8 GB RAM) running Proxmox 6, and when I run the pve6to7 --full upgrade checklist to prepare the move to version 7, I get failures related to cephfs.

Code:
= CHECKING HYPER-CONVERGED CEPH STATUS =

INFO: hyper-converged ceph setup detected!
INFO: getting Ceph status/health information..
FAIL: failed to get 'noout' flag status - got timeout

FAIL: unable to determine Ceph status!
INFO: getting Ceph daemon versions..
FAIL: unable to determine Ceph daemon versions!
WARN: 'noout' flag not set - recommended to prevent rebalancing during cluster-wide upgrades.
INFO: checking Ceph config..
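
(For anyone reading along: on a node where Ceph is actually up, that 'noout' warning is normally handled by setting the flag before the upgrade and clearing it afterwards, roughly as below. In my case even these commands just hang.)

Code:
# prevent Ceph from rebalancing while nodes are upgraded/rebooted
ceph osd set noout
# ... perform the cluster-wide upgrade ...
# allow normal rebalancing again afterwards
ceph osd unset noout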

Full command output is...

Code:
root@pve:~# pve6to7 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
WARN: updates for the following packages are available:
  python-cephfs, python-ceph-argparse

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 6.4-1

Checking running kernel version..
PASS: expected running kernel '5.4.189-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

INFO: hyper-converged ceph setup detected!
INFO: getting Ceph status/health information..
FAIL: failed to get 'noout' flag status - got timeout

FAIL: unable to determine Ceph status!
INFO: getting Ceph daemon versions..
FAIL: unable to determine Ceph daemon versions!
WARN: 'noout' flag not set - recommended to prevent rebalancing during cluster-wide upgrades.
INFO: checking Ceph config..

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'pve' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.1.224' configured and active on single interface.
INFO: Checking backup retention settings..
INFO: storage 'local' - no backup retention settings defined - by default, PVE 7.x will no longer keep only the last backup, but all backups
PASS: no problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking custom roles for pool permissions..
INFO: Checking node and guest description/note legnth..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking storage content type configuration..
PASS: no problems found
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Make sure to change the suite of the Debian security repository from 'buster/updates' to 'bullseye-security' - in /etc/apt/sources.list:6
SKIP: No containers on node detected.

= SUMMARY =

TOTAL:    22
PASSED:   15
SKIPPED:  2
WARNINGS: 2
FAILURES: 3

ATTENTION: Please check the output for detailed information!
Try to solve the problems one at a time and then run this checklist tool again.
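
(I understand the sources.list hint near the end; that one just means switching the Debian security line to something like the following for Bullseye. It's the Ceph failures I can't get past.)

Code:
# /etc/apt/sources.list, Debian security suite for PVE 7 / Bullseye
deb http://security.debian.org/debian-security bullseye-security main contrib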

I tried to make sure there were no pools or OSDs left to delete and even ran a purge to remove everything, but nothing gets me past this issue. All my Ceph-related commands end in a timeout.
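
For what it's worth, the purge attempt was roughly this (from memory, so treat the exact invocation as approximate):

Code:
# remove the Proxmox-managed Ceph configuration from this node
# (only do this when no Ceph data on the node is needed anymore)
pveceph purge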

Any help getting past this and moving to 7 is appreciated.
 
Is there Ceph running at all? AFAICS you don't even have a cluster. Is there an /etc/pve/ceph.conf?
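
Something like this should show whether anything Ceph-related is actually present on the node (exact package/unit names are just the usual ones, adjust as needed):

Code:
# is there a Proxmox-managed Ceph config at all?
ls -l /etc/pve/ceph.conf
# are any Ceph services defined or running?
systemctl list-units --all 'ceph*'
# are the Ceph server packages installed?
dpkg -l | grep -E 'ceph-(mon|mgr|osd|mds)'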
 
Nope, no /etc/pve/ceph.conf file on disk. Anyway, I stopped trying the in-place upgrade, did a fresh full install instead, and called it a day. Thank you!
 
