One-node Proxmox, remove cluster

Oct 13, 2020
Hi,

I have a one-node Proxmox cluster on 6.4-14. I've now started the process of upgrading to 7.x, but the pve6to7 script threw some errors related to Ceph storage:
  • OSD count 0 < osd_pool_default_size 3
  • pg 1.0 is stuck inactive for 14m, current state unknown, last acting []
Since this is a one-node machine, I'm not looking to learn Ceph anyway, and I'm not planning to add more nodes (this is pretty much my lab/dev machine), so I don't think I need Ceph at all.

I guess at some point in the past I created a cluster in the web GUI (I vaguely remember it). I was wondering if it would make sense to remove the clusterization (if that is a word); I kind of assume that is what activated Ceph? And then upgrade?
 
Hi,

assume that was what activated CEPH
No, creating a cluster does not activate Ceph.

If you don't want Ceph, you can just disable or remove it in your /etc/pve/storage.cfg file.
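For reference, a Ceph-backed storage entry in /etc/pve/storage.cfg looks something like the fragment below (the storage ID `ceph-rbd` and pool name here are made up; yours, if present, will differ). Adding the `disable` flag keeps the definition but takes the storage out of use:

```
rbd: ceph-rbd
        pool ceph-rbd
        content images
        krbd 0
        disable
```

Alternatively, `pvesm set ceph-rbd --disable 1` sets that flag from the CLI, and `pvesm remove ceph-rbd` deletes the storage definition entirely (it only removes the config entry, not the underlying data).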

was wondering if it would make sense to remove the clusterization (if that is a word)
You shouldn't have to remove it for the upgrade.

Disabling the Ceph storage should get rid of the errors output by the pve6to7 script.
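A quick way to see whether any Ceph-backed storages are defined at all is to grep storage.cfg for `rbd:` or `cephfs:` storage types. A sketch against a sample file (on a real node you would point it at /etc/pve/storage.cfg instead):

```shell
# Sample config standing in for /etc/pve/storage.cfg
cat > /tmp/storage.cfg <<'EOF'
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

rbd: ceph-rbd
        pool ceph-rbd
        content images
EOF

# List any Ceph-backed storage definitions (rbd or cephfs types)
grep -E '^(rbd|cephfs):' /tmp/storage.cfg
# → rbd: ceph-rbd
```

If the grep prints nothing, no Ceph storage is configured in storage.cfg.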

Hope this helps!
 
This is what my storage.cfg has:

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images


No mention of Ceph?
 
