Ceph errors on brand new install

maverickws
Member · Jun 8, 2020
Hello all,
As I posted earlier I've been evaluating Proxmox.

Apart from an issue with the links for clustering posted in another thread, I frequently get a "got timeout (500)" warning, a spinning wheel, and apparent connectivity issues between the nodes, even though they are linked by a dedicated vSwitch that itself has no connectivity problems.
This is a brand-new install; each machine has 2x 2 TB drives (one to be assigned as a Ceph OSD).

I started the Ceph cluster on the 1st node, and it created the monitor and manager automatically without issues.
Then I tried to add a Ceph monitor/manager on the second node and got:

Code:
ERROR: Got timeout
v2:
ERROR: monitor address '10.1.49.2' already in use (500)

The address is in use by a failed monitor. As I mentioned, this was a brand-new install; this machine had zero configuration on it aside from being added to the cluster.
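If the address is held by a stale monitor entry, a cleanup along these lines should release it (a sketch, assuming the standard PVE 6 / Nautilus tooling, and that `proxmox-02` is the monitor ID of the second node):

```shell
# Ask Proxmox to tear down the failed monitor on this node
# (adjust the monitor ID to the node's name)
pveceph mon destroy proxmox-02

# If the daemon never registered and the above fails, drop the
# entry from the monitor map directly, then remove any leftover
# [mon.proxmox-02] section from /etc/pve/ceph.conf by hand
ceph mon remove proxmox-02
```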

Code:
# systemctl status ceph-{mon,mgr}@proxmox-02
● ceph-mon@proxmox-02.service - Ceph cluster monitor daemon
   Loaded: loaded (/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: enabled)
  Drop-In: /usr/lib/systemd/system/ceph-mon@.service.d
           └─ceph-after-pve-cluster.conf
   Active: inactive (dead)

● ceph-mgr@proxmox-02.service - Ceph cluster manager daemon
   Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; disabled; vendor preset: enabled)
  Drop-In: /usr/lib/systemd/system/ceph-mgr@.service.d
           └─ceph-after-pve-cluster.conf
   Active: inactive (dead)

I created an OSD on the second node, but it shows as down/out.
Node 1 reports the OSD type as bluestore, while node 2 reports filestore?
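To see which backend an OSD is actually using, rather than what the GUI claims, its metadata can be queried (assuming here that the second node's OSD has ID 1; substitute the real ID from the tree):

```shell
# Show all OSDs with their up/down and in/out state
ceph osd tree

# osd_objectstore reports "bluestore" or "filestore" for OSD 1
ceph osd metadata 1 | grep osd_objectstore

# Check whether the OSD daemon is running at all on that node
systemctl status ceph-osd@1
```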

Ceph Log:

Code:
2020-06-15 20:11:37.944 7f2702727280  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-06-15 20:11:37.944 7f2702727280  0 ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable), process ceph-mon, pid 13366
2020-06-15 20:11:37.944 7f2702727280 -1 monitor data directory at '/var/lib/ceph/mon/ceph-proxmox-02' does not exist: have you run 'mkfs'?
2020-06-15 20:11:48.044 7f73fa13c280  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-06-15 20:11:48.044 7f73fa13c280  0 ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable), process ceph-mon, pid 13523
2020-06-15 20:11:48.044 7f73fa13c280 -1 monitor data directory at '/var/lib/ceph/mon/ceph-proxmox-02' does not exist: have you run 'mkfs'?
2020-06-15 20:11:58.296 7f04b4050280  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-06-15 20:11:58.296 7f04b4050280  0 ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable), process ceph-mon, pid 13676
2020-06-15 20:11:58.296 7f04b4050280 -1 monitor data directory at '/var/lib/ceph/mon/ceph-proxmox-02' does not exist: have you run 'mkfs'?
2020-06-15 20:12:08.536 7f0b330a0280  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-06-15 20:12:08.536 7f0b330a0280  0 ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable), process ceph-mon, pid 13772
2020-06-15 20:12:08.536 7f0b330a0280 -1 monitor data directory at '/var/lib/ceph/mon/ceph-proxmox-02' does not exist: have you run 'mkfs'?
2020-06-15 20:12:18.796 7f92fd4bc280  0 set uid:gid to 64045:64045 (ceph:ceph)
2020-06-15 20:12:18.796 7f92fd4bc280  0 ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus (stable), process ceph-mon, pid 13899
2020-06-15 20:12:18.796 7f92fd4bc280 -1 monitor data directory at '/var/lib/ceph/mon/ceph-proxmox-02' does not exist: have you run 'mkfs'?

Run mkfs where, on what? What does this error mean?
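For context, the log is referring to `ceph-mon --mkfs`, which initializes a monitor's data directory; normally `pveceph mon create` runs it for you, so the message suggests monitor creation failed before that step. The manual equivalent, following the generic Ceph add-monitor procedure, would look roughly like this (a sketch only; the `/tmp` paths are arbitrary scratch files, and on Proxmox the `pveceph` tooling is the supported route):

```shell
# Fetch the current monitor map and mon keyring from the cluster
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring

# Create and initialize the data directory the log says is missing
mkdir -p /var/lib/ceph/mon/ceph-proxmox-02
ceph-mon --mkfs -i proxmox-02 --monmap /tmp/monmap --keyring /tmp/mon.keyring

# The directory must be owned by the ceph user (uid/gid 64045 in the log)
chown -R ceph:ceph /var/lib/ceph/mon/ceph-proxmox-02

# Then start the monitor service
systemctl start ceph-mon@proxmox-02
```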
 
