It’s very strange. After I set up a new cluster with Proxmox 6, the GUI shows no cluster mode, only standalone.
But on the console it shows the cluster info:
root@node2:~# pvecm status
Quorum information
------------------
Date: Sun Aug 4 13:07:26 2019
Quorum provider...
Here is part of the log file:
2018-04-04 22:06:56.387410 7fd4621e1700 4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_write.cc:684] reusing log 14798 from recycle list
2018-04-04 22:06:56.387472 7fd4621e1700 4 rocksdb...
After repairing and expanding my Ceph cluster, I found some weird messages I couldn’t find any information about. What do they mean?
Apr 02 11:34:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Apr 02 11:34:50 pve1 ceph-osd[2485790]: 2018-04-02 11:34:50.907319 7fb084c55700 -1 osd.5...
I have a server with 8 disks and an NVMe SSD (M.2 in a Delock PCIe adapter), which contains the boot partition and the BlueStore WAL and DB partitions. I partitioned the NVMe accordingly
and used this script to create my OSDs:
#!/bin/bash
DEV=$1   # data disk for the OSD
BLU=$2   # NVMe device holding the WAL/DB partitions
OSD=$3   # OSD index
WALPARTN=$(($OSD + 4))...
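The script above is truncated, but the partition arithmetic it implies can be sketched like this (the device path, the OSD index, and the `p` partition-name suffix for NVMe are illustrative assumptions; only the `+4` offset comes from the script):

```shell
#!/bin/sh
# Sketch, not the original script's continuation: derive the WAL partition
# device for a given OSD index, using the same "+4" offset as above.
BLU=/dev/nvme0n1          # assumed NVMe device
OSD=3                     # example OSD index
WALPARTN=$((OSD + 4))
WALDEV="${BLU}p${WALPARTN}"
echo "$WALDEV"            # → /dev/nvme0n1p7
```

On the real server, `$WALDEV` would then be passed to whatever creates the OSD (e.g. as the WAL device argument).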
Yes, as far as I can see.
No, it’s a new one.
No, only these entries.
No such file or directory
I just created a new pool and started to migrate my VMs. It takes some time and I will come back with hopefully good news.
Thanks for your help.
I got this
# rados list-inconsistent-pg pool
["1.25","1.d0","1.1f6"]
# rados list-inconsistent-obj 1.1f6
{"epoch":13476,"inconsistents":[]}
# rados list-inconsistent-snapset 1.1f6
{"epoch":13476,"inconsistents":[]}
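Instead of querying each PG by hand, the list from `rados list-inconsistent-pg pool` can be looped over. A minimal sketch, using the literal output from above (on the cluster you would capture it with `PGS=$(rados list-inconsistent-pg pool)` and uncomment the rados calls); parsing the JSON array with `tr` avoids a jq dependency:

```shell
#!/bin/sh
# Sketch: inspect every inconsistent PG in one pass.
PGS='["1.25","1.d0","1.1f6"]'        # output of: rados list-inconsistent-pg pool
for pg in $(echo "$PGS" | tr -d '[]"' | tr ',' ' '); do
  echo "inspecting $pg"
  # rados list-inconsistent-obj "$pg"      # run on the cluster
  # rados list-inconsistent-snapset "$pg"  # run on the cluster
done
```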
I deleted the VM that contained pg 1.1f6 as I mentioned...
Yeah, I read the man pages, but they didn’t help. Although the object is listed, I cannot delete it. Also:
# rados -p pool listwatchers rbd_data.124974b0dc51.0000000000001023
error listing watchers pool/rbd_data.124974b0dc51.0000000000001023: (2) No such file or directory
Thank you. Deep scrub didn’t help, but I got:
rados -p pool ls | grep 124974b0dc51
rbd_data.124974b0dc51.0000000000001023
I think that’s the issue. How to get rid of it?
Thank you. It would be very helpful. Is there a command to look through all the images or do I have to check one by one until I find the string?
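One way to avoid checking image by image is to scan every image’s `block_name_prefix` (which `rbd info` reports) for the leaked `rbd_data` prefix. A sketch, with the real cluster loop commented out and the matching logic checked against a sample `rbd info` line; the pool name `pool` and the prefix are taken from the thread:

```shell
#!/bin/sh
# Sketch: find which image (if any) owns the orphaned rbd_data prefix.
PREFIX=124974b0dc51
# On the cluster:
# for img in $(rbd ls -p pool); do
#   rbd info -p pool "$img" | grep -q "block_name_prefix: rbd_data.$PREFIX" \
#     && echo "$img"
# done
# Offline check of the matching logic against a sample rbd info line:
sample='block_name_prefix: rbd_data.124974b0dc51'
echo "$sample" | grep -q "rbd_data.$PREFIX" && echo match
```

If no image matches, the object is orphaned, which would explain why `listwatchers` returns ENOENT for it.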
Package versions
proxmox-ve: 5.1-32 (running kernel: 4.10.17-4-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve...
Yes, the OSDs are all working. At first I thought it was a drive error, but I can’t see any errors other than the ones I mentioned.
I have these entries in the log file:
2018-03-05 09:06:59.807135 osd.1 osd.1 192.168.0.10:6812/14280 7645 : cluster [ERR] deep-scrub 1.1f6...
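For a deep-scrub `[ERR]` like the one above, a common first step is asking Ceph to repair the affected PG. A sketch that only assembles the command (the PG id 1.1f6 is taken from the log line; running it is up to the operator, since repair should only follow after ruling out a failing disk):

```shell
#!/bin/sh
# Sketch: build the repair command for the PG reported by the deep-scrub error.
PG="1.1f6"
REPAIR_CMD="ceph pg repair $PG"
echo "$REPAIR_CMD"   # run this on a node with admin access to the cluster
```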