Hi,
I have completed the setup of a 6-node cluster running PVE and Ceph.
This is my ceph configuration:
root@ld3955:~# more /etc/pve/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network =...
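To double-check which networks Ceph actually uses, something like this works (a quick sketch; the grep pattern just follows the spacing used in the config above):
# show the network settings Ceph was configured with
grep -E 'cluster network|public network' /etc/pve/ceph.conf
# the monitor addresses listed here should fall inside the public network
ceph mon dump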
Restarting osd.76 fixed the issue.
Now ceph health detail no longer reports it.
root@ld3955:~# ceph health detail
HEALTH_WARN 2 pools have many more objects per pg than average; clock skew detected on mon.ld5506
MANY_OBJECTS_PER_PG 2 pools have many more objects per pg than average...
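For the two remaining warnings, this is roughly what I plan to check (pool name and pg_num value are just examples, not taken from this cluster; the node may use systemd-timesyncd instead of chrony):
# MANY_OBJECTS_PER_PG: compare object counts and pg_num per pool
ceph df
ceph osd pool ls detail
# then raise pg_num on the affected pool (value is an example only)
ceph osd pool set <pool> pg_num 256
ceph osd pool set <pool> pgp_num 256
# clock skew on mon.ld5506: make sure time sync works on that node
systemctl status chrony
chronyc sources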
Well, I did not wait until all OSDs were green in the WebUI before rebooting the next node.
What do you mean by "scaling problem in cluster"?
I don't think there's an issue with the usage, though.
root@ld3955:~# ceph -s
cluster:
id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae
health...
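For the next reboots I'll roughly follow the usual pattern (generic commands, nothing specific to this cluster):
# before rebooting a node, keep Ceph from rebalancing while its OSDs are down
ceph osd set noout
# reboot the node, then wait until the cluster is healthy again
ceph -s
ceph osd tree        # all OSDs of the rebooted node should be 'up' again
# only then move on to the next node; afterwards re-enable rebalancing
ceph osd unset noout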
Hi,
I had trouble with my ceph cluster after rebooting the nodes sequentially.
This has been fixed in the meantime; however, there's an error message when executing ceph health detail:
root@ld3955:~# ceph health detail
HEALTH_WARN 2 pools have many more objects per pg than average; Reduced data...
I have modified the crush map and the ceph cluster runs stably again.
Please check the attached document for this crush map; if you don't mind, please comment on it in case there's an error.
Now, there's only one issue, but this is related to "unknown pgs" and I will open another thread for...
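For reference, the generic way to export, edit and re-import a crush map looks roughly like this (file names are arbitrary; not necessarily the exact commands I ran):
# export and decompile the current crush map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (roots, rules, device classes), then recompile and load it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new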
This "something wrong with Ceph" is identified:
Crush Map was resetted to some kind of default.
But this results in a faulty Ceph Cluster where device classes and rules are expected to be used in the crush map.
Update:
This crush map does not reflect the device classes.
Therefore it must be customized.
I did this already before, so my question is:
How can the crush map be "reset" after a cluster node reboot? Why does this happen?
Actually, I did define device classes.
The output looks strange to me:
root@ld3955:~# ceph osd crush tree --show-shadow
ID CLASS WEIGHT TYPE NAME
-52 nvme 0 root hdd~nvme
-60 nvme 0 host ld5505-hdd~nvme
-58 nvme 0 host ld5506-hdd~nvme
-56 nvme 0...
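One thing worth checking is whether the OSDs rewrite their crush location at startup (the "osd crush update on start" option); the device-class rules themselves can be recreated like this (a sketch; rule names and the "default" root are examples, the root names in my map differ):
# keep OSDs from changing their crush location at startup
# (goes into the [osd] section of /etc/pve/ceph.conf)
#   osd crush update on start = false
# recreate replicated rules bound to a device class (names are examples)
ceph osd crush rule create-replicated replicated_nvme default host nvme
ceph osd crush rule create-replicated replicated_hdd default host hdd
# and verify the per-class shadow hierarchy again
ceph osd crush tree --show-shadow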
Hi,
after creating an MDS and CephFS manually in my cluster, I want to create a storage of type cephfs.
However, this fails with an error:
error with cfs lock 'file-storage_cfg': mount error: exit code 2
This is the complete output:
root@ld3955:~# pvesm add cephfs pve_cephfs
mount error 2 = No such...
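What I'm checking before retrying pvesm add (a rough checklist; the pve_cephfs name is from above, the monitor address and secret path are placeholders based on where PVE usually keeps the CephFS secret):
# is there an active MDS and a filesystem at all?
ceph mds stat
ceph fs ls
# does the client secret PVE uses exist?
ls -l /etc/pve/priv/ceph/
# try the mount by hand to see the raw error
mount -t ceph <mon-ip>:6789:/ /mnt -o name=admin,secretfile=/etc/pve/priv/ceph/pve_cephfs.secret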
Hello!
I have successfully set up a PVE cluster with Ceph.
After creating ceph pools and the related RBD storage, I moved the VM's drive to this newly created RBD storage.
Due to some issues I needed to reboot all cluster nodes one after the other.
Since then the PVE storage reports that all RBD is...
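What I checked so far to narrow this down (pool and storage names below are placeholders; the keyring path only applies if the storage was added with its own keyring):
# can the node still talk to the cluster and see the pool?
ceph -s
rbd ls -p <pool>
# does the keyring PVE expects for this storage exist?
ls -l /etc/pve/priv/ceph/<storage>.keyring
# what does PVE itself report for the storage?
pvesm status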
Hi,
I'm running a PVE cluster on 6 nodes.
In total 2 different server models are used, but all are from Lenovo.
In the server configuration I can define 3 types of server timeouts:
OS Watchdog
Loader Watchdog
Enable Power Off Delay
I read here that by default all hardware watchdog modules are...
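To see what is actually active on my nodes I'm looking at this (a sketch; ipmi_watchdog is just an example of a hardware watchdog module):
# which watchdog module is loaded? (softdog is the PVE default)
lsmod | grep -E 'ipmi_watchdog|softdog'
# PVE's HA stack reads its watchdog module from this file, if set
cat /etc/default/pve-ha-manager
# current watchdog device(s)
ls -l /dev/watchdog*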
Right.
I managed to add all nodes to the cluster successfully using this command:
pvecm add ld3955-corosync1 --ring0_addr 172.16.0.x --ring1_addr 172.16.1.x
root@ld3955:~# pvecm status
Quorum information
------------------
Date: Wed May 22 14:53:39 2019
Quorum provider...
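To double-check that both rings are really in use on every node, these generic corosync/PVE checks help:
# ring status as seen by corosync on the local node
corosync-cfgtool -s
# membership and ring addresses as PVE sees them
pvecm status
cat /etc/pve/corosync.conf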
Hi,
I want to set up a multi-node HA cluster.
I've completed the OS and PVE installation and configured separate networks; this results in the following /etc/hosts:
root@ld3955:~# more /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.97.206.91 ld3955.example.com ld3955
# The...
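The remaining entries follow the same pattern, roughly like this for the separate corosync rings (the addresses and the -corosync2 names below are placeholders, not my real entries):
# corosync ring 0
172.16.0.1 ld3955-corosync1.example.com ld3955-corosync1
# corosync ring 1
172.16.1.1 ld3955-corosync2.example.com ld3955-corosync2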
Hello!
I have a question regarding the content of the Grub menu vs. /boot/grub/grub.cfg.
In my case the content is inconsistent.
This is causing an issue with booting a BTRFS snapshot, because the required snapshot won't boot with the options displayed in the Grub menu.
I'm running these software...
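As far as I understand, the menu is generated from grub.cfg, so the usual way to get them back in sync would be to regenerate it (standard Debian commands):
# regenerate /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d
update-grub
# or explicitly:
grub-mkconfig -o /boot/grub/grub.cfg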
I never said that I want to use the same storage for backup, Disk-Image and Container.
This makes no sense at all.
But I want to create a storage of type rbd to be used for backup.
This would create another rbd in a specific pool that is only used for backups.
Hm... I don't fully understand your response, but maybe my question was not clear.
My use case is this:
Running a PVE + Ceph cluster, I want to store backups in an RBD.
Creating an RBD storage with PVE only allows me to select the content types Disk-Image and Container.
As a workaround I created a RBD...
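Roughly, such a workaround can look like this (image name, size and mount point are examples, not necessarily what I used): map an RBD image on the node, put a filesystem on it and add it as a directory storage for backups.
# create and map an RBD image in the backup pool (names/sizes are examples)
rbd create backup/vzdump-space --size 500G
rbd map backup/vzdump-space
# put a filesystem on it and mount it
mkfs.ext4 /dev/rbd/backup/vzdump-space
mkdir -p /mnt/rbd-backup
mount /dev/rbd/backup/vzdump-space /mnt/rbd-backup
# add it to PVE as a directory storage that allows backups
pvesm add dir rbd-backup --path /mnt/rbd-backup --content backup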
Hello!
Can you please share some information on why storage type rbd is only available for Disk-Image and Container?
I would prefer to dump a backup to another rbd.
THX