Hi all,
I'm new to Ceph and ran into an interesting warning message that I don't think I'm interpreting correctly. I'd appreciate any suggestions or comments on where to start unravelling this, or on whether I'm missing something.
First of all, the configuration is structured as follows:
3 node...
Hi,
I have configured a 3-node Ceph cluster.
Each node has 2 RAID controllers, 4 SSDs and 48 HDDs.
I used this syntax to create an OSD:
pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdv1
pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdw1
pveceph osd create /dev/sdf...
Hello,
I have added an extra disk to each node and added them as OSDs, so I have a total of 3 disks per node across 3 nodes.
I am now getting the following warning in Ceph: "too few PGs per OSD (21 < min 30)".
Is there a way to resolve this?
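For context on the warning above: the PGs-per-OSD figure is roughly pg_num × replica size ÷ number of OSDs. With 9 OSDs, the default replica size of 3 and a single pool at the default 64 PGs, that works out to 64 × 3 / 9 ≈ 21, which matches the warning. A hedged sketch of the usual fix (replace the pool name and target value with your own) is to raise pg_num/pgp_num to the next power of two that brings you over 30:

ceph osd pool get <poolname> pg_num        # check the current value
ceph osd pool set <poolname> pg_num 128    # 128 x 3 / 9 ≈ 42 PGs per OSD
ceph osd pool set <poolname> pgp_num 128   # keep pgp_num in sync

Note that on pre-Nautilus releases pg_num can only be increased, never decreased, so it is worth running the numbers through a PG calculator before committing to a value.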
I have a repeating failure when installing an OSD on one node.
- installing through the GUI seems to work... but the OSD is not visible
- installing through the command line seems to work... but the OSD is not visible either
e.g.:
# ceph-disk zap /dev/sdb
Caution: invalid backup GPT header, but valid main header...
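When an OSD create appears to succeed but nothing shows up, a few hedged checks on the node usually narrow it down — whether the OSD id was ever registered, what the prepare step left on the disk, and whether the daemon started:

ceph osd tree                        # is the new osd.N listed at all, and under which host?
ceph-disk list                       # (ceph-disk era) what was actually prepared on /dev/sdb?
systemctl list-units 'ceph-osd@*'    # which OSD units exist and their state
journalctl -u ceph-osd@<id> -e       # last log lines if the unit failed

If the OSD never appears in the tree, the prepare/activate step failed; the journal output is usually more informative than the GUI task log.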
Preface:
I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can flip their /dev/{names}. This will cause a full cluster rebalance if the /dev/{names} flip. The...
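One hedged way around the /dev name shuffling described above is to reference the Optane cards by their stable by-id links (or to let ceph-volume/LVM track them, since LVM finds its volumes by signature rather than by device name). For example, to see the persistent names:

ls -l /dev/disk/by-id/ | grep -i nvme

and then use the /dev/disk/by-id/nvme-... path instead of /dev/nvme0n1 when carving out DB/WAL partitions, so a reordering on reboot does not change which device the OSDs point at.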
Hi Everyone,
I'm in a bit of a situation here.
We identified a bad drive (still running for now) and decided we needed to remove it. We therefore followed these instructions, believing it would work without a hitch and that all our containers/VMs would continue to run. Unfortunately, that was not the case...
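For reference, the removal sequence usually recommended (a hedged sketch; <id> is the numeric OSD id) is to drain the OSD first and only destroy it once the data has rebalanced away:

ceph osd out <id>                            # stop new data landing on it, triggers rebalance
ceph -s                                      # wait until recovery finishes and health is OK again
systemctl stop ceph-osd@<id>                 # on the node that hosts the drive
ceph osd purge <id> --yes-i-really-mean-it   # Luminous+: removes it from CRUSH, auth and the OSD map

Pulling the disk or destroying the OSD before the "out" rebalance completes is what typically leaves PGs degraded and impacts running VMs/containers.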
Hi,
Today there was an unexpected power outage at the facility where my servers are co-located; the entire datacenter went dark. Luckily I had fresh backups to simply restore, for the most part.
However, I have an issue with one OSD on one server: it is stuck in "active+recovery_wait+degraded". I have...
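To see why recovery is not progressing for the state mentioned above, a few hedged read-only checks usually tell whether the PGs are merely queued behind other recovery work or actually blocked:

ceph -s                          # overall recovery/backfill progress
ceph health detail               # which PGs are degraded and why
ceph pg dump_stuck unclean       # list the stuck PGs
ceph pg <pgid> query             # per-PG view: which OSDs it is waiting on

"recovery_wait" on its own usually just means the PG is waiting for a recovery slot; if it never leaves that state, the pg query output shows which peer OSD it is waiting for.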
When using bluestore OSDs, the backup data stops flowing: the task shows as running, but no more data moves. I've swapped all the OSDs back to filestore and the backups work perfectly. Backups also stop if there is any OSD using bluestore.
Syslog: only errors reported:
Sep 24 22:58:35 proxmox3...
Today I added some new HDDs to our storage nodes. All HDDs are Seagate IronWolf 8TB.
As you can see in the attachment, the new HDDs are shown with a different size.
The only difference I know of is that I created the old OSDs with ceph-volume myself, not within the GUI.
I used ceph-volume because ceph-disk is deprecated...
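A hedged way to compare the old and new OSDs described above is to look at what Ceph thinks each OSD's size is and at the device or LV actually backing it, since a GUI-created OSD and a hand-made ceph-volume OSD can end up on differently sized partitions or LVs:

ceph osd df tree                 # per-OSD size, use and weight as Ceph sees it
ceph-volume lvm list             # which LV/partition backs each OSD, and its size
lsblk                            # raw partition layout on the node

If the new OSDs report a smaller size, the difference is normally visible here as a partition or LV that does not span the whole disk.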
Hi,
I have created 3 OSDs on the same node.
root@ld4257:~# pveceph createosd /dev/sda -bluestore -journal_dev /dev/sdc4 -wal_dev /dev/sdc4
create OSD on /dev/sda (bluestore)
using device '/dev/sdc4' for block.db
Caution: invalid backup GPT header, but valid main header; regenerating
backup...
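As an aside on the command above: when no separate WAL device is given, BlueStore places the WAL on the DB device anyway, so pointing -journal_dev and -wal_dev at the same partition is redundant, and a single partition like /dev/sdc4 can only back one OSD's DB, so each further OSD needs its own partition. A hedged way to check what each OSD actually ended up with is to look at its block symlinks, e.g.:

ls -l /var/lib/ceph/osd/ceph-0/block*    # targets of block, block.db and (if present) block.wal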
Hello!
I have setup (and configured) Ceph on a 3-node-cluster.
All nodes have
- 48 HDDs
- 4 SSDs
For best performance I defined every HDD as a data device and the SSDs as log devices.
This means I created 12 partitions on each SSD and created an OSD like this on node A:
pveceph createosd /dev/sda -journal_dev...
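For what it's worth, a hedged sketch of carving one SSD into 12 equal journal/DB partitions with sgdisk (the device name /dev/sdX and the +20G size are placeholders, adjust to your SSD):

for i in $(seq 1 12); do
    sgdisk --new=${i}:0:+20G --change-name=${i}:"ceph journal" /dev/sdX
done
sgdisk --print /dev/sdX    # verify the layout before handing partitions to pveceph

With 48 OSDs per node it is also worth remembering that losing one SSD takes 12 OSDs with it, which is a sizeable chunk of the host to rebuild.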
After getting the node up and running again (https://forum.proxmox.com/threads/it-seemed-failed-disks-prevent-proxmox-from-booting-but-it-was-actually-noapic-that-was-needed.43720/) I now have a problem that has been reported a few times elsewhere, but not in the systemd-based version of Proxmox...
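If this is the common "OSDs don't come back after reboot" symptom, a hedged set of checks on the affected node is to see whether the OSD units exist and are enabled, and whether the OSD volumes were activated at all:

systemctl list-units 'ceph-osd@*'        # which OSD units systemd knows about and their state
systemctl enable --now ceph-osd@<id>     # enable + start a unit that stayed inactive
ceph-volume lvm activate --all           # (ceph-volume OSDs) re-create tmpfs mounts and start daemons
ceph-disk activate-all                   # (older ceph-disk OSDs) the equivalent for GPT-partition OSDs

Which of the last two applies depends on how the OSDs were created.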
I am trying to add some new disks to a brand new server that is part of the cluster. When I try to add an OSD I get the following errors. This is running the very latest 5.1-51 with the very latest Ceph 12.2.4.
root@virt04:~# pveceph createosd /dev/sdc
file '/etc/ceph/ceph.conf' already exists...
Hi to all,
I've an enormous problem with ceph.
This is my configuration:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/30
fsid = 3f50371b-9be3-4af3-8e6d-c42e06ab3d22
keyring =...
Hello,
I have a cluster with 3 PCs, each having 3 OSDs.
It works great so far, but over time (after a few hours / days) the OSDs start to go down.
The cluster has been in use for about 4 weeks and I lose roughly one OSD per day.
Once an OSD is down, it cannot be restarted via Ceph commands.
Even...
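When OSDs drop out at that rate, a hedged first step is usually to find out whether the daemon is being killed (OOM, assert, crash) or the disk underneath is failing, before touching Ceph itself:

journalctl -u ceph-osd@<id> -e           # last lines before the daemon died (asserts, suicide timeouts)
dmesg -T | grep -iE 'error|ata|scsi'     # kernel-level I/O errors on the backing disk
smartctl -a /dev/sdX                     # SMART health of the drive behind the OSD
free -h                                  # rule out the node simply running out of memory

The pattern in the journal output (abort/assert vs. I/O error vs. oom-kill) usually decides the next step.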
Hello,
I have a 3-node Ceph cluster and created OSDs as follows.
1. SSDs 3-6 (/dev/sdc-sdh) are OK and usable.
2. But for SSDs 1-2 (/dev/sda-sdb), the OSD tab shows "Not applicable" and no osd.xx.
3. And for SSDs 1-2 (/dev/sda-sdb), the Disk tab shows usage type "partitions".
4. I deleted the partitions on the SSD...
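For the two SSDs stuck showing "partitions", a hedged cleanup (destructive to anything still on those disks) is to let ceph-volume wipe the old partition table and signatures so the tooling sees them as empty again:

ceph-volume lvm zap /dev/sda --destroy
ceph-volume lvm zap /dev/sdb --destroy
lsblk /dev/sda /dev/sdb                  # confirm no partitions or LVs remain

If those disks ever held an OS, double-check with lsblk first that nothing is still mounted from them.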
We have a new four-node cluster that is almost identical to other clusters we are running. However, since it has been up and running, at what seem to be random times we end up with errors similar to:
2018-02-05 06:48:16.581002 26686 : cluster [ERR] Health check update: Possible data damage: 4...
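For the "Possible data damage" health error above, the usual (hedged) triage is to identify the inconsistent PGs from the health output and ask their primary OSD to repair them, one PG at a time:

ceph health detail                                        # lists the inconsistent PG ids
rados list-inconsistent-obj <pgid> --format=json-pretty   # which objects/shards disagree
ceph pg repair <pgid>                                     # repair one PG at a time

If the same PGs keep turning up inconsistent, that tends to point at a specific failing disk or controller rather than random corruption.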
I'm having a difficult time using a disk in my machine. It is not being used for anything; I've successfully done an sgdisk -Z on the drive and it shows that nothing is on it.
Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size...
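sgdisk -Z only clears the partition table structures; the Proxmox tooling can still refuse the disk if old LVM/filesystem signatures or a device-mapper holder are left behind. A hedged way to check and clean that up (destructive to anything remaining on the disk):

lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sda   # anything still claiming the disk?
wipefs -a /dev/sda                          # remove leftover filesystem/RAID/LVM signatures
ceph-volume lvm zap /dev/sda --destroy      # Ceph's own wipe, including LVs from old OSDs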
Hi,
Recently my datacenter had some issues at the vRack level, which therefore affected all services depending on it. My cluster running Proxmox was also affected, in particular Ceph. I receive this error on one of the clusters:
root@node02-sxb-pve01:~# service ceph-osd@1 status
● ceph-osd@1.service -...
Hi,
I've just updated a 3 nodes pve 5.0 cluster with latest luminous packages.
Everything seems to be good after the upgrade and reboot, but on one node I see weird syslog entries relating to an "osd.12" service.
Oct 12 20:51:32 dc-prox-13 systemd[1]: ceph-osd@12.service: Service hold-off time over...
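The "hold-off time over" loop for a single OSD unit after an upgrade is commonly just a stale systemd unit for an OSD that no longer exists on that node (or whose data is gone). A hedged check and cleanup:

ceph osd tree | grep osd.12              # does osd.12 still exist, and on which host?
ls /var/lib/ceph/osd/                    # is there a ceph-12 directory on this node?
systemctl disable ceph-osd@12.service    # if it is stale, stop systemd from retrying it
systemctl reset-failed ceph-osd@12.service

If osd.12 does exist and should be running on this node, the journal for that unit will show why it keeps dying instead.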