I reinstalled one of the three Proxmox Ceph nodes with a new name and new IP.
I removed all LVM data and wiped the filesystems of the old disks with:
dmsetup remove_all
wipefs -af /dev/sda
ceph-volume lvm zap /dev/sda
Now when I create OSDs via GUI or CLI they are always filestore and I don't get it, the default should...
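Not from the post, but a minimal sketch of what I would check: whether ceph.conf still forces filestore, and whether the disk is truly clean before recreating the OSD (device name as above):
grep -iE 'objectstore|filestore' /etc/pve/ceph.conf
ceph-volume lvm zap /dev/sda --destroy
pveceph osd create /dev/sda    # bluestore is the default when nothing in ceph.conf overrides it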
Hi, I have a worst-case scenario:
OSDs in a 3-node cluster (4 NVMes per node) won't start.
We had an IP config change on the public network and the mons died, so we managed to bring the mons back with new IPs.
Corosync on 2 rings is fine,
all 3 mons are up,
but the OSDs won't start.
How to get back to the pool, already...
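A rough checklist sketch, assuming the new monitor addresses are already in the monmap: make sure the config the OSDs read reflects the new public network, then look at why one individual OSD fails (the OSD ID is a placeholder):
grep -E 'public_network|mon_host' /etc/pve/ceph.conf
systemctl restart ceph-osd.target
journalctl -u ceph-osd@0 -b    # placeholder ID; the log usually names the unreachable address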
Hello,
I am currently running Proxmox 6.2-1 with 3 nodes.
All nodes are equipped identically and host Ceph.
When I go to Ceph > OSD on one of the 3 nodes, the existing OSDs are not always displayed.
Sometimes the view has to be refreshed several times before I...
Hello,
I am currently running Proxmox 6.2-1 with 3 nodes.
All nodes are equipped identically and host Ceph.
On one of the 3 nodes I have the problem that I cannot create an OSD.
After I select the disk and click "Create", for a brief moment there appears...
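Not from the thread, just the usual suspect when OSD creation fails on a single node: leftover partition or LVM signatures on the target disk. A sketch with a hypothetical device name:
ceph-volume lvm zap /dev/sdX --destroy
wipefs -af /dev/sdX
pveceph osd create /dev/sdX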
Hey Guys,
I am having an issue when creating an OSD... Not sure why though... What would cause the below error when creating the OSD?
command 'ceph-volume lvm create --cluster-fsid 248fab2c-bd08-43fb-a562-08144c019785 --data /dev/sdd' failed: exit code 1
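The exit code alone says little; the real cause usually ends up in the ceph-volume log. A sketch of where to look first, on a stock Proxmox/Ceph install:
tail -n 50 /var/log/ceph/ceph-volume.log
ceph-volume lvm zap /dev/sdd --destroy    # only if the log points at stale LVM metadata on /dev/sdd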
Hello!
I've been playing with Ceph and after initially getting it working, I realised I had everything on the wrong network. So I deleted and removed everything, uninstalled and reinstalled Ceph and began recreating the cluster using the correct network. I had to destroy / zap some drives on...
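For reference, a hedged sketch of pointing a fresh Ceph setup at the intended networks; the subnets are placeholders, not taken from the post:
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24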
Hello,
we are currently evaluating a migration to Proxmox in an enterprise environment. Encryption of all sensitive data is essential for us. Yes, we are aware of cold boot attacks, but we want to raise the bar further in case of physical theft of the servers...
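Relevant here, as a hedged note: Proxmox can create dm-crypt encrypted OSDs, either via the encryption checkbox in the GUI OSD dialog or on the CLI (the device name is a placeholder):
pveceph osd create /dev/sdX --encrypted 1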
Hi All,
I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks:
7x 6TB 7200 Enterprise SAS HDD
2x 3TB Enterprise SAS SSD
2x 400GB Enterprise SATA SSD
This setup was previously used for an old Ceph (filestore) cluster, where it was configured to use the 2x 400 GB SATA SSDs to...
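A sketch of the bluestore equivalent of that filestore-journal layout, placing the DB/WAL for an HDD-backed OSD on one of the SSDs; device names are placeholders:
pveceph osd create /dev/sdX --db_dev /dev/sdY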
Hi, I have 5 Proxmox nodes + 5 OSDs + 5 monitors + 5 managers (all nodes have the same lineup but different HDD sizes and different hardware). I added the 5th one last, and I see this:
Ceph settings:
I have only 1 pool, for all VMs:
I sometimes disable 2 Proxmox nodes so that only 3 Proxmox nodes are available.
You...
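Whether 3 of 5 nodes is enough depends mostly on the pool's replication settings; a quick way to check them (pool name is a placeholder):
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size    # with size 3 / min_size 2, I/O continues as long as 2 replicas stay up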
Hi!
Please help me understand, just to be clear:
Example: say everything works fine, Ceph health is OK, VMs are stored in Ceph, and by default only node1 is running VMs:
node1: 1000 GB OSD (1 HDD)
node2: 1000 GB OSD (1 HDD)
node3: 1000 GB OSD (1 HDD)
node4: 500 GB OSD (1 HDD)
node5: 500 GB + 500 GB OSD (2...
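A back-of-the-envelope only, assuming a replicated pool with size 3 across these hosts: raw capacity is roughly 1000 + 1000 + 1000 + 500 + 1000 = 4500 GB, so at most about 4500 / 3 = 1500 GB usable, and in practice less, since each replica must land on a different host and the default nearfull warning fires at 85%. ceph df reports the value Ceph itself computes.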
During a disaster "test" (randomly removing drives), I pulled one OSD and one drive that held the journals for 4 other OSDs. So I assumed this would down/out 4, possibly 5, OSDs. Upon re-inserting the drives they were given different /dev/sd[a-z] labels, and now the journal disk has the sd[a-z] label...
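Not from the post, but worth verifying before assuming the worst: filestore journals are normally referenced by partition UUID rather than by /dev/sdX, so the renamed device letters may not matter at all:
ls -l /var/lib/ceph/osd/ceph-*/journal    # should point at /dev/disk/by-partuuid/..., not /dev/sdX
ls -l /dev/disk/by-partuuid/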
Hi,
I removed one node from the cluster using 'pvecm delnode node_name'. (Before executing this, I removed the node from the monitor and manager list and shut it down.)
Now the OSDs are in down/out status and I am unable to remove them from the GUI (since the node is already removed).
How can I remove...
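A sketch of the CLI cleanup once the host itself is gone (the OSD ID is a placeholder):
ceph osd purge 7 --yes-i-really-mean-it
# on older releases the equivalent is:
# ceph osd crush remove osd.7
# ceph auth del osd.7
# ceph osd rm osd.7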
Cannot create an OSD for Ceph.
Same error in GUI and terminal:
# pveceph osd create /dev/nvme0n1
Error: any valid prefix is expected rather than "".
command '/sbin/ip address show to '' up' failed: exit code 1
The only thing I can think of, since the last time it worked, is that I now have two...
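The empty prefix in the failing 'ip address show to' call usually means pveceph could not determine the Ceph public network to match an address against; a sketch of what I would check (the subnet shown is a placeholder):
grep -E 'public_network|cluster_network' /etc/pve/ceph.conf
# expected something like: public_network = 192.168.1.0/24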
Hello,
I'm trying out Ceph to see if it would fit our needs. In my testing I tried removing all OSDs to change the disks to SSDs. The issue I'm facing is that when trying to delete the last OSD I get hit with an error "no such OSD" (see attached screenshot). The command line returns no OSDs, so it's like...
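A hedged way to see what, if anything, still lingers outside the OSD map (the ID is a placeholder):
ceph osd tree
ceph-volume lvm list              # lists OSD data still sitting on the local disks
ceph osd crush remove osd.0       # only if the tree still shows a stale entry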
Hello,
I am currently running Proxmox 5.4-13 with 3 nodes.
All nodes are equipped identically and also host Ceph.
Each node currently has 4 OSDs, each on a 1 TB SSD.
Ceph runs 3/2.
I have gradually increased the placement groups from 256 up to 896.
Unfortunately, here the...
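For context, the commonly cited rule of thumb is pg_num ≈ (number of OSDs × 100) / pool size, rounded to a power of two: here (12 × 100) / 3 = 400, i.e. 512 PGs, so 896 is already above that (and not a power of two). ceph osd df shows how many PGs each OSD actually carries.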
I ran the updates, which installed a new kernel. After the reboot the monitor did not start. I attempted to start it from the command line:
systemctl status ceph-mon@proxp01.service
● ceph-mon@proxp01.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled...
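The systemd status rarely contains the real reason; a sketch of where the monitor's own output would be (unit and hostname taken from the excerpt above):
journalctl -u ceph-mon@proxp01 -b
tail -n 100 /var/log/ceph/ceph-mon.proxp01.log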
Hello Guys!
I have a big question about the Ceph cluster and I need your help or your opinion.
I installed a simple 3-node setup with Ceph.
Each node has 2x 146 GB HW RAID 1 + 18x 600 GB 10k SAS without RAID.
(In summary we have 54 OSD devices and we have to buy 3 SSDs for journals.)
And my big...
Hi,
here I describe 1 of the 2 major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD).
The issue is that I cannot start any KVM virtual machine or LXC container; the boot process just hangs after a few seconds.
All these KVMs and LXCs have in common that their virtual...
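Not from the thread, but when every RBD-backed guest hangs at boot, the first thing I would rule out is the cluster being unable to serve I/O at all (the pool name is a placeholder):
ceph -s              # inactive PGs, down OSDs or slow ops show up here
rbd -p vm-pool ls    # if this hangs as well, the problem sits below the hypervisor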
Hi all,
I need help creating an OSD on a partition.
On our server we are provided with 2 NVMe drives in RAID-1. This is the partition layout:
root@XXXXXXXX:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 1.8T 0 disk
├─nvme1n1p1 259:1 0 511M 0 part...
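pveceph osd create expects a whole, unused disk, so for a partition the underlying tool can be driven directly; a sketch with a hypothetical partition name:
ceph-volume lvm create --data /dev/nvme1n1p4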
Hi,
after adding an OSD to Ceph it is advisable to create a relevant entry in the CRUSH map with a weight depending on the disk size.
Example:
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
Question:
How is the weight defined depending on disk size?
Which algorithm can be...
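For reference, the usual convention is that the CRUSH weight equals the device capacity in TiB: a 6 TB disk is 6 × 10^12 / 2^40 ≈ 5.46 TiB, so a weight of about 5.458; a 1 TB disk comes out around 0.909. For example (ID and hostname are placeholders):
ceph osd crush set osd.12 5.458 root=default host=node1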