Hello,
I am test-installing Proxmox 5 on an OVH dedicated server (EG-64G, E5-1650v3).
I have 2 SSD disks.
I have issues creating an OSD, as I only get a 100 MB partition.
- ceph version 12.1.1
I already tried to zap the disk with both:
- ceph-disk zap /dev/nvme1n1
or (as found in...
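For reference, what usually unsticks OSD creation is a wipe that also clears leftover partition signatures. A minimal sketch, run here against a scratch image file so it is safe to execute as-is; the real-disk commands are shown as comments, and /dev/nvme1n1 is just the example device from above (destructive — triple-check the device name before using it):

```shell
# Scratch image standing in for a disk, so this sketch harms nothing real.
DEV=/tmp/fakedisk.img
rm -f "$DEV"
truncate -s 64M "$DEV"

# Zero the first megabytes: this clears GPT headers, filesystem superblocks,
# and leftover ceph signatures that make ceph-disk refuse the device.
dd if=/dev/zero of="$DEV" bs=1M count=16 conv=fsync status=none

# On a real disk, the usual sequence with the ceph 12.x tooling would be:
#   ceph-disk zap /dev/nvme1n1
#   sgdisk --zap-all /dev/nvme1n1   # wipe GPT and protective MBR
#   partprobe /dev/nvme1n1          # re-read the partition table
echo "wiped $DEV"
```

After a wipe like this the disk should show up as unused again in the GUI.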
Hi,
We are using a 3-node Ceph cluster. One of my nodes died. I rebuilt it and added the node back to the cluster. The cluster is working fine.
Now I can't create OSDs.
The OSD is created successfully, but it shows up as a partition, not as osd.x, under the disk.
I have removed all data and cleaned the partitions using the following...
Hi,
I have some issues with my Ceph cluster installation.
I have 3 physical servers, with Ceph installed on each node. Each node has 3 SAS disks and several 10 Gbps NICs. One disk has 300 GB, where the Proxmox packages are installed; the other disks have 1 TB each, available for my OSDs. But when I...
Hi,
My issue is very strange.
I have a cluster of 3 physical servers with Proxmox VE 4.4 installed. I installed Ceph on each node with the 'pveceph install -version hammer' command. I have 3 monitors on different nodes in my cluster.
Each server has one 1 TB disk and another where...
Hi,
I have a 3-node Proxmox cluster with Ceph. I'm happy with the current setup and performance.
Now we are planning disaster recovery. We are using separate NFS storage for VM backups.
I have a few questions and I need expert advice.
Our Ceph pools are set up with 2 replicas. We have 4 OSDs in each...
Hey guys, I've got an issue here: why can't I create a Ceph OSD? Did I miss something?
When I try to create a Ceph OSD, the popup window shows "NO DISK UNUSED" in the Disk column, while the Journal Disk column shows "use OSD disk". Any suggestions, guys?
Hello!
I'm using Proxmox with the no-subscription repository
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.40-1-pve)
and installed Ceph via "pveceph install -version jewel"
I'm having trouble with Ceph when creating OSDs. I'm using the pveceph utility, as described in...
Hi there,
I'm trying to set up a separate pool on SSD storage within my Ceph cluster. I followed the instructions here (https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/), using pveceph wherever possible.
It works insofar as everything was set up and showed up fine in the...
Hello everyone,
I hope someone here can help me with CephFS:
As written above, my problem is that in my Proxmox cluster with CephFS the data is distributed very unevenly across the OSDs. Here is the setup:
Currently 4 servers (a 5th is planned). Each server has...
Hi,
I have a 5-node Proxmox/Ceph cluster. The OSDs are located on 3 of those 5 machines,
4 OSDs per node, with a P3700 NVMe SSD as the journal device.
My question: 'ceph-disk list' should print the OSDs with some info about journaling,
but the journal info for each of my OSDs is empty?! Is this ok...
The GUI doesn't allow partitions as Ceph OSD journal devices, and it should. We do it manually, but there seems to be no reason not to support it.
(Side note: the "Disks" tab in the new GUI is empty; it should probably list physical disks with, say, their SMART status.)
Hello everybody!
We are using five nodes in one cluster, and we use Ceph for the main storage.
Now my question: is it possible to configure a mail notification for the status of the OSDs?
It should send an email if one of the OSDs reaches the warning/health status "X near full osd(s)" at 85%...
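One common way to get this is a small cron-driven checker that polls `ceph health` and mails when the warning appears. A minimal Python sketch, assuming the classic "N near full osd(s)" health wording, a local MTA on localhost, and placeholder addresses (all assumptions — adjust to your cluster):

```python
import re
import smtplib
import subprocess
from email.message import EmailMessage

# Matches the "N near full osd(s)" phrase in the `ceph health` summary.
NEAR_FULL_RE = re.compile(r"(\d+)\s+near full osd", re.IGNORECASE)

def near_full_count(health_text: str) -> int:
    """Return how many OSDs the health summary reports as near full."""
    m = NEAR_FULL_RE.search(health_text)
    return int(m.group(1)) if m else 0

def check_and_mail() -> None:
    """Poll `ceph health` and send a mail if any OSD is near full."""
    health = subprocess.run(["ceph", "health"],
                            capture_output=True, text=True).stdout
    n = near_full_count(health)
    if n:
        msg = EmailMessage()
        msg["Subject"] = f"Ceph warning: {n} near full osd(s)"
        msg["From"] = "ceph@example.com"      # hypothetical addresses
        msg["To"] = "admin@example.com"
        msg.set_content(health)
        with smtplib.SMTP("localhost") as s:  # assumes a local MTA
            s.send_message(msg)

# Typical usage: call check_and_mail() from a cron entry, e.g. every
# 5 minutes:  */5 * * * * /usr/local/bin/ceph-mail-check.py
```

The same pattern extends to the "full osd(s)" (95%) warning with a second regex.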
Hallo!
We are running Proxmox in combination with Ceph.
I have been searching for a long time for a way to receive an email when one of the OSDs reaches the status "near full osd(s)" (85%), and of course also when it is almost too late, "full osd(s)" (95%).
Is that somehow...
Dear Community,
I have been using Proxmox for many years, and our DC has grown. So I decided to implement Ceph to get better density on my Proxmox nodes. Currently we are running 7 nodes, and on 4 of them I have installed Ceph; each of those 4 nodes has 2 OSDs, and the journal device is one SSD...
Hi, I am playing with Ceph and Proxmox.
I installed it several times and documented the commands to get "up and running" fast in our IT environment.
But now I have the problem that one "create osd" command does not work anymore.
I cannot create the second "osd" at /dev/sda (/dev/sdb -->...