Hi!
I'm currently running a 3-node PVE cluster with a Ceph pool based on HDDs (80 TB total).
Now I have added 2 SSDs to each node and want to create a second, separate pool (10 TB total).
I read https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_device_classes but some things are not clear to me...
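For reference, the device-classes chapter boils down to two steps: create a CRUSH rule restricted to the "ssd" class, then point a new pool at that rule. A minimal sketch, assuming the default failure domain "host" (pool name and pg_num are just examples):

    # Replicated rule that only places data on OSDs of class "ssd"
    ceph osd crush rule create-replicated ssd-only default host ssd

    # New pool bound to that rule (name and pg_num are examples)
    pveceph pool create ssd-pool --crush_rule ssd-only --pg_num 128

Note that the default replicated rule ignores device classes, so the existing HDD pool would also need an hdd-only rule to keep it off the new SSDs.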
We've been using a traditional SAN with iSCSI for over 10 years; it has been ultra-reliable.
Now we're looking at Ceph and have built a 3-server Ceph cluster with Dell R740xd machines.
Each server has six interfaces: three to one switch, three to the other.
One port is the public internet.
One port is public Ceph...
Hello!
I have been experimenting with Ceph to see whether it can properly augment my NAS for a small subset of tasks, but I noticed that if a disk is removed and put back in, the Ceph cluster doesn't detect it until a reboot. This is because it is defined using the /dev/sdX format instead of the...
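The usual workaround is to address disks through their stable names under /dev/disk/by-id/ instead of /dev/sdX, which can shift across hotplugs and reboots. A sketch, assuming the OSD is created via pveceph (the serial in the path is a made-up placeholder):

    # List persistent device names
    ls -l /dev/disk/by-id/

    # Create the OSD via the persistent path (placeholder serial)
    pveceph osd create /dev/disk/by-id/ata-Samsung_SSD_870_EVO_EXAMPLE123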
BACKGROUND
I work for a small business that provides a 24-hour service on our servers and requires as close to 100% uptime as possible. Our old IT company sold us 2 identical Dell R420 servers several years ago, each with a single 6-core processor, 4x 3.5" 600GB 10K SAS HDDs in RAID 10, and 16GB RAM, and...
Hello,
we have had a Proxmox cluster running for some time now. We operate 4 servers with 128 x AMD EPYC 7543 32-Core Processor (2 sockets), 512GB RAM, and Ceph with a total of 16 OSDs.
The performance of Linux servers is optimal; there we have no problems with...
Hi,
yesterday one OSD went down and dropped out of my cluster; systemd stopped the service after it "crashed" 4 times. I tried restarting the OSD manually, but it keeps crashing immediately, so the OSD looks effectively dead.
Here's the first ceph crash info (the later ones look the same):
{...
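For anyone triaging a similar failure, the full crash reports can be pulled from any node; a minimal sketch (the crash ID and OSD number are placeholders):

    # List recorded crashes, then dump one report in full
    ceph crash ls
    ceph crash info <crash-id>

    # Check the OSD's own log for the backtrace (OSD 12 is an example)
    journalctl -u ceph-osd@12 --since "1 hour ago"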
Hello,
I'm new to Proxmox, so if I use any term incorrectly, please help me out.
I have an HPE ProLiant DL380 G6 with 7 disks in my lab. I have installed Proxmox on two of the disks in a RAID 1 mirror. I have left the others out of the RAID, as I want to test Ceph for my environment.
The other five disks are not...
On an old installation I had PVE and Ceph on the same network. To improve performance and security, I'm now separating the networks more strictly.
The first step I'm trying is to separate the cluster network from the PVE network. I was following the...
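For context, the split is controlled by two settings in /etc/pve/ceph.conf; a hedged sketch with placeholder subnets:

    # /etc/pve/ceph.conf (excerpt) -- example subnets
    [global]
        public_network  = 10.10.10.0/24
        cluster_network = 10.10.20.0/24

OSDs pick up the new cluster_network after a restart; monitors keep the address they were created with, so moving them means destroying and re-creating them one at a time.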
I don't know where else to ask this, but I am trying to get my Ceph RBD image mounted again on a new Windows OS install.
Due to unfortunate disk errors (both SSDs died at the same time), I cannot access my previously working Ceph config and keyring for Windows. I did have it working quite...
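If the cluster itself is still healthy, only the local copies are gone; both files can be regenerated from any monitor node. A sketch, assuming the Windows client was using client.admin (the monitor addresses are placeholders):

    # Re-export the keyring from the cluster
    ceph auth get client.admin -o ceph.client.admin.keyring

    # Minimal ceph.conf for the Windows client (example addresses)
    # [global]
    #     mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3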
Hello,
I have problems with various VMs that do not seem to release unused space.
Here is the VM Config:
Here is the fstab:
Here is the LVM config:
Here is the Filesystem free disk space:
I've also run fstrim manually after powering the VM off and on:
But when I investigate du on...
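One common cause is that the virtual disk doesn't pass discards through to the storage. A hedged sketch of the usual checks (VMID 100 and scsi0 are examples):

    # Enable discard (and SSD emulation) on the VM disk
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1

    # Inside the guest, trim all mounted filesystems verbosely
    fstrim -av

With discard=on set and a trim run inside the guest, the freed blocks should be released back to the underlying storage.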
I have been running a small (3-node) homelab Proxmox cluster with Ceph for almost 5 months.
It has been running great, and I have learned a lot on many levels.
So far so good, but yesterday I noticed that two of the three 1TB Samsung SSD Pro NVMe drives were in degraded mode because of percentage...
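The wearout value shown in the GUI comes straight from the drive's SMART data, so it can be verified directly; a minimal sketch (the device path is an example):

    # "Percentage Used" is the NVMe wear indicator
    smartctl -a /dev/nvme0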
Hey dear Proxmox Pros,
I have an issue with my Ceph cluster after attempting to migrate it to a dedicated network.
I have a 3 node Proxmox Cluster with Ceph enabled. Since I only had one network connection on each of the nodes, I wanted to create a dedicated and separate network only for the...
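For reference, the usual migration sequence looks roughly like the sketch below, assuming cluster_network has already been changed in /etc/pve/ceph.conf (the node name is a placeholder):

    # Restart OSDs one node at a time, waiting for HEALTH_OK in between
    systemctl restart ceph-osd.target
    ceph -s

    # Monitors keep their original bind address; move them by
    # re-creating them one at a time (placeholder node name)
    pveceph mon destroy <nodename>
    pveceph mon create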
Hi All
Are there any recommendations for configuring pools with differing goals?
I am setting up two pools with opposite goals: a fast NVMe pool for VMs and a slow CephFS on spinning drives. Considering the different objectives, I was thinking of setting up two Ceph clusters. Both would be...
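For what it's worth, two separate clusters usually aren't needed for this; a single cluster can keep the pools apart with device-class CRUSH rules. A hedged sketch (rule and pool names are examples):

    # One rule per device class
    ceph osd crush rule create-replicated nvme-rule default host nvme
    ceph osd crush rule create-replicated hdd-rule default host hdd

    # Pin each pool to its rule (pool names are examples)
    ceph osd pool set vm-pool crush_rule nvme-rule
    ceph osd pool set cephfs_data crush_rule hdd-rule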
Hi All
I have a 3-node cluster with Ceph storage. I want to configure the Ceph cluster to keep working with only one node (in the event of a double failure).
Currently, Ceph continues working correctly after the failure of one node, but as soon as two nodes are down, Ceph becomes unavailable and you...
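What governs this is the pool's size/min_size (default 3/2): once fewer than min_size replicas are up, I/O stops. min_size can be lowered to 1 so a single surviving replica keeps serving, but that is widely discouraged because it risks data loss; a sketch for completeness (the pool name is an example):

    # Inspect current replication settings
    ceph osd pool get vm-pool size
    ceph osd pool get vm-pool min_size

    # Allow I/O with one surviving replica (risky)
    ceph osd pool set vm-pool min_size 1

Note that with two of three nodes down the monitors also lose quorum, so the last node would not keep serving anyway without extra monitor arrangements.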
I'm trying to create two Ceph pools across 3 nodes (Site A) and 3 nodes (Site B) with two different rules. The first rule is a stretched Ceph rule spanning Site A and Site B; the second is a standalone rule on Site A only.
Each of the 6 nodes has 3 OSDs, so there are 18 OSDs in total.
I'm...
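For what it's worth, both placement policies can be expressed as CRUSH rules in one cluster; a hedged sketch in decompiled CRUSH-map syntax, assuming datacenter buckets named siteA and siteB exist:

    # Stretched rule: two replicas per site, four in total
    rule stretch_rule {
        id 1
        type replicated
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

    # Site-local rule: all replicas stay inside Site A
    rule siteA_rule {
        id 2
        type replicated
        step take siteA
        step chooseleaf firstn 0 type host
        step emit
    }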
Hello, I am trying to create a high-availability Proxmox cluster with three nodes and three SSDs, but I can't seem to get it working. The monitors on two of the three nodes aren't working, and I cannot actually create a VM on the shared storage.
I'm operating a Dell PowerEdge R740 server as a Ceph node. The current boot configuration uses a single SAS SSD as the OS drive. I'm planning to migrate to a Dell BOSS M.2 SATA card via a PCIe adapter (two drives in RAID 1) (https://www.ebay.com/itm/296760576190) to optimize the...
Hey everyone,
I’m running into a serious issue while trying to install Ceph Reef (Windows x64) on my Windows laptop. The installation seems to go smoothly until it prompts me to restart the PC. However, after the restart, my computer goes into automatic repair mode, and I can't seem to get it...
I have 3 HPE ProLiant DL360 Gen10 servers that I would like to configure as a 3-node Proxmox cluster. Each node has 6 x 2TB data drives, for a total of 12TB per node, plus 2 system drives in a mirror. Most configurations I have seen rely on an external NAS as the shared storage space. I am especially...
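One point worth checking up front: with Ceph's default 3-way replication, usable space is roughly a third of raw. A rough sketch with the numbers above:

    # Rough usable capacity at the default size=3 (example figures)
    # raw    = 3 nodes x 12 TB = 36 TB
    # usable = 36 TB / 3       = 12 TB, minus headroom for self-healing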
Hi Community!
The recently released Ceph 19.2 Squid is now available on the Proxmox Ceph test and no-subscription repositories to install or upgrade.
Upgrades from Reef to Squid:
You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid
New Installation of Reef...