Hi all,
we needed to replace a drive caddy (long story) for a running drive on a Proxmox cluster running Ceph (15.2.17). The drives themselves are hot-swappable. First I stopped the OSD, pulled out the drive, changed the caddy, and refitted the (same) drive. The drive quickly showed up in Proxmox...
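For reference, a minimal sketch of the usual hot-swap sequence (the OSD id 12 is an assumption; setting noout first keeps Ceph from rebalancing while the drive is out):

ceph osd set noout                 # prevent rebalancing while the disk is out
systemctl stop ceph-osd@12         # stop the OSD before pulling the drive
# ... swap the caddy, reinsert the same drive ...
systemctl start ceph-osd@12        # bring the OSD back up
ceph osd unset noout               # allow normal recovery behaviour again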
Hello,
this has probably been discussed often, but here is my question too:
ever since we set up our Ceph cluster, we have seen uneven usage across all OSDs.
4 nodes with 7x 1TB SSDs (1U, no space left)
3 nodes with 8x 1TB SSDs (2U, some space left)
= 52 SSDs
pve 7.2-11
all Ceph nodes are showing us the same, like...
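A minimal sketch of how such an imbalance is usually inspected and evened out on recent Ceph releases (the balancer mode shown is just the common choice, not taken from the post):

ceph osd df tree          # show per-OSD utilisation and variance
ceph balancer status      # check whether the balancer is active
ceph balancer mode upmap  # upmap balancing (requires min-compat-client luminous or newer)
ceph balancer on          # let Ceph move PGs to even out OSD usage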
Hi collegues,
I would like to ask about migration speed between PVE cluster nodes.
I have a 3-node PVE 8 cluster with 2x40G network links: one for the Ceph cluster network (1) and another one for the PVE cluster / Ceph public network (2).
The Ceph OSDs are all NVMe.
In the cluster options I have also set one of these...
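A minimal sketch of pinning migration traffic to the fast link via /etc/pve/datacenter.cfg (the subnet is an assumption, not taken from the post):

# /etc/pve/datacenter.cfg
migration: insecure,network=10.10.10.0/24   # run migrations over the 40G subnet; insecure skips encryption overhead
# then test a live migration from the CLI:
qm migrate 101 node2 --online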
Dear Proxmox experts,
For some days now, the performance of every machine and container in my cluster has been extremely slow.
Here is some general info about my setup:
I am running a 3-node Proxmox cluster with up-to-date packages.
All three cluster nodes are almost identical in their hardware specs...
Hi Community!
The recently released Ceph 18.2 Reef is now available in the Proxmox test and no-subscription repositories for early adopters to install or upgrade.
Upgrades from Quincy to Reef:
You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
New...
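For anyone planning the upgrade, the linked how-to boils down to roughly this sequence (a condensed sketch, not a replacement for the wiki guide):

ceph osd set noout                    # avoid rebalancing during the upgrade
# switch the Ceph repository to reef on every node, then per node:
apt update && apt full-upgrade
systemctl restart ceph-mon.target     # restart mons first, one node at a time
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target     # then OSDs, node by node
ceph osd require-osd-release reef     # once all daemons report Reef
ceph osd unset noout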
Dear community,
the HDD pool on our 3 node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (Enterprise, Samsung PM983/PM9A3).
The Ceph documentation recommends sizing block.db at 4% to 6% of the 'block' size:
block.db is either 3.43% or around 6%...
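As a quick worked example of that 4-6% rule (the 3.84 TB OSD size here is an assumption, purely to show the arithmetic):

# 4% and 6% of a 3.84 TB (3840 GB) block device:
echo "3840 * 0.04" | bc    # 153.60 GB block.db (lower bound)
echo "3840 * 0.06" | bc    # 230.40 GB block.db (upper bound)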
Good morning everyone,
last week we had a very strange case. We have been running a 10-node PVE cluster (incl. Ceph) for several years and had never had any notable problems until now. The cluster runs extremely stably and we are very happy with it. But: last week we...
Hi,
I am after some advice on the best way to expand our Ceph pool. Some steps have already been taken, but I need to pause until I understand what to do next.
Initially we had a Proxmox Ceph cluster with 4 nodes, each with 4 x 1TB SSD OSDs. I have since added a 5th node with 6 x 1TB SSD...
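A minimal sketch of the checks that usually follow adding a new node's OSDs (the pool name vm-pool is an assumption):

ceph -s                                          # watch the rebalance triggered by the new OSDs
ceph osd df tree                                 # confirm the new OSDs are in and filling up
ceph osd pool autoscale-status                   # see whether pg_num should grow for the larger cluster
ceph osd pool set vm-pool pg_autoscale_mode on   # or adjust pg_num manually instead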
Hi there!
Has anyone used or had experience with activating Ceph's RBD image encryption? (see: RBD Image encryption)
What I want is to have the disks of some VMs encrypted. OSD encryption doesn't solve this case, since it doesn't protect against an attacker gaining access to the host.
I also had a look...
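For context, RBD-level (LUKS) image encryption is driven from the rbd CLI; a minimal sketch, where the pool, image and passphrase file names are assumptions:

printf 'change-me' > /root/vm-101.pass                              # passphrase file (hypothetical)
rbd encryption format vm-pool/vm-101-disk-0 luks2 /root/vm-101.pass # format an existing image with LUKS2
# the passphrase then has to be supplied whenever the image is opened (librbd encryption options)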
I am trying to upgrade to Proxmox 8.
After finishing the update of all nodes to 7.4-16 (and rebooting each node after the install)
and updating Ceph from Pacific to Quincy,
I just noticed that in the Ceph Performance tab I don't see any traffic (I usually have around 300-6000 MB/s) with 1000+ IOPS.
Systems are...
Hello, I am a long-time Proxmox user.
We have purchased the following hardware for a new project and are about to launch an entry-level cloud computing platform. But I still haven't settled on the installation strategy and scenario.
Hardware:
5 x Dell PowerEdge R630
1 x Dell Unity 600F...
Hi there,
We are trying to use the Ceph cluster for persistent storage in our local OKD installation with Rook.
The operator creates the block storage correctly in the Ceph pool, but pods and also local clients are not able to map the storage.
rbd ls --id=admin -m...
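When mapping from a plain client fails, a common culprit is image features the kernel rbd module does not support; a minimal sketch of the usual checks (pool and image names are assumptions):

rbd info ceph-blockpool/pvc-xxxx --id admin                                     # check enabled features
rbd feature disable ceph-blockpool/pvc-xxxx object-map fast-diff deep-flatten   # features krbd often lacks
rbd map ceph-blockpool/pvc-xxxx --id admin                                      # retry the map
dmesg | tail                                                                    # the kernel usually logs why a map failed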
Hi,
I have a use case where we have a fairly large PVE 7 cluster connected to an external Ceph cluster. We would like to set up a second PVE cluster in the same physical location, since the current cluster is now pushing 36 hosts. The new cluster will be connected to the same external Ceph...
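On the second cluster, the external pool is normally just added as another RBD storage entry; a minimal sketch of /etc/pve/storage.cfg (monitor IPs, pool and storage names are assumptions), plus where the keyring has to go:

# /etc/pve/storage.cfg on the new cluster
rbd: ceph-external
        content images,rootdir
        krbd 0
        monhost 10.0.0.11 10.0.0.12 10.0.0.13
        pool pve2-pool
        username admin
# copy the client keyring to /etc/pve/priv/ceph/ceph-external.keyring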
Hi, PVE geeks:
I built a PVE cluster on three servers (with Ceph), with the PVE & Ceph package versions as follows:
root@node01:~# pveversion
pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve)
root@node01:~# ceph --version
ceph version 16.2.13 (b81a1d7f978c8d41cf452da7af14e190542d2ee2)...
Hi there,
I am playing with Ceph and a three-node cluster for learning.
I have a 4TB turnkey filestore container using ZFS storage on one node. I have been moving its volume into a new Ceph pool. The move failed the first couple of times, partly because I did not provide enough space and partly...
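Before retrying a move of that size, it usually pays to check how much room the target pool really has; a minimal sketch (the pool name vm-pool is an assumption):

ceph df                           # MAX AVAIL per pool already accounts for replication
rbd du -p vm-pool                 # space used by images already in the pool
ceph osd pool get vm-pool size    # replication factor, i.e. how much raw space the 4TB move will consume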
Hi,
I upgraded a cluster all the way from Proxmox 6.2/Ceph 14.x to Proxmox 8.0/Ceph 17.x (latest). The hardware is Epyc servers, all flash / NVMe. I can rule out hardware issues, and I can reproduce the issue.
Everything has been running fine so far, except that my whole system gets slowed down when I...
We are organising a Proxmox VE day in Ede, the Netherlands, on Thursday the 12th of October 2023.
In the morning, we will discuss how you can innovatively meet your virtualisation, storage and private cloud needs within your budget with Proxmox VE. We will also look at how to easily achieve...
I have a three-node cluster with PVE 8 and Ceph installed. The nodes are named pfsense-1, pfsense-2 and r730. I have been running PVE for about a year and recently installed Ceph on these nodes. It worked well, but when I rebooted the r730 node, it wouldn't boot (I waited for 15 hours). I reinstalled the...
Hi guys,
I would appreciate some assistance on this, as it is quite an urgent issue.
This morning when I woke up, I received a call from our client; it seems that 2 NVMe OSDs on their site crashed without any apparent reason. The other, non-NVMe OSDs are running normally without any issues.
Issue...
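A minimal sketch of the first places to look for the crash reason (the OSD id 5 and device path are assumptions):

ceph crash ls                 # recent daemon crashes recorded by the mgr
ceph crash info <crash-id>    # backtrace and metadata for one crash
journalctl -u ceph-osd@5 -b   # OSD log around the time it went down
smartctl -a /dev/nvme0        # rule out the NVMe device itself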
Hi,
Ceph Pacific (16.2.13) to Quincy 17.2.6.
During the Quincy upgrade from Ceph Pacific, we could not upgrade our OSD disks while we were upgrading the monitors, managers and metadata servers to 17.2.6.
Since I could not upgrade the OSD disks, the OSD versions remained at 16.2.13.
Does anyone have...
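If the packages on the OSD nodes are already at 17.2.6, the OSD daemons usually just need a restart to pick up the new binaries; a minimal sketch of how that is normally checked and finished (done node by node):

ceph versions                         # shows which daemons still report 16.2.13
ceph osd set noout                    # avoid rebalancing while restarting
systemctl restart ceph-osd.target     # on each node in turn, wait for HEALTH_OK in between
ceph osd require-osd-release quincy   # once every OSD reports 17.2.6
ceph osd unset noout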