This is my error message
Ceph version is 17.2.6
Proxmox VE version is 8.0.3
root@zmc-pve10:~# systemctl status ceph-mgr@zmc-pve10
× ceph-mgr@zmc-pve10.service - Ceph cluster manager daemon
Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; preset: enabled)
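When a templated unit like this has died, the usual first steps are the journal and a restart; a minimal sketch, assuming the mgr instance is named after the node (zmc-pve10, as in the prompt above):

```shell
# Why did the mgr die? (unit instance name assumed to match the node name)
journalctl -u ceph-mgr@zmc-pve10 -n 50 --no-pager

# Clear the failed state and try again
systemctl reset-failed ceph-mgr@zmc-pve10
systemctl restart ceph-mgr@zmc-pve10

# Confirm an active mgr from the cluster's point of view
ceph -s | grep -i mgr
```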
In our company we have decided to migrate our applications from a public cloud provider to our own solution based on a Proxmox HA cluster. Since this is our first such installation, I would greatly appreciate opinions on the approach we have chosen.
Hey folks and hope you are all doing great,
After numerous trial-and-error attempts (see my previous post), I have managed to get my 3 Raspberry Pi CM4 boards working together as a cluster via Proxmox 7.
The issue that I am trying to tackle now, is how to modify the default bridge network (vmbr0)...
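For reference, a typical vmbr0 stanza in /etc/network/interfaces looks like the sketch below; the address, gateway, and physical port name (eth0) are placeholders to adapt. With ifupdown2 (the Proxmox default), `ifreload -a` applies changes without a reboot.

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0
```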
I'm having trouble migrating VMs when shutting down a node (failover doesn't work).
It is a 3-node cluster (Dell, 2x R710 and an R510) with Proxmox 8.0.3 and Ceph version 17.2.6 Quincy.
Test VM to migrate:
/etc/pve/101.conf: No such file or directory
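That path is expected to fail: PVE does not keep VM configs directly under /etc/pve. A quick way to locate the config for VMID 101 (taken from the error above):

```shell
# Local node's view (symlink into the cluster filesystem)
ls -l /etc/pve/qemu-server/101.conf

# Cluster-wide, under whichever node currently owns the VM
ls -l /etc/pve/nodes/*/qemu-server/101.conf

# Or ask the tooling instead of poking at paths
qm config 101
```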
Dear experts and professors, do you have any good solutions to this problem?
This is my PVE cluster; HA is enabled, and some VMs are included in HA.
One day, my Ceph switch went down,
and then all of the PVE nodes restarted afterwards.
This is a production environment; we want, in this case, all of the PVE...
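What the poster describes matches HA fencing: if the switch going down also cuts Corosync traffic, nodes running HA resources lose quorum and self-fence via watchdog, i.e. they reboot by design. A few commands to confirm that theory on such a setup:

```shell
# Quorum state of the node
pvecm status

# HA manager's view of nodes and resources
ha-manager status

# Watchdog activity around the time of the reboot
journalctl -u watchdog-mux --no-pager | tail -n 20
```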
On a Proxmox 8.0.4 node, I have the following in /etc/apt/sources.list.d/ceph.list:
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
I executed apt update, then ran pveceph install.
But I always received the following message:
WARN: Enterprise repository selected...
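Recent pveceph versions let you pick the repository explicitly instead of defaulting to enterprise; a sketch, assuming your version supports the --repository flag:

```shell
pveceph install --repository no-subscription
```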
Hi! Please help me understand what is happening. I have PVE 7 and Ceph version 17.2.5. I created a CephFS and am trying to mount it on my freshly installed CentOS 8 server, but I get an error in the CLI:
mount -t ceph 10.20.0.120:/cephfs /mnt/cephfs -o name=user_test,secret=%ANY-SECRET%
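Passing the secret directly on the command line often fails with the kernel client; a sketch of the more common pattern, using a secretfile and making sure the CephX user actually has filesystem caps (user name and monitor address taken from the post, file paths are placeholders):

```shell
# On a Ceph node: grant client.user_test rw access to the filesystem
ceph fs authorize cephfs client.user_test / rw

# On the CentOS client: store only the base64 key in a root-only file,
# e.g. /etc/ceph/user_test.secret, then mount
mount -t ceph 10.20.0.120:6789:/ /mnt/cephfs \
  -o name=user_test,secretfile=/etc/ceph/user_test.secret
```

Also note that `:/cephfs` in the original command is a path *inside* the filesystem, not the filesystem's name; to select a filesystem named cephfs, use `-o fs=cephfs` (on older kernels, `mds_namespace=cephfs`).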
Hi, I just did a hard disk swap, and now all the OSDs on that node fail to start with
`systemctl start ceph-osd@0`
The output of `systemctl status ceph-osd@0` is:
ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/lib/systemd/system/ceph-osd@.service...
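After a disk swap, the OSDs' LVM volumes often just need re-activating before the units can start; a sketch of the usual checks:

```shell
# What does ceph-volume see on this node?
ceph-volume lvm list

# Re-activate every OSD it can find metadata for
ceph-volume lvm activate --all

# Then retry the unit and watch the log
systemctl start ceph-osd@0
journalctl -u ceph-osd@0 -n 50 --no-pager
```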
I have configured a 3-node Proxmox cluster with an equal number of OSDs and equal storage. Storage use keeps increasing despite the allocated space; the used percentage grows from 99 to 100 percent. Why is this happening?
My Ceph cluster has 3x 3TB and 3x 1TB drives with SSD WAL and DB. The write speeds on my VMs are kinda meh. From what I understand, the 3TB drives will get 3x the write requests of the 1TB drives. Is my understanding correct? And would it be better if I swapped my 3TB with 1TB drives, making it 2...
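The understanding is correct: CRUSH weights default to raw capacity, so a 3TB OSD receives roughly three times the data, and therefore the writes, of a 1TB OSD. You can inspect and, with care, override the weights (osd.3 and the value 2.0 below are placeholders):

```shell
# Weights and fill level per OSD
ceph osd df tree

# Lower an OSD's CRUSH weight (the unit is roughly TiB of capacity)
ceph osd crush reweight osd.3 2.0
```

Downweighting trades usable capacity for a more even request distribution, so it is a workaround rather than a fix.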
I have a question regarding the upgrade process for Proxmox VE in combination with Ceph.
Currently, my Proxmox VE setup is running version 7, and I also have Ceph installed with version 15.2.17 (Octopus). I am planning to upgrade Proxmox VE to version 8, as per the official upgrade...
We are deploying cloud-init images from Terraform (telmate/proxmox) by cloning a template in Proxmox that is already configured with cloud-init.
The new machine gets created with the next available VMID, and a small 4 MB disk is created to feed the cloud-init settings. The disk created for cloud-init is...
We needed to replace a drive caddy (long story) for a running drive on a Proxmox cluster running Ceph (15.2.17). The drives themselves are hot-swappable. First I stopped the OSD, pulled out the drive, changed the caddy, and refitted the (same) drive. The drive quickly showed up in Proxmox...
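For a short planned pull of a healthy drive, the usual pattern is to suppress rebalancing around the swap; a sketch (osd.12 is a placeholder for the affected OSD id):

```shell
ceph osd set noout            # don't start rebalancing while the disk is out
systemctl stop ceph-osd@12

# ... swap the caddy, reseat the drive ...

systemctl start ceph-osd@12
ceph osd unset noout
ceph -s                       # wait for HEALTH_OK
```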
Maybe this has been discussed often, but here is my question too:
since we set up our Ceph cluster, we have seen uneven usage across all OSDs.
4 nodes with 7x 1TB SSDs (1U, no space left)
3 nodes with 8x 1TB SSDs (2U, some space left)
= 52 SSDs
All Ceph nodes show us the same, like...
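On reasonably current Ceph, the built-in balancer in upmap mode usually evens this out; a sketch of checking the skew and enabling it:

```shell
# Fill level and variance per OSD
ceph osd df tree

# Enable automatic balancing via pg-upmap
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```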
I would like to ask about migration speed between PVE cluster nodes.
I have a 3-node PVE 8 cluster with 2x40G network links: one for CEPH cluster (1) and another one for PVE cluster/CEPH public network (2).
The Ceph OSDs are all NVMe.
In the cluster options I've also set one of these...
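One knob worth checking on fast links: by default, migration traffic is tunneled over SSH, and the encryption overhead can cap throughput well below 40G. A datacenter.cfg fragment that pins migration to the fast network and, on a trusted network only, drops the encryption (the subnet is a placeholder):

```
# /etc/pve/datacenter.cfg
migration: type=insecure,network=10.10.10.0/24
```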
For some days now, the performance of every machine and container in my cluster has been extremely slow.
Here is some general info on my setup:
I am running a 3-node Proxmox cluster with up-to-date packages.
All three cluster nodes are almost identical in their hardware specs...
The recently released Ceph 18.2 Reef is now available on all Proxmox Ceph repositories to install or upgrade.
Upgrades from Quincy to Reef:
You can find the upgrade how to here: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
New Installation of Reef:
Use the updated ceph...
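Condensed, the usual rolling-upgrade pattern looks like the sketch below; treat the linked wiki how-to as authoritative for the exact order and checks.

```shell
ceph osd set noout                 # once, before touching any node

# On each node in turn:
apt update && apt full-upgrade
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target

# When 'ceph versions' shows 18.2.x everywhere:
ceph osd require-osd-release reef
ceph osd unset noout
```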
The HDD pool on our 3-node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (enterprise: Samsung PM983/PM9A3).
The Ceph documentation recommends sizing block.db at 4% to 6% of the 'block' size:
block.db is either 3.43% or around 6%...
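As a sanity check on those percentages, the 4-6% rule of thumb works out as follows for a few common data-device sizes (plain shell arithmetic):

```shell
# block.db rule of thumb: 4-6% of the 'block' (data) device size
for block_gb in 1000 3000 4000; do
  min=$(( block_gb * 4 / 100 ))
  max=$(( block_gb * 6 / 100 ))
  echo "block ${block_gb}G -> block.db ${min}G..${max}G"
done
# prints e.g.: block 3000G -> block.db 120G..180G
```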