Hey folks, I hope you are all doing great.
After numerous trial-and-error attempts (see my previous post), I have managed to get my 3 Raspberry Pi CM4 boards orchestrating together in a cluster via Proxmox 7.
The issue I am trying to tackle now is how to modify the default bridge network (vmbr0)...
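For reference, a typical vmbr0 stanza in /etc/network/interfaces looks roughly like the sketch below; the address, gateway and physical port (eth0) are placeholders for your own values:

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eth0
            bridge-stp off
            bridge-fd 0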
I'm having trouble migrating VMs when shutting down a node (failover doesn't work).
It is a 3-node cluster (Dell, 2x R710 and an R510) with Proxmox 8.0.3 and Ceph version 17.2.6 (Quincy).
Test VM to migrate:
/etc/pve/101.conf: No such file or directory
root@svr1:/etc/pve# cat...
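One hedged pointer on that error: VM configs do not live directly under /etc/pve but in the per-node qemu-server directories, so something like the following is usually where 101.conf actually is:

    # config of a VM registered on the local node
    cat /etc/pve/qemu-server/101.conf
    # config of a VM registered on another node (nodename is a placeholder)
    cat /etc/pve/nodes/<nodename>/qemu-server/101.conf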
Dear experts, do you have any good solutions to this problem?
This is my PVE cluster; HA is enabled, and some virtual machines are included in HA.
One day my Ceph switch went down,
and then all of the PVE nodes restarted afterwards.
This is a production environment; in this case we want all of the PVE...
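For context, one common cause of that behaviour (an assumption about your topology): if the failed switch also carried the corosync/cluster traffic, the nodes lose quorum and the HA watchdog fences, i.e. reboots, them. A hedged first check of quorum and HA state:

    pvecm status        # corosync membership and quorum
    ha-manager status   # HA manager and resource state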
On a Proxmox 8.0.4 node, I have the following in /etc/apt/sources.list.d/ceph.list:
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
I executed apt update, then ran pveceph install
But I always received the following message:
WARN: Enterprise repository selected...
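In case it helps, pveceph install can be told which repository to use, so a sketch like the following (assuming the no-subscription repo is what you want) should avoid the enterprise default:

    pveceph install --repository no-subscription --version quincy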
Hi! Please help me understand what is happening. I have PVE 7 and Ceph version 17.2.5. I created a CephFS and tried to mount it on my freshly installed CentOS 8 server, but there is an error in the CLI:
mount -t ceph 10.20.0.120:/cephfs /mnt/cephfs -o name=user_test,secret=%ANY-SECRET%
mount: /mnt/cephfs...
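For comparison, a kernel CephFS mount usually points at the monitor port and the path inside the filesystem, with the key supplied via a secret file; a hedged sketch (user_test and the file path are placeholders):

    # put the client key on the CentOS box, e.g. from `ceph auth get-key client.user_test` on a mon node
    mount -t ceph 10.20.0.120:6789:/ /mnt/cephfs -o name=user_test,secretfile=/etc/ceph/user_test.secret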
Hi, I just did a hard disk swap and now none of the OSDs on that node are able to start with the service
`systemctl start ceph-osd@0`
the output of systemctl status ceph-osd@0 is
ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/lib/systemd/system/ceph-osd@.service...
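A couple of hedged next steps that usually narrow this down after a disk swap: check the journal for the failing unit and let ceph-volume re-activate the LVM-based OSDs:

    journalctl -u ceph-osd@0 --no-pager -n 50   # why the unit failed
    ceph-volume lvm list                        # which OSDs ceph-volume knows about
    ceph-volume lvm activate --all              # re-activate OSDs after the swap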
I have configured a 3-node Proxmox cluster with an equal number of OSDs and equal storage on each node. The storage usage keeps increasing despite the allocated space; the used percentage has grown from 99 percent to 100 percent. Why is this happening?
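As a starting point (a sketch, not a diagnosis), the per-pool and per-OSD views usually show where the space is going:

    ceph df detail    # usage per pool, including replication overhead
    ceph osd df tree  # usage per OSD and per host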
My Ceph cluster has 3x 3TB and 3x 1TB drives, with SSD WAL and DB. The write speeds on my VMs are kinda meh. From what I understand, the 3TB drives will get 3x the write requests of the 1TB drives. Is my understanding correct? And would it be better if I swapped my 3TB drives with 1TB drives, making it 2...
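That understanding matches how CRUSH works: each OSD's weight defaults to its capacity in TiB, so a 3TB OSD receives roughly three times the data (and writes) of a 1TB OSD. A hedged sketch for inspecting and, only if really needed, overriding a weight (osd.0 is a placeholder):

    ceph osd df tree                   # current CRUSH weights and utilisation
    ceph osd crush reweight osd.0 1.0  # override a single OSD's weight (use with care)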
Hi.
I have a question regarding the upgrade process for Proxmox VE in combination with Ceph.
Currently, my Proxmox VE setup is running version 7, and I also have Ceph installed with version 15.2.17 (Octopus). I am planning to upgrade Proxmox VE to version 8, as per the official upgrade...
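For what it's worth, the documented path for that combination goes through each Ceph release in turn (Octopus to Pacific to Quincy while still on PVE 7) before moving to PVE 8; a hedged sketch of the usual pre-checks:

    ceph versions   # confirm all daemons run the same release before each step
    pve7to8 --full  # Proxmox upgrade checklist script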
Hello, I have a ceph pool "SSD_POOL" and I can't delete unused images inside it.
Has anyone gone through something similar?
I'm trying to remove, for example, the vm-103-disk-0 image
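In case it is one of the usual suspects, a hedged sketch for checking what still holds the image (watchers or snapshots) before removing it, assuming the image really is unused:

    rbd status SSD_POOL/vm-103-disk-0      # any clients still watching the image?
    rbd snap ls SSD_POOL/vm-103-disk-0     # snapshots block deletion
    rbd snap purge SSD_POOL/vm-103-disk-0  # remove snapshots (destructive)
    rbd rm SSD_POOL/vm-103-disk-0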
Hi
We are deploying cloud-init images from Terraform (telmate/proxmox) by cloning a template in Proxmox that is already configured with cloud-init.
The new machine gets created with the next available VMID, and a small 4 MB disk is created to feed the cloud-init settings. The disk created for cloud-init is...
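For reference, the manual equivalent of what the provider does is roughly this hedged sketch (VMIDs, names and the storage are placeholders):

    qm clone 9000 123 --name test-vm --full        # clone the cloud-init template
    qm set 123 --ide2 local-lvm:cloudinit          # the small cloud-init settings disk
    qm set 123 --ciuser admin --ipconfig0 ip=dhcp  # cloud-init options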
Hi all,
we needed to replace a drive caddy (long story) for a running drive on a Proxmox cluster running Ceph (15.2.17). The drives themselves are hot-swappable. First I stopped the OSD, pulled out the drive, changed the caddy, and refitted the (same) drive. The drive quickly showed up in Proxmox...
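A hedged sketch of the checks usually done after re-seating a drive like that (osd.N stands in for the affected OSD):

    ceph osd tree               # is the OSD still down/out?
    systemctl start ceph-osd@N  # restart the daemon for the re-seated disk
    ceph osd in osd.N           # mark it back in if it was marked out
    ceph -s                     # watch recovery/backfill progress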
Hello,
this has probably been discussed often, but here is my question too:
ever since we set up our Ceph cluster, we have seen uneven usage across all OSDs.
4 nodes with 7x 1TB SSDs (1U, no space left)
3 nodes with 8x 1TB SSDs (2U, some space left)
= 52 SSDs
PVE 7.2-11
All Ceph nodes are showing us the same, like...
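In case it is useful, the upmap balancer usually evens out per-OSD usage on recent Ceph releases; a hedged sketch:

    ceph balancer status
    ceph balancer mode upmap
    ceph balancer on
    ceph osd df tree   # re-check the spread afterwards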
Hi colleagues,
I would like to ask you about migration speed between PVE cluster nodes.
I have a 3-node PVE 8 cluster with 2x 40G network links: one for the Ceph cluster network (1) and another one for the PVE cluster/Ceph public network (2).
The Ceph OSDs are all NVMe.
In the cluster options I've also set one of these...
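For reference, the datacenter-wide migration settings end up in /etc/pve/datacenter.cfg; a hedged sketch (the network and type values are placeholders for your own choice):

    # /etc/pve/datacenter.cfg
    migration: type=insecure,network=10.10.10.0/24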
Dear Proxmox experts,
For some days now, the performance of every machine and container in my cluster has been extremely slow.
Here is some general info on my setup:
I am running a 3-node Proxmox cluster with up-to-date packages.
All three cluster nodes are almost identical in their hardware specs...
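Since the whole cluster is affected, a couple of hedged first checks before digging deeper:

    ceph -s       # Ceph health, slow ops, recovery activity
    pveperf       # quick CPU/fsync benchmark per node
    iostat -x 5   # per-disk latency and utilisation (sysstat package)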
Hi Community!
The recently released Ceph 18.2 Reef is now available on all Proxmox Ceph repositories to install or upgrade.
Upgrades from Quincy to Reef:
You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef
New Installation of Reef:
Use the updated ceph...
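For a manual repository setup, the no-subscription line for Reef on Bookworm would look like this (a sketch; pick the repository type that matches your subscription):

    # /etc/apt/sources.list.d/ceph.list
    deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription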
Dear community,
the HDD pool on our 3 node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (Enterprise, Samsung PM983/PM9A3).
The Ceph documentation recommends sizing block.db at 4% to 6% of the 'block' (data) size:
block.db is either 3.43% or around 6%...
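As a quick worked example of that guideline (illustrative numbers, assuming 4 TB data disks): 4% of 4 TB is about 160 GB and 6% about 240 GB of block.db per OSD, which on Proxmox could be requested at OSD creation roughly like this (device names and size are placeholders):

    # ~160 GB DB volume on the NVMe for a 4 TB spinner (4% of 4000 GB)
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 160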
Good morning everyone,
we had a very strange case last week. We have been running a 10-node PVE cluster (incl. Ceph) for several years and have never had any notable problems until now. The cluster runs extremely stably and we are very satisfied. But: last week we...
Hi,
I am after some advice on the best way to expand our ceph pool. Some steps have already been undertaken, but I need to pause until I understand what to do next.
Initially we had a proxmox ceph cluster with 4 nodes each with 4 x 1TB SSD OSD. I have since added a 5th node with 6 x 1TB SSD...
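A couple of hedged checks while the new OSDs backfill, before deciding on the next step:

    ceph osd df tree                # how data is spreading onto the new node
    ceph osd pool autoscale-status  # whether pg_num should grow with the added capacity
    ceph -s                         # backfill/recovery progress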