I'm new here, so thanks in advance for your patience. I want to start a new project, and after years of using ESXi in the past, I want to try something new. Currently my IT career doesn't give me a lot of time to tinker with hardware and software, but it pushes me to be in the...
I have been using Proxmox since around version 4, and I have to say I am still impressed.
At the moment I have a somewhat ailing 3-node cluster on version 6.3 that I want to replace.
And I have a few questions about this, since apparently during setup, or rather in the Ceph configuration, I somehow...
We are seeing major performance issues while running fstrim on VMs backed by an SSD pool (3 replicas, 50 OSDs) with Ceph (16.2.7) on Proxmox. We have a workload that causes fairly large data fluctuations on our VMs (CentOS 7). We have therefore enabled discard mode for the disks and run...
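For context, this kind of setup has two halves: discard is enabled per virtual disk on the Proxmox side, and trimming is triggered inside the guest. A sketch of both (the VM ID 100, the `ceph-ssd` storage name, and the disk volume are placeholders, not taken from the post):

```shell
# On the Proxmox host: enable discard (and SSD emulation) on the virtual disk.
# VM ID, storage name, and volume are assumptions for illustration.
qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,discard=on,ssd=1

# Inside the CentOS 7 guest: trim all mounted filesystems, verbosely.
fstrim -av
```

With discard enabled, each trimmed extent becomes a discard request against RBD, which is why a large fstrim can generate a burst of load on the OSDs.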
We have a cluster of 9 servers with an HDD Ceph pool.
We recently purchased SSDs for a new SSD pool.
Now we need to create a new CRUSH rule for the SSDs:
ceph osd crush rule create-replicated replicated-ssd defaulthost ssd
But we are getting the following...
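One thing that stands out in the command above: the CRUSH root and the failure domain are separate arguments, so `defaulthost` as a single token would itself cause an error. A corrected sketch (the pool name `ssd-pool` is an assumption):

```shell
# Create a replicated CRUSH rule on root "default" with failure domain
# "host", restricted to OSDs of device class "ssd".
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Point the SSD pool (name assumed) at the new rule.
ceph osd pool set ssd-pool crush_rule replicated-ssd
```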
Yesterday I executed a large delete operation on the CephFS pool (around 2 TB of data).
The operation completed successfully within a few seconds (without any noticeable errors).
Then the following problem occurred:
7 out of 32 OSDs went down and out.
Trying to set them in and...
I'm playing with a small home lab. I would like to use Ceph to ensure that I have replicated VMs for redundancy.
I have 2 servers, each with 1 HDD.
The first server (an HP MicroServer) with 16 GB RAM has been running Proxmox for years, hand-cranked from Debian running LVM on a 500G SATA...
After an upgrade, Proxmox would not start and I had to reinstall it completely.
I made a backup of the config but presumably missed something: ceph-mon keeps crashing and 4 OSDs appear as ghosts (out/down).
Proxmox version: 7.2-3
Ceph version: 15.2.16
Any help appreciated!
So I just joined a startup as their sysadmin; my role is to build the servers from scratch, both hardware and software. My experience is setting up a single-node Proxmox homelab with things like OMV, Emby, an Nginx reverse proxy, Guacamole, ..., so I'm a noob, but I'm learning.
The use-case is...
I'm new to the Proxmox world, and I was wondering what the optimal way would be to build the cluster network for Ceph for my type of setup.
I do have some constraints and considerations, noted below:
4 identical Hosts
128 total cores of AMD EPYC
I have a 3-node cluster running with Ceph. Each server has an internal 1 GbE NIC for management and a 10 GbE NIC for the Ceph/VM network.
It was all set up through the GUI.
For some reason all my Ceph traffic is going through my 1 GbE NICs instead of my 10 GbE NICs, so migrations are...
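For what it's worth, which NICs Ceph uses is governed by `public_network` and `cluster_network` in `/etc/pve/ceph.conf`, not by which interface is "faster". A sketch with assumed subnets (here 10.10.10.0/24 standing in for the 10 GbE network):

```ini
[global]
    # Assumed subnets for illustration: put both Ceph networks
    # on the 10 GbE subnet.
    public_network = 10.10.10.0/24
    cluster_network = 10.10.10.0/24
```

Note that the monitors bind addresses on the public network, so after changing it the existing MONs typically have to be destroyed and recreated one at a time for the change to take effect.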
I have the following problem: an external Ceph datastore (not the Ceph integrated into Proxmox) was added to the PBS.
The PBS was added to the Proxmox cluster, and the first manual backups there were successful, and their speed was also perfectly fine...
I've got a small problem: I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 worked fine, and Ceph was healthy.
But now, when I try to shut down pve004 and set the last NVMe there to out, I get 19 PGs in an inactive state because the new osd.5 on pve005...
We had problems adding disks as new Ceph OSDs:
pveceph createosd /dev/sdX
The error was: command '/sbin/ip address show to '192.168.1.201/24 192.168.1.202/24' up' failed: exit code 1
The workaround was to temporarily deactivate the second cluster IP in /etc/pve/ceph.conf.
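In practice the workaround amounts to leaving only one subnet in the network setting while the OSDs are created. A sketch of what that could look like in `/etc/pve/ceph.conf` (the exact key and subnets here are assumptions based on the error message):

```ini
[global]
    # Original line with two subnets, which broke the
    # '/sbin/ip address show to ...' call during pveceph createosd:
    #public_network = 192.168.1.201/24, 192.168.1.202/24

    # Temporary workaround: a single subnet while creating OSDs.
    public_network = 192.168.1.201/24
```

The second subnet can be re-added once the OSDs exist.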
I want multiple Proxmox clusters to access the same Ceph RBD storage. IMHO this should be possible, but without manually choosing VM IDs in both clusters there would be collisions for disk names. Are there options to avoid this with either defining a custom disk name template (preferred) or...
Updated a Ceph cluster to PVE 7.2 without any issues.
I've just noticed I'm using the wrong network/subnet for the Ceph public, cluster, and Corosync networks.
It seems my searching skills are failing me on how to re-IP the Ceph and Corosync networks.
Any URLs to research this issue?
I have a 3-node cluster (e.g. server1, server2, server3) located at DC1, with Ceph and HA configured.
Now I'm adding 3 more nodes (e.g. server4, server5, server6) at DC2, creating a 6-node cluster.
My question is: how do I set up Ceph so it always keeps a copy on a node at the other DC...
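One common approach is to add a `datacenter` level to the CRUSH hierarchy and use it as the failure domain. A sketch, assuming bucket names `dc1`/`dc2` and the host names from the post:

```shell
# Create datacenter buckets (names dc1/dc2 are assumptions) and
# place them under the default root.
ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket dc2 datacenter
ceph osd crush move dc1 root=default
ceph osd crush move dc2 root=default

# Move each host under its datacenter (repeat for all six hosts).
ceph osd crush move server1 datacenter=dc1
ceph osd crush move server4 datacenter=dc2

# A replicated rule with failure domain "datacenter" places each
# replica in a different DC.
ceph osd crush rule create-replicated replicated-dc default datacenter
ceph osd pool set <pool> crush_rule replicated-dc
```

Caveat: with only two datacenter buckets, this rule can place at most two replicas, so a pool with size 3 would need either size 2 or a custom rule that takes, say, two copies from one DC and one from the other.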
I want to change the Ceph config to use the bridge instead of the VLANs I have created. But when I change the ceph.conf file to the IP network of the bridge, my Ceph doesn't work anymore. Any suggestions on how I can do that?
My main issue is that I am not getting enough read/write on the...
Running a reasonably small Proxmox and Ceph deployment: four servers split across two buildings at our school, plus a fifth witness server (which doesn't host any VMs or storage) in a third location.
To sustain operation in the event of a failure, we run 4/2 copies on Ceph, so any two...
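For reference, the "4/2 copies" above corresponds to the pool's `size` and `min_size` settings (the pool name `vm-pool` is an assumption):

```shell
# Four replicas in total; keep serving I/O as long as two remain.
ceph osd pool set vm-pool size 4
ceph osd pool set vm-pool min_size 2
```

Combined with a CRUSH rule that spreads copies across the two buildings, this lets either building go down while the pool stays writable.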
Can anyone point me in the right direction to fix this?
root@node2:/var/lib/ceph/osd/ceph-1# ceph-osd --check-wants-journal
2022-04-14T20:55:43.210-0500 7f71cd669f00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-admin/keyring: (2) No such file or...
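One detail worth noting: the error shows ceph-osd looking under `ceph-admin` rather than `ceph-1`, which suggests no OSD id was passed, so it fell back to the default name. A sketch of the invocation with an explicit id (assuming this is osd.1, as the working directory implies):

```shell
# Pass the OSD id so ceph-osd resolves
# /var/lib/ceph/osd/ceph-1/keyring instead of ceph-admin.
ceph-osd -i 1 --check-wants-journal
```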