ceph 19.2

  1. L

    Setting up a 4TB Erasure Coded CephFS for Multiple Applications

    Hello Proxmox Community, I'm looking for some guidance on the best way to set up a Ceph File System (CephFS) within my 3-node Proxmox cluster to provide storage for multiple applications: FileCloud, Paperless-ngx, and Jellyfin. FileCloud in particular needs a file system with a mountable path...
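
    A minimal sketch of what an erasure-coded CephFS data pool can look like on a 3-node cluster, using the plain ceph CLI; the profile, pool and filesystem names and the k/m values are illustrative assumptions, not the poster's actual setup:

    ```
    # EC profile that survives one failed host; k=2,m=1 is the smallest layout
    # that fits a 3-node cluster with failure domain "host"
    ceph osd erasure-code-profile set ec-3node k=2 m=1 crush-failure-domain=host

    # Metadata must stay replicated; the EC pool carries the bulk data
    ceph osd pool create cephfs_metadata 32 32
    ceph osd pool create cephfs_data 32 32
    ceph osd pool create cephfs_data_ec 64 64 erasure ec-3node
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true

    # CephFS keeps a replicated default data pool; the EC pool is added on top
    ceph fs new appsfs cephfs_metadata cephfs_data
    ceph fs add_data_pool appsfs cephfs_data_ec

    # Once mounted, point the application directories at the EC pool
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/pve/appsfs/filecloud
    ```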
  2. T

    VMs are migrating out of Node

    I tested failover on a new HA Ceph cluster. All 3 nodes are online, but the VM that migrated from the 1st node to the 2nd node will not migrate back to the first node, and when I do a manual migration it will not stay on the first node. I removed and re-created the HA group and it is still doing...
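
    If the goal is for the VM to return to the first node automatically, here is a hedged sketch of an HA group with node priorities (node names and VMID are placeholders; nofailback is left at its default of 0 so falling back is allowed):

    ```
    # Higher priority wins, so the VM moves back to pve1 once it is online again
    ha-manager groupadd prefer-node1 --nodes "pve1:2,pve2:1,pve3:1" --nofailback 0

    # Attach the VM as an HA resource and check where the manager places it
    ha-manager add vm:100 --group prefer-node1 --state started
    ha-manager status
    ```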
  3. N

    Ceph Jumbo Frames Cluster Crash - very curious

    Hey, here's the scenario: a 3-node cluster with PVE 8.4 (kernel 6.8.12-9-pve) and Ceph 19. Servers from Thomas Krenn and 2x Netgear M4350-24X8F8V switches in a stack. Per node, 4x 10G LACP with hash policy 3+4 for Ceph. The servers have two Broadcom P425G cards, with 2 ports each used for the Ceph LACP...
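
    With jumbo frames across a bond and a switch stack, one quick sanity check is whether a full 9000-byte MTU actually passes end-to-end; addresses and interface names below are placeholders:

    ```
    # 8972 = 9000 minus 20 (IP header) minus 8 (ICMP header); -M do forbids
    # fragmentation, so the ping fails if any hop drops jumbo frames
    ping -M do -s 8972 -c 3 10.10.10.2

    # The bond and every member port must carry MTU 9000 as well
    ip -d link show bond0 | grep mtu
    ```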
  4. D

    Ceph Squid upgrade to 19.2.1

    Based on the 8.4 release notes and https://tracker.ceph.com/issues/70390 there is an issue with OSDs (the issue is currently also affecting older deployments). The advice is to update ASAP, but when should we expect the OSDs to fail? Before the update, or during the update procedure when some nodes are...
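
    The usual rolling pattern for picking up a fixed OSD build on a PVE-managed Ceph cluster, one node at a time (a sketch, not specific advice for this particular bug):

    ```
    # Stop CRUSH from rebalancing while OSDs restart
    ceph osd set noout

    # On each node in turn: pull the 19.2.1 packages and restart its OSDs
    apt update && apt full-upgrade
    systemctl restart ceph-osd.target

    # When all nodes are done, re-enable rebalancing and confirm versions match
    ceph osd unset noout
    ceph versions
    ```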
  5. S

    [SOLVED] Ceph HDD OSDs start at ~10% usage, 10% lower size

    I've noticed that when creating an OSD on an HDD, it starts out showing about 10% usage and stays about 10% higher. We are using an SSD for the DB and therefore the WAL, but it seems like it shouldn't count that...? Also, the size shown in the OSD "details" is lower than on the OSD page in PVE. Screen...
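
    To see where the initial usage comes from, comparing per-OSD raw use with the data actually stored is usually the first step; the OSD ID below is a placeholder:

    ```
    # Raw vs. data vs. omap/meta use per OSD; BlueStore reserves space for its
    # own metadata, which already shows up as "used" on a freshly created OSD
    ceph osd df tree

    # Whether the DB/WAL really live on the separate SSD for a given OSD
    ceph osd metadata 12 | grep -E 'bluefs|devices'
    ```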
  6. D

    No OSDs in CEPH after recreating monitor

    Hello! I'm trying to create a disaster recovery plan for our PVE cluster including CEPH. Our current config involves three monitors on our three servers. We'll be using three monitors and standard pool configuration (3 replicas). I'm trying to set up a manual for deleting monitor configuration...
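
    For the monitor part of such a runbook, the PVE-side commands are typically the following (the node name is a placeholder; this is a sketch, not the full recovery procedure):

    ```
    # Remove the monitor on the affected node; pveceph also cleans up ceph.conf
    pveceph mon destroy pve3

    # Recreate it on the same node, then verify quorum and that OSDs report in
    pveceph mon create
    ceph mon dump
    ceph -s
    ```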
  7. S

    Spinning disks reporting as type "unknown"

    In Proxmox 8.3.5 all the spinning HDDs are showing as type "unknown". SMART info is reporting fine in each case. Can someone help me understand what might cause this? I am concerned because in my setup this PVE is one node in a 3-node Ceph cluster and I am unsure what effect this might...
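
    The disk type shown in the GUI is derived from the kernel's rotational flag, so checking that flag directly helps narrow down whether this is a reporting issue or a controller quirk; the device name is a placeholder:

    ```
    # ROTA=1 means the kernel sees a spinning disk, 0 means it is treated as SSD
    lsblk -d -o NAME,ROTA,TYPE,MODEL

    # The raw flag the tooling reads
    cat /sys/block/sda/queue/rotational
    ```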
  8. S

    [SOLVED] Ceph MON Service Not Starting on pve2 and pve3 in a Virtualized Proxmox Cluster

    PVE version: 8.3.4, Ceph version: 19.2. I have only one physical network interface on my physical Proxmox server, so I don't have the opportunity to configure separate Ceph cluster and Ceph public networks. To work around this, I created virtual networks using Linux bridges on the physical...
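
    When monitors on pve2/pve3 refuse to start in a bridged setup like this, the usual first checks are whether the mon addresses fall inside the configured public network and what the daemon logs say; host names below are placeholders:

    ```
    # Compare the declared networks/mon addresses with what the bridges carry
    grep -E 'public_network|cluster_network|mon_host' /etc/pve/ceph.conf
    ip -br addr show

    # Why the monitor exits
    systemctl status ceph-mon@pve2
    journalctl -u ceph-mon@pve2 -b --no-pager | tail -n 50
    ```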
  9. N

    [ceph and radosgw] Question about different versions and potential trouble between them

    We have been using Ceph on our Proxmox (up to date) with version 19.2.0, as in: sudo ceph --version ceph version 19.2.0 (16063ff2022298c9300e49a547a16ffda59baf13) squid (stable). But our radosgw server was running on Debian, and we only recently discovered that the version used there was not...
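
    A quick way to compare the two sides is to ask the cluster for all daemon versions and to check what the external radosgw host actually has installed:

    ```
    # On the Proxmox cluster: version counts per daemon type (mon/mgr/osd/rgw)
    ceph versions

    # On the Debian radosgw host: locally installed release
    ceph --version
    dpkg -l | grep -i radosgw
    ```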
  10. P

    VMs do not boot from Ceph RBD storage

    Hello everyone, I have a Proxmox cluster (v8.3.3) with three nodes. A Ceph cluster (v19.2.0) is also running on this cluster. Each node has its own SSD, which is added to the cluster as an OSD. From these I created a pool for VMs. On Node1 I created a VM (Ubuntu 24)...
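
    When a VM on RBD refuses to boot, it usually helps to confirm the disk image exists and is reachable from the node before digging into the guest; pool, image and VMID below are placeholders:

    ```
    # Does the image exist, and does the node see it?
    rbd ls -p vm-pool
    rbd info vm-pool/vm-100-disk-0
    rbd status vm-pool/vm-100-disk-0

    # Is the VM actually attached to that disk and booting from it?
    qm config 100 | grep -E 'scsi|virtio|ide|boot'
    ```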
  11. D

    Ceph install failing on 8.3.3

    Good day, I'm seeing a somewhat weird issue specifically with 8.3.3. I am trying to install Ceph to convert a cluster running off of local storage to a hyper-converged cluster running on Ceph. The issue I'm seeing is timeouts when going through the Ceph install wizard. On the first page I'm...
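
    Running the installation from the CLI instead of the wizard tends to make it clearer whether the timeout is a repository or network problem; the no-subscription repository is an assumption here:

    ```
    # Same installation path as the wizard, but with visible apt output
    pveceph install --repository no-subscription

    # If apt stalls, check which Ceph repo is configured and whether it resolves
    cat /etc/apt/sources.list.d/ceph.list
    apt update
    ```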
  12. A

    Ceph PVE dashboard Object store zonegroup error.

    Hi, I have recently been setting up a new cluster and decided to use Proxmox because it looked less daunting than spinning up a bare-metal Talos, rook-ceph Kubernetes and KubeVirt cluster. One key requirement of this setup is that it must support hyperconverged object storage. With MinIO deciding...
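
    The dashboard's object-store pages need an existing realm/zonegroup/zone and working RGW credentials, so listing them is a reasonable first check; a sketch, not a guaranteed fix for this particular error:

    ```
    # Does RGW have multisite objects the dashboard can query?
    radosgw-admin realm list
    radosgw-admin zonegroup list
    radosgw-admin zone list

    # After creating or repairing them, commit the period so RGW picks it up
    radosgw-admin period update --commit
    ```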
  13. H

    [SOLVED] Ceph Object RGW

    Hello, I set up Ceph and added the Ceph dashboard, then I wanted to create an object bucket, but there is an error as shown below. Maybe someone can help me?
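
    Bucket operations from the dashboard need an RGW user it can act as; creating a system user and letting the dashboard pick up credentials is the usual first step (uid and display name below are placeholders):

    ```
    # S3-capable system user for the dashboard to manage buckets with
    radosgw-admin user create --uid=dashboard --display-name="Dashboard" --system

    # Have the mgr dashboard discover RGW credentials automatically
    ceph dashboard set-rgw-credentials
    ```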
  14. S

    Issues creating CEPH EC pools using pveceph command

    I wanted to start a short thread here because I believe I may have found either a bug or a mistake in the Proxmox documentation for the pveceph command, or maybe I'm misunderstanding, and wanted to put it out there. Either way I think it may help others. I was going through the CEPH setup for...
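
    For reference, the pveceph front-end creates the EC profile plus a "<name>-data" and "<name>-metadata" pool in a single call; the option spelling below is a sketch and worth checking against the man page of the installed version:

    ```
    # k/m and failure domain are illustrative values for a 3-node cluster
    pveceph pool create ecpool --erasure-coding k=2,m=1,failure-domain=host
    man pveceph   # see the "pool create" section for the exact property names
    ```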
  15. Q

    [PLEASE HELP] I can't create a Ceph OSD.

    Hi, I am trying to create a Ceph OSD (Squid) on a hard drive from the Proxmox web GUI, but it fails. Please tell me how to fix it! Here's the log: https://pastebin.mozilla.org/5zwd5RVA Thank you in advance!
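
    Without the log it is hard to say, but OSD creation most often fails because the disk still carries old partitions, LVM metadata or filesystem signatures; wiping it and retrying from the CLI shows the full error (the device name is a placeholder, and the wipe is destructive):

    ```
    # Destroy any leftover LVM/partition/filesystem signatures (irreversible!)
    ceph-volume lvm zap /dev/sdX --destroy
    wipefs -a /dev/sdX

    # Retry from the CLI to get the complete error output
    pveceph osd create /dev/sdX
    ```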
  16. F

    Ceph + Cloud-Init Troubleshooting

    I have been using Cloud-Init for the past six months and Ceph for the past three months. I tried to set up Cloud-Init to work with CephFS and RBD, but I am having trouble booting a basic virtual machine. Is there a post or tutorial available for this particular use case? I have searched...
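
    Cloud-init images boot fine from RBD-backed storage; a minimal sketch that imports a cloud image onto an RBD storage called "ceph-vm" (the storage name, VMID and image file are assumptions):

    ```
    # Template VM whose boot disk lives on the RBD storage
    qm create 9000 --name ubuntu-ci --memory 2048 --net0 virtio,bridge=vmbr0
    qm importdisk 9000 noble-server-cloudimg-amd64.img ceph-vm
    qm set 9000 --scsihw virtio-scsi-pci --scsi0 ceph-vm:vm-9000-disk-0
    qm set 9000 --ide2 ceph-vm:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
    ```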
  17. G

    Ceph stretch pools in squid

    Hi, has anyone experimented with the Ceph stretch pools that seem to have appeared in Squid? (Not stretched clusters.) It seems rather new but quite interesting, as it may not require the whole cluster to be set to stretch mode while still providing the guarantee of OSDs and monitors being on...
  18. A

    [SOLVED] Ceph says osds are not reachable

    Hello all, I have a 3-node cluster set up using the guide here: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ Everything was working fine when using Ceph Quincy and Reef. However, after updating to Squid, I now get this error in the health status...
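
    Squid adds a health check that flags OSDs whose addresses are not covered by public_network, which tends to bite routed full-mesh setups; a sketch of how to confirm that is what is happening (the fix is then to widen public_network in /etc/pve/ceph.conf and restart OSDs node by node):

    ```
    # What exactly is unhealthy, and which addresses do the OSDs advertise?
    ceph health detail
    ceph osd dump | grep 'osd\.'

    # Does public_network cover those addresses?
    grep public_network /etc/pve/ceph.conf

    # After adjusting public_network, restart OSDs one node at a time
    systemctl restart ceph-osd.target
    ```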
  19. F

    Ceph Ruined My Christmas

    Merry Christmas, everyone, if that's what you're into. I have been using Ceph for a few months now and it has been a great experience. I have four Dell R740s and one R730 in the cluster, and I plan to add two C240 M4s to deploy a mini-cloud at other locations (but that's a conversation for...
  20. smueller

    [SOLVED] Migrate VM across Clusters with Ceph not possible

    If I try to migrate a VM (online or offline) from one cluster to another, it will not work. Both clusters use Ceph. But if I put the OS disk on local-zfs and then migrate it to the other cluster, it works. Error from Ceph to Ceph: 2024-12-23 12:57:56 remote: started tunnel worker...
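
    Migration between clusters goes through the still-experimental remote-migration tunnel rather than normal storage migration; a hedged sketch with placeholder VMIDs, token and fingerprint (exact syntax is worth checking against the installed qm version):

    ```
    # Run on the source node; 100 is the local VMID, 4100 the VMID to use on
    # the target cluster. Host, API token and fingerprint are placeholders.
    qm remote-migrate 100 4100 \
      'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=xxxxxxxx,fingerprint=AA:BB:...' \
      --target-storage ceph-vm --target-bridge vmbr0 --online
    ```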