1. How to set up storage with SSD and HDD

    Hello all, I installed a Proxmox cluster with 3 Cisco server nodes. Each node has 6 SSDs (2 x 240GB, 4 x 1TB) and 12 HDDs with 2TB each. The first 2 SSDs are used as a ZFS RAID0 for the system itself. Now I have 4 SSDs (4 x 1TB) and 12 HDDs (12 x 2TB) left. My idea is to use 2 of the HDDs in...
2. Ceph Routed Setup (with Fallback) - timeouts

    Hi, I tried to configure a Ceph routed setup with fallback according to this post: Routed Setup (with Fallback). Everything seems to work and the status is OK, but `journalctl -u frr` shows a lot of timeouts: Oct 03 13:52:02 host3 fabricd[2563]: [NT6J7-1RYRF] OpenFabric: Initial...
3. Clean way to disable Ceph debug logs?

    Hi, I read somewhere in the forum that disabling Ceph debug logs could improve my overall IO wait. I've been reading the official Ceph docs, but it's a little unclear whether I need to set the params per OSD (as it seems) or whether I can set them globally, and how. I can't see any of them in the...
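
    These debug levels can be set globally rather than per OSD. A minimal sketch of a `[global]` section in `/etc/ceph/ceph.conf` that silences the most verbose subsystems (the `0/0` values are common forum advice, not official tuning guidance; daemons pick the settings up on restart):

    ```
    [global]
        debug ms = 0/0
        debug osd = 0/0
        debug monc = 0/0
        debug auth = 0/0
        debug filestore = 0/0
        debug journal = 0/0
    ```

    The same settings can also be injected at runtime, e.g. `ceph tell osd.* injectargs '--debug-osd 0/0'`, without restarting the OSDs.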
4. Ceph: actual used space?

    Hi, I'm running a Proxmox 7.2-7 cluster with Ceph 16.2.9 "Pacific". I can't tell the difference between Ceph > Usage and Ceph > Pools > Used (see screenshots). Can someone please explain what the actual used space in my Ceph storage is? Do you think that a 90% used pool is potentially dangerous...
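
    The gap between the two numbers is usually replication: the pool view shows logical data stored by clients, while the cluster-wide usage counts every replica. A small sketch with hypothetical numbers for a size=3 replicated pool:

    ```python
    # Hypothetical figures for a replicated pool with size=3.
    replica_size = 3
    pool_stored_tib = 1.25                         # logical data written by clients
    raw_used_tib = pool_stored_tib * replica_size  # raw capacity consumed cluster-wide

    print(f"{pool_stored_tib} TiB stored -> ~{raw_used_tib} TiB raw used")
    ```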
5. Homelab Real World Ceph Advice For VM/LXC usage

    I have a four-node Proxmox cluster for my homelab, tied together with a 10Gb network! Essentially 2 nodes are compute-focused and 2 are storage-focused. All four nodes have Docker and VMs, but the load is focused on the 2 compute nodes. Using Exos HDDs for storage, SSDs for VMs/LXCs, and separate boot SSDs...
6. Migration from 2-node shared storage to 3-node Ceph

    Hi all, I would like to upgrade my PVE cluster from 2 nodes + qdevice and QNAP shared storage to a 3-node cluster with Ceph - what are the requirements to have 3-4 TB of storage for the VMs? - are 10Gb network cards enough for Ceph traffic? - are there favorite models of SSD for Ceph? -...
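
    As a rough sizing sketch (hypothetical numbers): with Ceph's default 3x replication, usable capacity is about one third of raw, so 3-4 TB of VM storage needs roughly 9-12 TB of raw disk spread across the three nodes:

    ```python
    # Back-of-the-envelope Ceph sizing with 3x replication (hypothetical target).
    replicas = 3
    nodes = 3
    usable_tb_needed = 4                  # target usable space for VMs
    raw_tb_needed = usable_tb_needed * replicas
    per_node_tb = raw_tb_needed / nodes   # one replica per node with the default CRUSH rule

    print(f"{raw_tb_needed} TB raw total, about {per_node_tb:.0f} TB per node")
    ```

    In practice you also want free headroom so the cluster can re-replicate after an OSD or node failure, so provision above this minimum.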
7. Repairing a Ceph scrub error while VMs are running?

    Hello everyone, we built a Proxmox cluster consisting of 3 servers for a customer, and so far everything has been running quite well. We connected the servers peer-to-peer via 10GBit/s network cards, over which a Ceph storage runs. This Ceph storage is now showing...
8. Connecting Ceph Proxmox VE nodes

    Hello everyone, I need some advice on my local Proxmox VE cluster. I am trying to set up a high-availability Proxmox VE cluster with Ceph inside it. After installing Proxmox VE on my physical nodes, I am asking about the best way to link them together. I know that Ceph nodes...
9. PVE Ceph Rules for HDD Pools of Different Sizes

    I apologize, as I am sure this question has probably been answered a thousand times, but I cannot find appropriate documentation and I'm still too new on my Proxmox/Ceph journey to come up with the right Google search terms. I have a 3-node Proxmox cluster with all HDD disks and an already set up...
10. Problem with VM Disks list in Ceph pool

    Hi guys! Today I noticed a problem with displaying the list of disks. I got the error `rbd error: rbd: listing images failed: (2) No such file or directory (500)`. `ceph -s` reports: cluster: id: 77161c77-31b0-4f07-a29d-d65f7bd6e18e, health: HEALTH_OK, services: mon: 3 daemons, quorum...
11. Cluster can't start up after rebooting all nodes

    I have a three-node PVE cluster, and I had never tried shutting down all servers in this cluster. I changed my IDC service provider, so I had to relocate this Friday, and I executed `sudo poweroff` on all three servers within 5 minutes. After the move, I tried booting these three servers and accessing them from the web...
12. Ceph issue

    A few days ago, I had a VM hang on a cluster of 4 servers. 3 servers have 2 SSDs per server for Ceph, 6 SSDs in total. At the time of the problem, version 7.1 was installed, running Ceph 16.2.7. There are 3 Ethernet interfaces on each node: two built into the motherboard (1Gb) and one 10Gb...
13. Extend LVM of Ceph DB/WAL Disk

    I have a 3 node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs and the 4 OSDs share an Intel Enterprise SSD for the Ceph OSD database (DB/WAL) on each node. I am going to be adding a 5th OSD HDD to each node and also add an additional Intel Enterprise SSD on each node for use with...
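
    For what it's worth, after growing the underlying logical volume, BlueStore also has to be told to use the new space. A sketch, assuming OSD 0 and hypothetical VG/LV names (verify the paths against your own setup before running anything):

    ```
    # grow the DB/WAL logical volume (VG/LV names are placeholders)
    lvextend -L +60G /dev/ceph-db-vg/osd-0-db
    # the OSD must be stopped before expanding BlueFS
    systemctl stop ceph-osd@0
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0
    ```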
14. Advice on Ceph for Homelab

    Hey guys, I'm currently running a single 10-year-old 4c/8t ESXi host with local consumer SATA SSDs in RAID 5 in my homelab to host my VMs. My current VMs are: 5x Debian servers (TeamSpeak, Docker host, chat server, home automation, DLNA server) and 3x Windows servers (2x AD DC, print server). I want to...
15. Nested PVE cluster with Ceph?

    I'm creating a virtualised PVE cluster on top of a Proxmox installation configured with Ceph storage. We are testing some automation with Terraform and Ansible. Ideally I would like to configure Ceph in this nested configuration; however, that would be Ceph on top of Ceph. Will that work...
16. Ceph OSD Performance is Slow?

    Hi guys, I'm currently testing Ceph in Proxmox. I've followed the documentation and configured Ceph. I have 3 identical nodes, configured as follows: CPU: 16 x Intel Xeon Bronze @ 1.90GHz (2 sockets), RAM: 32 GB DDR4 2133MHz, Boot/Proxmox disk: Patriot Burst SSD 240GB, Disks: 3x HGST 10TB HDD...
17. CEPH speed drop-off after a couple of GB transferred

    I have a CEPH cluster with three nodes; the nodes are identical and configured as follows: CPU: i5 3470 @ 3.20GHz, RAM: 16GB DDR3, Boot/Proxmox disk: Samsung EVO 250GB SSD, WAL/DB disk: Silicon Power 256GB SSD, OSD: Seagate 2TB HDD, NIC1: 1GbE used for Proxmox management, NIC2: 1GbE assigned to CEPH...
18. Cannot get CEPH install to work.

    I am brand new to Proxmox. I have 3 nodes in a Proxmox cluster, each with 16GB of RAM, an i5 3470, two 1GbE NICs, one 250GB SSD for boot, one 250GB SSD for storage, and one 2TB SSD for storage. The hope was to create a CEPH cluster for storage for the few VMs I will be running. The SSD as the...
19. [SOLVED] Ceph pool size and OSD data distribution

    Note: This is more of an effort to understand how the system works than to get support. I know PVE 5 is not supported anymore... I have a 7-node cluster which is complaining: `root@s1:~# ceph -s` reports: cluster: id: a6092407-216f-41ff-bccb-9bed78587ac3, health: HEALTH_WARN 1...
20. Storage replication in case of PVE failure

    Hello community! I'm a newbie with Proxmox and still learning :D I want to build my lab: I've got a Proxmox cluster (two physical servers). I'd like to create an environment as simple as possible where I will be able to migrate VMs between servers. I was thinking about these solutions - Storage...

