ceph

  1. Best Utilization for network ports

    I am building out a 3-node cluster for home lab use. Each node has (2) 2.5 Gb NICs which currently plug into 1 Gb ports on my switch, (2) SFP+ ports, and (2) 40 Gb ports. I currently have the 40 Gb ports configured in a mesh network. I would like to separate everything out to avoid network...
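
    A common starting point for that separation (the subnets below are illustrative placeholders, not taken from the post) is to pin Ceph's public and cluster traffic to distinct subnets in /etc/pve/ceph.conf, for example keeping client/monitor traffic on the SFP+ links and OSD replication on the 40 Gb mesh:

        [global]
        public_network  = 10.10.10.0/24   # client + monitor traffic (e.g. SFP+)
        cluster_network = 10.10.20.0/24   # OSD replication (e.g. 40 Gb mesh)

    Corosync is usually given yet another, lightly loaded link of its own.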
  2. I get it. I get it. 3 Nodes for Ceph. But what about ZFS shared?

    I have a pretty basic need for a small business I own: basically one VM that I need to run, and in essence I want HA, in the sense that if the node fails, it's taken over by another Proxmox server. I'd prefer it to use shared ZFS storage, and I have this running in a lab right now. I've stayed...
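
    Proxmox has no truly shared ZFS, but local ZFS plus storage replication can approximate the failover described above. A minimal sketch, assuming a VM with ID 100 and a second node named pve2 (both placeholders):

        # replicate VM 100's disks to pve2 every 15 minutes
        pvesr create-local-job 100-0 pve2 --schedule '*/15'
        # let the HA manager restart the VM elsewhere on node failure
        ha-manager add vm:100 --state started

    Note that with replication, a failover can lose up to one replication interval of data.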
  3. Upgrade from version 7.4-3 to 8.2-4 issues.

    Hello everyone. During the migration of one of my Proxmox nodes from version 7.4-3 to version 8.2-4, when booting with the kernel 6.8.12-1 that comes with this latest version, the following error appears: libceph: mon1 (1) 192.168.169.20:6789 socket closed ((con state V1_BANNER)) libceph: mon5...
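
    One hedged first check for V1_BANNER errors (a general diagnostic, not a confirmed fix for this report) is whether the monitors advertise msgr v2 addresses, since newer kernel clients prefer protocol v2:

        ceph mon dump           # healthy mons list v2:<ip>:3300 alongside v1:<ip>:6789
        ceph mon enable-msgr2   # enable v2 if only v1 addresses appear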
  4. VM migration during ceph remapping

    Hello! I wanted to ask a quick question because I'm not totally sure at the moment and don't want to risk any of my VMs. I was forced to replace a node in my cluster urgently, which worked perfectly fine. My Ceph cluster is currently in the state of remapping+backfilling because of the newly added OSDs...
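
    In general, live migration works while Ceph is backfilling. If it helps to quiesce data movement during the migration, rebalancing can be paused with cluster flags (a generic technique, not advice specific to this cluster):

        ceph osd set norebalance    # pause rebalancing
        ceph osd set nobackfill     # pause backfill
        # ...migrate the VM, then clear the flags:
        ceph osd unset nobackfill
        ceph osd unset norebalance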
  5. Does the Dell HBA465i controller work to set up a CEPH OSD

    With a view to buying a Dell server with a BOSS-N1 card in RAID 1 for the system part (for Proxmox; I don't think there is a problem on that side), I would like to know if the HBA465i backplane that Dell offers me in its quote will allow me to see the SSD disks under Proxmox to set up a Ceph OSD.
  6. link: host: 3 link: 0 is down

    I looked at many other similar questions but could not find an exact answer. In my case I have a three-server HA network on 65 subnets, connected to one another over 10 Gb links and ports. One VM is running on each node and is part of a cluster. I tried to tune my pacemaker...
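
    For reference, corosync can be given a second link per node so the cluster tolerates one path failing; a sketch of the node stanzas in /etc/corosync/corosync.conf (names and addresses are placeholders):

        node {
          name: pve1
          nodeid: 1
          ring0_addr: 10.65.0.1   # link 0
          ring1_addr: 10.66.0.1   # link 1, on a separate network
        }

    Remember to bump config_version and copy the file to all nodes when editing.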
  7. [8.2.4] [bug] service ceph-mon is not working properly

    tl;dr: changing %i to the corresponding name makes the mon service work. One of my mons keeps dying and restarting and cannot start again, so I investigated it. It cannot start due to a misconfiguration in the /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service file at the "%i" variable, which points to...
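
    For context on why the name matters: in a systemd template unit, %i expands to whatever follows the "@" in the unit's (or symlink's) name, and the stock ceph-mon template uses it as the mon ID. A generic illustration (the pve2 instance name comes from the post):

        # /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service
        #   -> /lib/systemd/system/ceph-mon@.service, which runs roughly:
        ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
        # %i expands to "pve2" here, so a malformed instance name breaks startup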
  8. Windows Server 2022 poor random read and write performance

    Hello, I am looking for a way to improve the performance of random reads and writes on a virtual machine with Windows Server 2022. VM configuration:
        agent: 1
        boot: order=virtio0;ide2;net0;ide0
        cores: 6
        cpu: qemu64
        machine: pc-i440fx-9.0
        memory: 16384
        meta: creation-qemu=9.0.0,ctime=1724249118...
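
    Two settings in that snippet commonly limit Windows random I/O: cpu: qemu64 hides host CPU features, and the boot disk lacks an I/O thread. A hedged tuning sketch (VMID 100 is a placeholder; test before using in production):

        qm set 100 --cpu host                    # expose host CPU features to the guest
        qm set 100 --scsihw virtio-scsi-single   # one controller per disk, allows iothread
        # then attach the disk as e.g. scsi0: <storage>:vm-100-disk-0,iothread=1
        # (requires the VirtIO SCSI driver inside the Windows guest)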
  9. Limited hardware Proxmox setup. Need advice.

    Hello, I have a handful of bare-metal servers that I would like to migrate to Proxmox VMs. As I move the data off the servers, I can reuse the hardware to add to the Proxmox cluster. Each server is currently configured in a RAID 5 with 20-ish terabytes available. All storage is using HDDs and there...
  10. [Solved] Recovering CEPH and PVE From Wiped Cluster

    The Headline: I have managed to kick all 3 of my nodes from the cluster and wipe all configuration for both PVE and CEPH. This is bad. I have configuration backups, I just don't know how to use them. The longer story: Prior to this mishap, I had Proxmox installed on mirrored ZFS HDDs. I...
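
    For readers in a similar spot: /etc/pve is a view of the pmxcfs database, so one recovery path (a sketch only; the exact steps depend on what the backups contain) is to stop the cluster stack, restore the database file, and bring pmxcfs up in local mode to inspect the result:

        systemctl stop pve-cluster corosync
        # restore the backed-up cluster database (backup path is a placeholder)
        cp /root/backup/config.db /var/lib/pve-cluster/config.db
        pmxcfs -l    # local mode: mounts /etc/pve without quorum for verification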
  11. problems with KINGSTON_SFYRD4000G disks in ceph cluster

    Hello Community. Does anyone have KINGSTON SFYRD 4000G drives in a Ceph cluster? We have built a cluster on them and are seeing very high latency at low load. There are no network or CPU issues. The Ceph version is 17.2.7; the cluster is built on LACP-bonded Intel 25G network cards, Dell R450 servers, 256 GB RAM...
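
    A common way to separate raw disk latency from Ceph overhead (generic diagnostics, not a confirmed root cause here) is to benchmark a single OSD and watch the per-OSD latency counters:

        ceph tell osd.0 bench   # synthetic write benchmark through one OSD
        ceph osd perf           # commit/apply latency per OSD

    Consumer NVMe drives without power-loss protection often show exactly this pattern under Ceph's sync-heavy write path.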
  12. Proxmox Ceph Problem

    Hello. For the past two weeks, I've been encountering an issue where I can no longer clone or move a disk to Ceph storage. Here’s the cloning output: create full clone of drive scsi0 (Ceph-VM-Pool:vm-120-disk-0) transferred 0.0 B of 32.0 GiB (0.00%) qemu-img: Could not open...
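
    Some hedged first checks when qemu-img cannot open an RBD volume (generic steps, using the pool and image names from the output above):

        pvesm status                            # is the Ceph storage online from this node?
        rbd -p Ceph-VM-Pool ls                  # does the source image exist?
        rbd status Ceph-VM-Pool/vm-120-disk-0   # any stale watchers holding the image?
        ceph -s                                 # overall cluster health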
  13. RBD with custom object size

    Hello, I need to use RBDs with a custom object size different from the default (order 22, i.e. 4 MiB objects). While it is possible to create one from the command line: rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K I don't know how to import it to make it available to an LXC container at some mount point.
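
    One hedged approach, assuming poolName is already configured as an RBD storage in Proxmox and the image follows the vm-<vmid>-disk-<n> naming scheme (container ID 297 and the mount path below are illustrative):

        pct rescan    # let Proxmox pick up the unreferenced volume
        pct set 297 -mp0 poolName:vm-297-disk-1,mp=/mnt/data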
  14. Tried to add VLANs, lost entire cluster, how to do it right?

    While putting together a plea for help on getting my cluster back together (with copies of /etc/network/interfaces, /etc/hosts, and /etc/corosync/corosync.conf for each of my 3 nodes), I found the mismatches and remembered to increment the config version by one. Now corosync is back...
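
    For the "how to do it right" part, the usual pattern is a VLAN-aware bridge, changed one node at a time (interface names and addresses below are placeholders) in /etc/network/interfaces:

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

    Applying with ifreload -a (ifupdown2) avoids a reboot and limits the blast radius of a typo.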
  15. CEPH Reweight

    Hello everyone! I have a question regarding Ceph on Proxmox. I have a Ceph cluster in production and would like to rebalance my OSDs, since some of them are reaching 90% usage. My pool was manually set to 512 PGs with the PG Autoscale option OFF, and now I've changed it to PG Autoscale ON. I...
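
    The usual knobs for this situation (generic commands, not tailored to this pool):

        ceph osd pool autoscale-status         # what the autoscaler intends for each pool
        ceph osd df tree                       # per-OSD utilization and reweight values
        ceph osd reweight-by-utilization 110   # gently shift data off the fullest OSDs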
  16. Dell VxRail converted to Proxmox+Ceph?

    Hi, has anyone taken a Dell VxRail (VMware and vSAN), wiped it, and reprovisioned it as Proxmox with Ceph? We have a couple of clusters, with Dell servers connected together via Dell 10Gb/100Gb switches - can these be reused, or are they restricted in their BIOS etc.? Just working out if we...
  17. Ceph Database Deployment Geo-distribution & Latency Question

    I love Proxmox's ability to make it super easy to set up a three-node HA cluster with Ceph. I like to use it for my VMs and SQL databases that require a lot of IOPS. This way, if one of the nodes goes down, my SQL DB VM can be quickly redeployed to another node. That way, if one of the nodes...
  18. Using PBS as 4th Ceph node in 3-node Proxmox cluster

    Good afternoon. In my homelab I want to make a 3-node Proxmox cluster with Ceph. I also want to add a 4th separate host with PBS for backups. Each node in the Proxmox cluster will have an SSD for a 3/2 replicated Ceph pool for VM/CT disks. I also want to add a spinning HDD to each node for...
  19. Need advice for first serious Proxmox Ceph evaluation lab

    Proxmox lab setup - need advice. Some old R730s are hanging around in our office and I would like to build a Proxmox Ceph cluster lab under production conditions. I have the following at my disposal: 4x R730, 8-core Xeon, 96GB DDR4 each, with 8 SFF slots for disks, HBA330 SAS controller, 6x 480GB...
  20. New cluster build, taking suggestions and recommendations

    Building a three (3) node cluster. Each node is:
        Dell R620, dual E5-2690 v2 @ 3.00GHz
        768GB ECC RAM
        2x 10Gb Ethernet, 2x 1Gb Ethernet
        Dell H310 flashed to IT Mode - 8x Crucial MX500 4TB SSD
        3x StarTech Dual M.2 PCIe SSD Adapter Card - 6x Crucial P3 4TB NVMe SSD
        2x SanDisk 128GB USB SSD -...
