ceph

  1. UdoB

    [SOLVED] FYI: do not extend Ceph with OSDs connected via USB

    Just written down for your amusement, on a lazy, dark and rainy Sunday afternoon: Someone (me) might try to extend a Ceph cluster by adding NVMe (or other SSDs) via USB3 adapters. For a small homelab this should be feasible, shouldn't it? My three PVE nodes already run a single Ceph OSD on an...
  2. C

    PVE Networking stuck at 1Gb/s with 10Gb link

    Hi, I have a problem with my Proxmox cluster. I have a bond for Ceph directly connected between the nodes at 10Gb, and a bond for the public network with a 10Gb link too. But when I run iperf on the public network I get my 10Gb/s, but not on my Ceph network. Before this configuration I had...
  3. U

    Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    For my testing/development environment, I am looking to configure a cluster of Proxmox servers using a Ceph storage pool so I can implement HA. My goals are high availability and the ability to bring down a host for maintenance and patching, as well as handling the occasional fault. I...
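A 2-compute-plus-witness layout works because both corosync and the Ceph monitors only need a simple majority to keep quorum. A minimal sketch of that arithmetic (the function names are mine, not Ceph's):

```python
def quorum_needed(monitors: int) -> int:
    """Smallest number of monitors that still forms a majority."""
    return monitors // 2 + 1

def survives(monitors: int, failures: int) -> bool:
    """True if monitor quorum holds after `failures` nodes drop out."""
    return monitors - failures >= quorum_needed(monitors)

# 2 compute nodes + 1 witness = 3 monitors: one failure is tolerable.
print(survives(3, 1))  # True
print(survives(3, 2))  # False: two failures lose quorum
print(survives(2, 1))  # False: a plain 2-node setup cannot lose a node
```

This is why the witness matters even if it holds no OSDs: it turns a 2-node cluster, which cannot survive any node loss, into a 3-vote cluster that survives one.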
  4. N

    Question: InterVM 10Gbe network between VMs? #rookCeph #k3s

    Hey everyone, I am running my kubernetes nodes on my Dell R730 At the moment they are using a bridged Network on a 10Gbe NIC…But I only have a 1Gbe switch.\ That said, the server has a second (unused 10Gbe Nic) eno2 I was wondering if It would be possible to create a “logical” inter-vm network...
  5. V

    [SOLVED] Use a 2*10gbit/s Adapter as a BOND/TEAM for both Network and CEPH

    Is it possible to configure two network interfaces as a bond on the same network? I imagine it's better for performance and HA than using only one interface for each network. Also: can I create disks which are bigger than one physical disk when using Ceph? Like a VM which has a disk of 3TB if I...
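On the second question: RBD images are striped into objects spread across all OSDs, so a virtual disk can indeed be larger than any single physical disk; the practical ceiling is the pool's usable capacity, roughly raw capacity divided by the replica count. A rough back-of-the-envelope sketch (the fill ratio and function name are my assumptions, not Ceph output):

```python
def usable_capacity_tb(osd_sizes_tb, replicas=3, fill_ratio=0.85):
    """Rough usable capacity of a replicated Ceph pool in TB.

    Raw capacity divided by replica count, derated by a fill ratio
    because Ceph warns as OSDs approach full (nearfull defaults to 85%).
    """
    raw = sum(osd_sizes_tb)
    return raw * fill_ratio / replicas

# Three nodes with one 2 TB OSD each, 3-way replication:
print(usable_capacity_tb([2, 2, 2]))  # 1.7
```

So a 3 TB VM disk on this pool would not fit, even though 6 TB of raw capacity exists; thin provisioning lets you define it, but writes would stall once the pool fills.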
  6. R

    Cluster Networking Configuration - Subnets

    I am putting together a 3x node cluster for my own education (before doing this for real at work!) and I have the following plan for subnets for the cluster. This setup is based on all the tutorials I've been able to find online, but I wanted to make sure I'm doing this right. I have enough...
  7. M

    [SOLVED] Proxmox local-lvm stops when 2 servers in a cluster of 3 dies?

    I have 3 servers in the cluster. Each server has: 1. two HDDs in hardware RAID 1 for local and OS storage, and 2. one SSD for Ceph, used for HA. I have 2 VMs in local-lvm (ext4) and 2 in Ceph storage. My Ceph HA is working fine; it only fails when 2 out of 3 servers die. However I...
  8. G

    [SOLVED] Ceph tmp folders filling up /tmp on local storage

    Hi community, I noticed that over the last few months, Ceph has been filling the /tmp partition on storage "local" more and more. Currently, there is more than 1.1 TB of data in there, all in ceph.XXXXXX folders containing cephfs_data subfolders. This hasn't been so "bad" before. Is that a problem with...
  9. M

    [SOLVED] What happens when one of the raided disks dies in a 3-server HA system?

    Indeed, Proxmox will not work when 2 out of 3 disks die in shared storage. My question is what happens when each server's 2 disks are in RAID 1 (shared storage) and one disk dies in two of the servers? Will the Proxmox server keep using the RAID arrays, or will it die?
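The distinction here is that RAID 1 failures are invisible to Ceph: as long as each node's array still presents a working device, the OSD count is unchanged. Ceph only cares about how many replicas of each placement group survive relative to the pool's min_size. A minimal sketch of that rule (function name is mine; size=3/min_size=2 are the common defaults):

```python
def pool_writable(replicas: int, failed: int, min_size: int = 2) -> bool:
    """A replicated PG keeps accepting I/O while surviving copies >= min_size."""
    return replicas - failed >= min_size

# size=3 pool: one lost OSD is fine, two stall I/O until recovery.
print(pool_writable(3, 1))  # True
print(pool_writable(3, 2))  # False
```

So one disk dying inside each of two RAID 1 mirrors costs Ceph nothing; only when an entire array (and hence its OSD) goes away does the failure count above start ticking.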
  10. M

    [SOLVED] Ceph degraded and did not add back automatically.

    We had an issue and removed one out of the 3 disks used for HA. The system is working fine, but when we tried to add the disk back, Ceph showed the drive as 'in' without adding it back to the pool automatically for read/write. I pressed 'start' and checked the log, which shows '33.333%' degraded. Used CLI...
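The '33.333%' figure is not arbitrary: Ceph reports degraded object copies over total expected copies, and with one of three replicas gone that ratio is exactly one third. A quick sketch of the arithmetic (function name is my own):

```python
def degraded_pct(missing_copies: int, objects: int, replicas: int = 3) -> float:
    """Percentage Ceph reports: missing copies / total expected copies."""
    return 100 * missing_copies / (objects * replicas)

# Every object missing 1 of its 3 copies after one of three OSDs drops:
objects = 1000
print(round(degraded_pct(objects, objects), 3))  # 33.333
```

A stable 33.333% therefore means the returned OSD is not taking data at all, rather than recovery being merely slow.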
  11. K

    Question about poor Ceph performance

    Good afternoon. I am experiencing problems with poor Ceph performance: dd shows good results, but a VM with a disk on Ceph raises doubts and problems. For example, the time echo command can take anywhere from 00.000 ms to 02.000 ms and varies in this interval, that is, it...
  12. aPollO

    [Ceph PG Autoscaler] Pool has 128 placement groups, should have 128

    Hi, I'm on PVE 7.4-17 with Ceph Pacific (16.2.14). After the upgrade from 16.2.13 this warning appears. Yes, I know Pacific is EOL; the upgrade is already scheduled. "Pool pool1 has 128 placement groups, should have 128". The autoscaler option for the pool is set to "warning". I have another pool...
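For context on where such targets come from: the classic PG-count rule of thumb is (OSDs x ~100) / replica count, rounded to a power of two; the autoscaler's real heuristic is more involved (it only acts when the count is off by a large factor), so treat this as a sketch, not the autoscaler's actual code:

```python
import math

def target_pgs(osds: int, replicas: int = 3, pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG target, rounded to the nearest power of two."""
    raw = osds * pgs_per_osd / replicas
    if raw < 1:
        return 1
    lo = 2 ** math.floor(math.log2(raw))  # power of two below raw
    hi = lo * 2                            # power of two above raw
    return lo if raw - lo < hi - raw else hi

# A small 3-OSD, 3-replica pool lands on 128:
print(target_pgs(3))  # 128
```

When current and target agree, as in "has 128, should have 128", a warning should not fire at all, which points at a reporting quirk in that point release rather than a real mismatch.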
  13. tuxis

    Ceph mon not joining after redeployment

    Hi, Yesterday we started an upgrade of a cluster that was installed in 2017 and has been upgraded since. We were going to upgrade Ceph from Pacific to Quincy, and commenced by redeploying the mons, because the mons were still using leveldb, which is no longer supported in Quincy...
  14. M

    How to live migrate VMs?

    I am already using HA, Ceph, a 10G network, and shared storage, yet I noticed my VMs are restarting.
  15. H

    Reshard my ceph object

    Hello everyone, I'm resharding my bucket. When I add number_shard 16 to the queue, the sharding process succeeds. I check with the command: radosgw-admin reshard status --bucket <bucket_id> [ { "reshard_status": "not-resharding", "new_bucket_instance_id": "", "num_shards": -1 } ] but when I check...
  16. A

    [SOLVED] Ceph OSD latency on one node

    Hi, we are using a three-node hyperconverged cluster and are experiencing latency on only one of our nodes. On two nodes, latency varies between 1 and 5; on the last node it can reach 300. PG autoscale is on and the CRUSH rule is replicated_rule. The nodes are strictly identical: Dell...
  17. E

    Proxmox VE CEPH configuration help needed!

    I will preface this by saying I am a total Proxmox/Ceph noob... I am good with Linux and I have been a VMware admin/engineer for the better part of 2 decades... I have looked through dozens of walkthroughs and videos but I am still struggling with Ceph and I need some help. Goal: HA...
  18. T

    Proxmox and Ceph with NFS share for Docker swarm

    Hello all, I need to create a platform for some Docker containers with critical data and services. The platform I have is 3 servers, each with 6 x 2 TB SSDs and a 4 x 10 Gbit connection. My thought was to create a hyperconverged Proxmox setup with Ceph, create a small VM, and mount the Docker data on...
  19. M

    Recommends for storage type to Synology NAS

    We have PVE running, with VMs and Containers running over NFS share to our Synology. We mostly leveraged VMs, so at the time I hadn't realized that Snapshots on NFS for containers would be an issue. Now I am starting to use LXC more often and would like to be able to have snapshots. The options...
  20. S

    Cluster

    So I have 3 servers that are all slightly different hardware, if that matters. I installed them all with the same installation ISO, ran updates on each server, and joined all 3 servers to the cluster. However, server02 is facing some kind of issue. It shows as being part of the cluster (local or...
