ceph

  1. Shared ceph storage on a 5 node setup

    Hi everyone, currently we are using SolusVM with 5 nodes, each with: * 64GB of RAM * 4x 1TB SSD in RAID-10 * 2x CPUs. We're evaluating a move to Proxmox with Ceph, since in the long run it is more future-proof, more scalable, and easier to maintain than SolusVM. We're also having problems with...
  2. cluster network and storage network

    Hi everyone! We are migrating our servers to a different cloud provider. While reading the documentation, I came across this: "Storage communication should never be on the same network as corosync!" Our setup must have HA and data redundancy/high availability (using Ceph). The problem is, our...
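
    For reference, a minimal sketch of the separation the documentation asks for, assuming three NICs and purely illustrative subnets 10.0.0.0/24 (corosync), 10.0.1.0/24 (Ceph public) and 10.0.2.0/24 (Ceph cluster):

        # /etc/pve/ceph.conf -- keep Ceph traffic off the corosync subnet
        [global]
            public_network  = 10.0.1.0/24   # monitor/client traffic
            cluster_network = 10.0.2.0/24   # OSD replication traffic

        # corosync then binds to its own subnet via each node's ring0_addr
        # in /etc/pve/corosync.conf, e.g.  ring0_addr: 10.0.0.11
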
  3. [SOLVED] probably - ceph performance graph weirdness

    Have any other Ceph users noticed weirdness with the performance graph, where the read or write values do not seem to reflect the real situation? Mine currently shows this and I think it's a bit off... Specifically looking at reads... for ~50 VMs this is weird. One thing to note: it was after...
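
    When the graphs look implausible, it can help to cross-check them against Ceph's own counters; a quick sketch (the pool name is hypothetical):

        ceph -s                   # the "client:" line shows current read/write throughput
        ceph osd pool stats       # per-pool client I/O rates
        rados bench -p testpool 10 write --no-cleanup   # generate a known load to compare
        rados -p testpool cleanup
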
  4. extending proxmox cluster and moving to ceph

    We have a small cluster of 10 servers and 2 storage boxes. We are planning to add an 8-node Supermicro (https://www.supermicro.com/en/products/system/4U/F618/SYS-F618R2-RTN_.cfm); it will act as a Ceph server with 2 major pools: a fast pool based on PCIe NVMe for VMs and SQL DBs (based on 4TB 2.5" SSDs)...
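
    One hedged sketch of how such a fast/slow split is often done since Luminous, using CRUSH device classes (pool names, PG counts and the nvme/hdd class names are assumptions -- check ceph osd crush class ls first):

        ceph osd crush rule create-replicated fast-rule default host nvme
        ceph osd crush rule create-replicated slow-rule default host hdd
        ceph osd pool create fast 128 128 replicated fast-rule
        ceph osd pool create slow 256 256 replicated slow-rule
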
  5. Ceph works slowly

    Hello. The disks in our Proxmox cluster are slow. Virtual machines not placed on the SSD pool work slowly with the disk and spend a lot of time on disk I/O; virtual machines placed on the HDD pool work very slowly. Please help us fix this. I built Ceph with 3 nodes. We use SSD...
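
    Before tuning, it usually pays to measure where the latency comes from; a minimal sketch (the pool name is an example):

        ceph osd perf       # per-OSD commit/apply latency; one slow disk can drag a pool
        ceph osd df tree    # utilisation and placement per OSD
        rados bench -p ssdpool 10 write   # raw pool throughput, bypassing the VM layer
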
  6. PVE nested virtualization

    Hi all. I'm very new to Proxmox, but I need to simulate a cluster on my physical [Proxmox] server. Reasons for doing this simulation/test are: testing HA, live migration, Ceph storage, and 4-node HA, among others. Here are my objectives (feel free to tell me anything I missed): a...
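
    A minimal sketch of enabling nested virtualization for such a lab, assuming an Intel host (AMD uses the kvm-amd module instead; the VMID 100 is just an example):

        echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
        modprobe -r kvm-intel && modprobe kvm-intel   # with all VMs stopped
        cat /sys/module/kvm_intel/parameters/nested   # should print Y

        # each virtual PVE node needs the host CPU type so KVM works inside it
        qm set 100 --cpu host
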
  7. Network File Sharing; permissions/UID/GID

    Hi everyone, I've run into a particular issue. I have a ClearOS VM in Proxmox acting as a domain controller with roaming profiles for some Windows PCs. I have a 3TB disk in the Proxmox machine that I'd like to share with the ClearOS VM and other VMs in the future. At the moment I'm exporting the...
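
    For the UID/GID part, one common hedged approach is to squash all NFS clients to a single owner; the path, subnet and IDs below are examples:

        # /etc/exports on the Proxmox host
        /mnt/data 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)

        exportfs -ra   # reload the export table
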
  8. Scaling beyond single server. Suggestion wanted.

    Hello, I have been running a 1U server with a Xeon E5-2620 v4 CPU and 4 SATA SSDs (ADATA 1TB, very slow for ZFS :( ) configured as a ZFS mirrored stripe. It ran well for me, but it doesn't have enough I/O for my VM needs. So recently we purchased an AMD EPYC 7351P with 8 NVMe SSDs (Intel P4510 1TB) to solve...
  9. Proxmox with Ceph - Replication ?

    So, has anyone used Ceph replication along with Proxmox? Or has anyone been able to build a setup with Proxmox and Ceph and have it replicate to some other cluster?
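
    Cross-cluster replication of Ceph-backed disks is usually handled by RBD mirroring rather than by Proxmox itself; a hedged sketch (pool, image and peer names are assumptions):

        # on both clusters: enable pool-level mirroring
        rbd mirror pool enable rbdpool pool
        # journal-based mirroring needs the journaling feature per image
        rbd feature enable rbdpool/vm-100-disk-0 journaling
        # on the backup cluster: run the mirror daemon and register the peer
        apt install rbd-mirror
        rbd mirror pool peer add rbdpool client.admin@production
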
  10. Ceph jewel to luminous upgrade

    We are planning to upgrade the Ceph cluster from Jewel to Luminous using this guide: https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous Is there any possible rollback plan for this upgrade?
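
    As far as I know, Ceph does not support downgrading a cluster, so there is no real rollback once Luminous daemons have rewritten their on-disk state; the practical safety net is backups plus upgrading one node at a time. A rough sketch of the safety steps around the guide:

        ceph osd set noout     # avoid rebalancing while daemons restart
        # ... upgrade and restart mons first, then OSDs, node by node ...
        ceph versions          # once mons are Luminous: confirm every daemon's version
        ceph osd unset noout
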
  11. upgrading from 4.4-24 with ceph to 5.xx

    Hi, I am aware of this: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 - We have three (3) identical nodes: 256GB of RAM, 4TB of HDD, ... the same on each node - Each node is running Proxmox 4.4-24 with Ceph enabled - We do not have any shared storage; all VMs are on the nodes' hard drives. Could...
  12. No OSDs shown in GUI

    Since PVE 6 no OSDs are shown in the GUI when we have racks configured in Ceph. Is this expected behavior? When I remove the rack, I see the OSDs in the GUI again.
  13. dd into a Ceph container?

    Hi, I'm currently trying to convert a VMware appliance to a Proxmox system. The guide at https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Prepare_the_disk_file doesn't say how to do this with a Ceph cluster, though. Unfortunately, the raw file is too big to still...
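
    One way around the intermediate raw file is to let Proxmox convert the image on the fly, straight into the Ceph storage; a hedged sketch (the VMID, file name and storage ID are examples):

        qm importdisk 100 appliance-disk1.vmdk ceph-pool
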
  14. [SOLVED] MDS fails to start: unable to find a keyring on /var/lib/ceph/mds/ceph-admin/keyring

    Hi, I cannot start the MDS service on the active/standby node:

        root@ld3955:/var/log# systemctl status ceph-mds@ld3955
        ● ceph-mds@ld3955.service - Ceph metadata server daemon
           Loaded: loaded (/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: enabled)
           Drop-In...
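
    Note the unit is ceph-mds@ld3955 while the error path points at ceph-admin, which suggests a name mismatch. If the keyring really is missing, it can be recreated for the MDS name the unit uses, following the manual MDS deployment steps (a hedged sketch; ld3955 is taken from the post):

        mkdir -p /var/lib/ceph/mds/ceph-ld3955
        ceph auth get-or-create mds.ld3955 mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mds/ceph-ld3955/keyring
        chown -R ceph:ceph /var/lib/ceph/mds/ceph-ld3955
        systemctl restart ceph-mds@ld3955
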
  15. [SOLVED] Kubernetes - Ceph storage not mounting

    Hello guys, I am trying to use a persistent volume claim dynamically after defining a storage class that uses Ceph storage on a Proxmox VE 6.0-4 one-node cluster. The persistent volume gets created successfully on Ceph storage, but pods are unable to mount it; they throw the error below. I am not sure...
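
    A common cause is the Kubernetes worker nodes lacking the RBD client rather than anything on the Ceph side; a hedged first check on every worker (Debian/Ubuntu package names):

        apt install ceph-common   # provides the rbd userspace tools
        modprobe rbd              # kernel client used to map the volume
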
  16. [SOLVED] How to define OSD weight in the CRUSH map

    Hi, after adding an OSD to Ceph it is advisable to create a matching entry in the CRUSH map with a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: how is the weight defined depending on disk size? Which algorithm can be...
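
    By convention the CRUSH weight is simply the disk capacity expressed in TiB; a worked example (the OSD id and host name are hypothetical):

        # 4 TB disk: 4 * 10^12 bytes / 2^40 bytes per TiB ≈ 3.64 TiB
        ceph osd crush set osd.12 3.64 root=default host=node1
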
  17. Looking for support for our CEPH cluster

    Hello everyone, we are currently looking for someone who can take over the management and monitoring of our CEPH cluster. It currently comprises five nodes with dual E5 CPUs and between 256 and 512 GB of RAM per node. All nodes are internally connected with 2x 10 Gbit to different...
  18. How to add a DB device to BlueFS

    Hi, I have created OSDs on HDDs without putting the DB on a faster drive. To improve performance I now have a single 3.8TB SSD. Questions: How can I add a DB device on this new SSD for every single OSD? Which parameter in ceph.conf defines the size of the DB? Can you confirm that...
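
    Since Nautilus an existing OSD can be given a separate DB device with ceph-bluestore-tool; a hedged sketch for one OSD (the VG/LV names, size and OSD id are examples, and bluestore_block_db_size in ceph.conf only affects newly created OSDs):

        lvcreate -L 120G -n db-70 ceph-db-ssd      # one LV per OSD on the new SSD
        systemctl stop ceph-osd@70
        ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-70 --dev-target /dev/ceph-db-ssd/db-70
        systemctl start ceph-osd@70
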
  19. [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Hi, I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster. On 2 nodes I have identified that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting. Typically the content of this directory looks like this:

        root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/
        insgesamt 60...
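
    For ceph-volume OSDs those directories are tmpfs mounts that get repopulated at activation, so empty-after-reboot usually means activation did not run; a minimal sketch:

        ceph-volume lvm list            # OSDs ceph-volume knows about on this node
        ceph-volume lvm activate --all  # re-mounts the tmpfs dirs and starts the OSDs
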
  20. Anyone needing help on a hyperconvergence (Ceph) project?

    Hello! It's great to have a complete, open-source, cost-free hyperconvergence alternative, especially when you look at the prices of VMware (vSAN) and Nutanix. I have already set up an experimental deployment with Proxmox and Ceph in a test lab, and even posted a video about my experience...
