Search results

  1. gurubert

    Fiberchannel Shared Storage Support

    Thanks for the mention. We see OCFS2 only as a transition technology for cases where the hardware is already in place. OCFS2 has its issues. For new infrastructure we would always recommend a supported storage setup.
  2. gurubert

    16 PCI Device limitation

    Why don't you run TrueNAS directly on the hardware? What is the benefit of virtualization when you need to pass 21 drives through to the VM?
  3. gurubert

    Classify Cluster Node Roles?

    You could assign the storage for the VMs only to the nodes where they should run. No storage, no images, no VMs.
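
    For example, a storage can be limited to specific nodes with pvesm (the storage ID and node names below are placeholders):

        # restrict the storage "vm-pool" to the two nodes that should run the VMs
        pvesm set vm-pool --nodes node1,node2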
  4. gurubert

    ceph after upgrade to 18.2.6 - observed slow operation indications in BlueStore

    A hotfix release 18.2.7 is currently being prepared by the Ceph project.
  5. gurubert

    [SOLVED] Proxmox on SATA & Ceph on NVMe - does it make sense?

    Do not run Ceph on 1G network equipment. You will be disappointed.
  6. gurubert

    Proxmox 9 Kernel and Ubuntu 25.04?

    proxmox-kernel-6.14 is already available for Proxmox 8.4.
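
    On 8.4 the opt-in kernel can be installed with apt (assuming the standard Proxmox repositories are configured):

        apt update
        apt install proxmox-kernel-6.14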
  7. gurubert

    Ceph Storage (18.2.4) at 80%

    That looks well balanced. One more recommendation: double the number of PGs for the pool IKT-Labor. At the moment there are fewer than 70 PGs per OSD. SSDs can easily hold 200 - 400 PGs per OSD. The algorithm can then distribute the data even better, because the individual PGs are not...
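
    A minimal sketch of such a PG increase (the pool name is taken from the thread; the target of 512 is only an illustration and should be a power of two sized for the OSD count):

        # check the current PG count of the pool
        ceph osd pool get IKT-Labor pg_num
        # double it
        ceph osd pool set IKT-Labor pg_num 512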
  8. gurubert

    Ceph Storage (18.2.4) at 80%

    The output of "ceph osd df tree" and "ceph df" would be interesting.
  9. gurubert

    [SOLVED] Add WAL/DB to CEPH OSD after installation.

    This cannot be avoided. If Ceph needs more space for the RocksDB, it takes it from the block device.
  10. gurubert

    [SOLVED] Migrating Proxmox HA-Cluster with Ceph to new IP Subnet

    You cannot just change the IPs in the ceph.conf file. You need to deploy new MONs first that listen on the new IP addresses. These addresses are recorded in the mon map, as you have already seen. The IPs in ceph.conf only tell all the other processes how to reach the Ceph cluster.
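
    A rough sketch of the idea on a Proxmox node (the address is a placeholder; the full subnet migration procedure is more involved):

        # the mon map records the current MON addresses
        ceph mon dump
        # add a MON listening on the new subnet, then retire one of the old MONs
        pveceph mon create --mon-address 10.0.20.11
        pveceph mon destroy <old-mon-id>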
  11. gurubert

    Assistance with proxmox Ceph-reef or quincy install.

    For 4. you need OCFS2. It can be used by Proxmox as shared directory storage like NFS. With qcow2 for the VM images you get thin provisioning and snapshots. But this setup is not officially supported by the company that makes Proxmox. You have to configure OCFS2 yourself.
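
    The resulting entry in /etc/pve/storage.cfg would look roughly like this (storage ID and path are examples):

        dir: ocfs2-shared
                path /mnt/ocfs2
                content images
                shared 1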
  12. gurubert

    Assistance with proxmox Ceph-reef or quincy install.

    With a SAN and LUNs presented to each Proxmox host you do not use Ceph, as the SAN already has its own internal replication (RAID). You can use LVM on top of the LUN as described in the Proxmox documentation. Or, if you want something similar to VMFS, you can use a cluster filesystem like OCFS2...
  13. gurubert

    Assistance with proxmox Ceph-reef or quincy install.

    You have a LUN from a SAN presented to all Proxmox nodes via FC or iSCSI? This is something that cannot be used with Ceph. Ceph uses local disks and replicates over the network. With a LUN you can use LVM as storage for the VMs.
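
    A minimal sketch, assuming the LUN appears as a multipath device (device, VG and storage names are placeholders):

        # create an LVM volume group on the shared LUN (run once, on one node)
        pvcreate /dev/mapper/mpatha
        vgcreate vg_san /dev/mapper/mpatha
        # register it as a shared LVM storage in Proxmox
        pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images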
  14. gurubert

    [SOLVED] Ceph HDD OSDs start at ~10% usage, 10% lower size

    This is normal. Ceph adds the size of the RocksDB device to the total size of the OSD. But since the RocksDB device cannot store any data it is computed as completely used. This is why you see a 10% usage on your fresh OSD. BTW: 80GB RocksDB device seems a little bit large for a 750GB HDD. And...
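
    Taking the sizes in this thread at face value, the arithmetic works out: the OSD is reported as 750 GB + 80 GB = 830 GB in total, the 80 GB RocksDB device counts as fully used, and 80 / 830 is roughly 9.6%, i.e. the ~10% seen on a fresh OSD.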
  15. gurubert

    Proxmox CephFS Permission Denied

    To create client keys for mounting CephFS use the command "ceph fs authorize". It will create the necessary capabilities for the key.
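
    For example (filesystem name, client ID and path are placeholders):

        # create a key with read/write access to the whole filesystem
        ceph fs authorize cephfs client.backup / rw
        # inspect the resulting key and its capabilities
        ceph auth get client.backup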
  16. gurubert

    Adding a Second Public Network to Proxmox VE with Ceph Cluster

    This will not work. The public network is called public for a reason: each client that wants to access the Ceph storage cluster needs to be able to talk to every node in each public network.
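
    For reference, the client-facing subnet is defined by public_network in ceph.conf, and every client must be able to reach the MON and OSD addresses inside it (the subnet below is an example):

        [global]
                public_network = 192.168.10.0/24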
  17. gurubert

    Ceph - feasible for Clustered MSSQL?

    If you want to run MSSQL in a clustered setup, use local storage for its nodes. There is no need to replicate at the storage level if the DB replicates its data at the application level.
  18. gurubert

    Does Proxmox plan to support SCSI over FC?

    You need to create OCFS2 on the shared LUN including an OCFS2 cluster on all nodes. Read the OCFS2 documentation for details. The OCFS2 filesystem is then mounted at the same mountpoint on all Proxmox nodes. In Proxmox you can then create a new storage of type directory with the shared flag set...
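
    A rough sketch of the steps (device, label and mountpoint are placeholders; the o2cb cluster configuration itself is covered in the OCFS2 documentation):

        # create the filesystem once, on one node
        mkfs.ocfs2 -L pve-shared /dev/mapper/mpatha
        # mount it at the same path on every node (e.g. via /etc/fstab)
        mount -t ocfs2 /dev/mapper/mpatha /mnt/ocfs2
        # register the shared directory storage in Proxmox
        pvesm add dir ocfs2 --path /mnt/ocfs2 --shared 1 --content images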
  19. gurubert

    Does Proxmox plan to support SCSI over FC?

    We have been running such a setup with OCFS2 on the shared LUNs and qcow2 images in the filesystem for several years now. It works, but it is unsupported by Proxmox.