storage capacity

  1. Ceph storage usage is wrong

    Hello everyone, I have a 3-node cluster with 3 OSDs in each of these nodes. My Ceph version is 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable). The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup that sits on an NFS share, the...
  2. [SOLVED] Ceph health warning: backfillfull

    Hello, in my cluster consisting of 4 OSD nodes there is an HDD failure, which currently affects 31 disks. Each node has 48 HDDs of 2 TB each connected. This results in this crushmap: root hdd_strgbox { id -17 # do not change unnecessarily id -19 class hdd # do not change...
  3. Trouble with zfs not showing all available space in proxmox

    I just recently upgraded the drives in my zfs pool, and while in the shell zpool shows the correct amount of available space (about 11 TB; 4x4 TB in raidz2), Proxmox is only showing just under 7 TB of available space in the pool. Is there a way to refresh this or reset it? Because now my virtual...
  4. [SOLVED] Ceph missing more than 50% of storage capacity

    I have 3 nodes with 2 x 1 TB HDDs and 2 x 256 GB SSDs each. I have the following configuration: 1 SSD is used as the system drive (LVM partitioned, so about a third is used for the system partition and the rest is used in 2 partitions as WALs for the 2 x HDDs). The 2 x HDDs are in a pool (the default...
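
A quick way to sanity-check the numbers behind threads 1, 2 and 4: with a size 3 / min_size 2 replicated pool, usable capacity is roughly raw capacity divided by 3, and a single OSD above the backfillfull ratio is enough to raise the warning. A minimal check sketch (the pool name rbd is only a placeholder):

    # raw vs. per-pool capacity; MAX AVAIL already accounts for the replica size
    ceph df
    # per-OSD fill level; one OSD over the backfillfull ratio triggers the warning
    ceph osd df tree
    # confirm the replication factor of the pool
    ceph osd pool get rbd size
    # the ratios behind the nearfull/backfillfull/full warnings
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'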
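
For the ZFS thread (item 3), the usual explanation is that zpool list reports raw pool size including RAIDZ parity, while zfs list and Proxmox report the space left after parity, so the two numbers are expected to differ rather than needing a refresh. A small sketch to compare the views (the pool name tank is only a placeholder):

    # raw size including parity, broken down per vdev
    zpool list -v tank
    # usable space after parity - this is the view Proxmox works from
    zfs list -o name,used,avail tank
    # how Proxmox itself reports each configured storage
    pvesm status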