Search results

  1. bluestore_default_buffered_write = true

    Has anyone ever tried using this feature? I've added it to the [global] section of the ceph.conf on my POC cluster but I'm not sure how to tell if it's actually working. TIA
  2. Latest Ceph release 14.2.6

    I had heard there were issues with 14.2.5, so I've been waiting for a new release. I see Nautilus 14.2.6 has been released. Will there be a corresponding release from Proxmox? TIA
  3. After updating nodes - Use of uninitialized value $val in pattern match

    FYI - Last night I applied the latest updates from the no-cost repository. Now I get the following when I bulk migrate VMs from node to node. Check VM 309: precondition check passed Migrating VM 309 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm...
  4. OSD keeps going down and out

    I have an OSD which keeps toggling to down and out. Here's what I'm seeing in the syslog. Any clue here why this would be happening? Sep 23 02:22:26 SeaC01N02 kernel: [533115.376053] sd 0:0:16:0: [sdo] tag#262 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Sep 23 02:22:26 SeaC01N02...
  5. CEPH disk usage

    I've installed Proxmox/CEPH 6.0 on 5 Dell R740XD servers. Each has (16) 8TB spinners and 1 Supermicro NVMe card with a 4TB Samsung NVMe drive. During the OSD creation I allocated 1/16 of the NVMe to the DB of each 8TB OSD. After creating the 80 OSDs I see a CEPH usage of 3%, 17.5TB of 599TB...
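For the question in thread 1 (how to tell whether `bluestore_default_buffered_write` is actually in effect), a minimal sketch; `osd.0` is a placeholder for any OSD on the node:

```ini
# ceph.conf - an option in [global] applies to all daemons that read this file
[global]
bluestore_default_buffered_write = true

# To verify the running value, ask a daemon directly via its admin socket:
#   ceph daemon osd.0 config get bluestore_default_buffered_write
# or, on Nautilus and later, via the monitor:
#   ceph config show osd.0 bluestore_default_buffered_write
# OSDs pick up ceph.conf changes at startup, so restart them after editing.
```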
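As a hedged sanity check on the numbers in thread 5, the raw capacity and per-OSD DB allocation work out roughly as follows (the figure Ceph actually reports will differ somewhat due to TB-vs-TiB rounding and BlueStore overhead):

```python
# Capacity arithmetic for the cluster described in thread 5 ("CEPH disk usage").
servers = 5
spinners_per_server = 16
spinner_tb = 8            # vendor (decimal) terabytes per spinner
nvme_tb = 4               # NVMe card, split 1/16 per OSD for DB

osd_count = servers * spinners_per_server            # 80 OSDs, as reported
raw_tb = osd_count * spinner_tb                      # 640 decimal TB raw
raw_tib = raw_tb * 1000**4 / 1024**4                 # ~582 TiB (binary units)
db_per_osd_gb = nvme_tb * 1000 / spinners_per_server # 250 GB DB per OSD

print(osd_count, raw_tb, round(raw_tib), db_per_osd_gb)
```

The ~582 TiB figure is in the neighborhood of the 599TB the dashboard shows; the exact reported number depends on how Ceph rounds and what per-OSD overhead it counts.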