Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

We also did 15 upgrades from Ceph 19 to 20 in nested PVE environments (all clean installs from training) and all went fine without issues. The only thing that was not 100% clear was how to check which MDS services are standby via the CLI:
Take all standby MDS daemons offline on the appropriate hosts with:

Bash:
systemctl stop ceph-mds.target
We looked at the web UI to find out. Adding a CLI command might also be useful, as people are already on the CLI at that point. The rest went smoothly; my error from above did not happen in any of those clean installs.
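
For reference, the standby MDS daemons can also be checked from the CLI with a couple of standard Ceph commands (a minimal sketch; the exact output format may differ between releases):

Bash:
# per-filesystem overview, lists active ranks and the standby daemons
ceph fs status
# one-line summary including the number of up:standby daemons
ceph mds stat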
 
Just upgraded myself. Went just fine, no issues. 3 OSDs. I got some interesting data after upgrading:

Ceph Squid → Tentacle Upgrade Benchmark Summary

Cluster:
3-node Proxmox (Intel NUC14, NVMe)
Pool: ceph-vms (replicated, size 2 / min_size 2)
Upgrade Date: 2026-01-16

RBD Bench (from Proxmox host)

Test              Pre-Upgrade (Squid 19.2.3)   Post-Upgrade (Tentacle 20.2)   Change
4K Random IOPS    60,207                       68,516                         +13.8%
4K Random BW      235 MB/s                     268 MB/s                       +14.0%
64K Seq IOPS      7,488                        10,120                         +35.1%
64K Seq BW        468 MB/s                     632 MB/s                       +35.0%


Test script:

Bash:
#!/bin/bash
# Save as: ceph-bench.sh

POOL="ceph-vms"
IMAGE="bench-test-$(date +%s)"
SIZE="10G"

echo "=== Ceph RBD Benchmark ==="
echo "Date: $(date)"
echo "Pool: $POOL"
ceph -s | grep -E "health|mon:|osd:"
echo ""

# Create test image
rbd create $IMAGE --size $SIZE --pool $POOL

echo "--- 4K Random Read/Write (VM typical workload) ---"
rbd bench --io-type rw --io-size 4K --io-threads 16 \
    --io-total 1G --io-pattern rand $POOL/$IMAGE

echo ""
echo "--- 64K Sequential Read/Write (file transfers) ---"
rbd bench --io-type rw --io-size 64K --io-threads 8 \
    --io-total 2G --io-pattern seq $POOL/$IMAGE

# Cleanup
rbd rm $POOL/$IMAGE

echo ""
echo "=== Benchmark Complete ==="
Is this with Crimson? If not, could we get those numbers? And also an EC test?
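
For the EC part, a rough sketch of how such a run could look: RBD on an erasure-coded pool needs a replicated pool for the image metadata plus --data-pool pointing at the EC pool, with allow_ec_overwrites enabled on it. The pool names and PG counts below are just assumptions, and the default EC profile may need adjusting for a 3-node cluster:

Bash:
# assumption: "ceph-vms" stays as the replicated metadata pool,
# "ceph-vms-ec" is a new erasure-coded data pool
ceph osd pool create ceph-vms-ec 32 32 erasure
ceph osd pool set ceph-vms-ec allow_ec_overwrites true
ceph osd pool application enable ceph-vms-ec rbd

# image whose data lands on the EC pool, then the same rbd bench as above
rbd create bench-ec --size 10G --pool ceph-vms --data-pool ceph-vms-ec
rbd bench --io-type rw --io-size 4K --io-threads 16 \
    --io-total 1G --io-pattern rand ceph-vms/bench-ec
rbd rm ceph-vms/bench-ec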