Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

We also did 15 upgrades from Ceph 19 to 20 in nested PVE environments (all clean installs from training); all went fine without issues. The only thing that was not 100% clear was how to check via the CLI which MDS services are standby:
Take all standby MDS daemons offline on the appropriate hosts with:

systemctl stop ceph-mds.target
We looked at the web UI to find out. Adding a CLI command to the docs might also be useful, as people are already on the CLI at that point. The rest went smoothly; my error from above did not happen in any of those clean installs.
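For anyone else looking: the standby daemons are visible from the CLI as well. Any of these stock Ceph commands should show them (no extra flags assumed; all require a running cluster):

```shell
# Compact MDS map summary; the trailing "up:standby" count is the standbys
ceph mds stat

# Per-filesystem table with an explicit "STANDBY MDS" section
ceph fs status

# Grep the full FS map if you need the standby daemons' names/hosts
ceph fs dump | grep -i standby
```

From there you can match the daemon names against the hosts before stopping ceph-mds.target on them.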
 
Just upgraded myself. Went just fine, no issues. 3 OSDs. I got some interesting data after upgrading:

Ceph Squid → Tentacle Upgrade Benchmark Summary

Cluster:
3-node Proxmox (Intel NUC14, NVMe)
Pool: ceph-vms (replicated, size 2 / min_size 2)
Upgrade Date: 2026-01-16

RBD Bench (from Proxmox host)

Test              Pre-Upgrade (Squid 19.2.3)   Post-Upgrade (Tentacle 20.2)   Change
4K Random IOPS    60,207                       68,516                         +13.8%
4K Random BW      235 MB/s                     268 MB/s                       +14.0%
64K Seq IOPS      7,488                        10,120                         +35.1%
64K Seq BW        468 MB/s                     632 MB/s                       +35.0%


Test script:

Bash:
#!/bin/bash
# Save as: ceph-bench.sh
set -euo pipefail

POOL="ceph-vms"
IMAGE="bench-test-$(date +%s)"
SIZE="10G"

echo "=== Ceph RBD Benchmark ==="
echo "Date: $(date)"
echo "Pool: $POOL"
ceph -s | grep -E "health|mon:|osd:"
echo ""

# Create test image
rbd create "$IMAGE" --size "$SIZE" --pool "$POOL"

# Remove the test image even if a bench run fails
trap 'rbd rm "$POOL/$IMAGE"' EXIT

echo "--- 4K Random Read/Write (VM typical workload) ---"
rbd bench --io-type rw --io-size 4K --io-threads 16 \
    --io-total 1G --io-pattern rand "$POOL/$IMAGE"

echo ""
echo "--- 64K Sequential Read/Write (file transfers) ---"
rbd bench --io-type rw --io-size 64K --io-threads 8 \
    --io-total 2G --io-pattern seq "$POOL/$IMAGE"

echo ""
echo "=== Benchmark Complete ==="
Is this with Crimson? If not, could we get those numbers too? And an EC test as well?
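For reference, an EC variant of the bench above could be sketched like this; the pool names are placeholders and the default EC profile is assumed, and note that RBD on an EC pool needs overwrites enabled plus a replicated pool for metadata:

```shell
# Hypothetical pool names; adjust the EC profile (k/m) to your cluster
ceph osd pool create ec-bench erasure
ceph osd pool set ec-bench allow_ec_overwrites true

# Replicated metadata pool; image data goes to the EC pool via --data-pool
ceph osd pool create ec-meta
rbd pool init ec-meta
rbd create bench-ec --size 10G --pool ec-meta --data-pool ec-bench

rbd bench --io-type rw --io-size 4K --io-threads 16 \
    --io-total 1G --io-pattern rand ec-meta/bench-ec

# Cleanup
rbd rm ec-meta/bench-ec
```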
 
Seems like Tentacle has landed in no-subscription now (noticed it today) :-)
 
It's visible when you install a new Proxmox node with Ceph; you can already select it there.
 
It's visible when you install a new Proxmox node with Ceph; you can already select it there.
Okay, understood. In my case, since it's an “old” installation, I have to manually modify the repositories in /etc/apt.
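For what it's worth, the change should amount to pointing the Ceph repo at the tentacle suite. The file name, classic one-line format, and the trixie suite below are assumptions for a stock PVE 9 install, so double-check against how your node is set up (newer installs may use deb822 `.sources` files instead):

```
# /etc/apt/sources.list.d/ceph.list
# replace the previous ceph-squid line with:
deb http://download.proxmox.com/debian/ceph-tentacle trixie no-subscription
```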
For those who have already upgraded, have you experienced any particular problems? Is it fairly reliable?
Thanks
 
You can't yet; the SMB module isn't packaged with Proxmox's Ceph Tentacle, but the Ceph Dashboard package expects it.
Is the SMB mgr module not planned to be shipped in the Proxmox Ceph package, or is it planned? I’m spinning up a Proxmox host soon, and I’d like to test Tentacle, but the combined lack of the dashboard (due to the dependency issue) and the lack of the SMB mgr module (which is important to my macOS clients) make it a tough decision.
 
You don't actually need the module: all it does is integrate what you could always do with smbd. It's a nice-to-have, not a showstopper.
It is a showstopper for the dashboard, which has it as a hard requirement. (Perhaps that’s a bug on Ceph’s part, but right now that’s what we have to deal with downstream.) Also, yes, I could spin up a separate smbd, but it would be nice to have the option of using a headlining feature of Tentacle.
 
It is a showstopper for the dashboard, which has it as a hard requirement.
I didn't know that, but it kinda begs the question: what does the dashboard offer you beyond what PVE presents? If it's really necessary, I'd probably just set up Ceph with cephadm separately from PVE. PVE doesn't consider the entirety of the Ceph stack necessary for full function, since it doesn't need all of it.
 
If it's really necessary, I'd probably just set up Ceph with cephadm separately from PVE. PVE doesn't consider the entirety of the Ceph stack necessary for full function, since it doesn't need all of it.
I didn’t think that opting for the hyper-converged implementation of Ceph in PVE would require giving up basic functionality of Ceph (the dashboard and the SMB mgr module).

And it’s not clear if this is intentional or not, which is why I asked if it was. @t.lamprecht can you comment (as the staff member who announced this release)?
 