Joined just to add, for those wondering why this didn't auto-clear: an automatic clear on something that says "something crashed, multiple times" in a storage context is a really bad idea. Corruption may have occurred, a broken network path might have caused the crash, quorum might have been lost, and many of these things you would only discover when a component actually fails. The system is basically saying "Hey, something happened and I can't tell whether it was something you did or something I did. Can you check for me, and if it's all good and you know why, archive these?"
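That check-then-archive workflow is only a few commands. A sketch below, assuming you're on a node with an admin keyring; `<crash-id>` is a placeholder for whatever ID `ceph crash ls` shows you:

```shell
# List the crash reports Ceph has collected
ceph crash ls

# Inspect one report: which daemon crashed, when, and the backtrace
ceph crash info <crash-id>

# Once you've confirmed it was transient (e.g. a mon dying during a
# planned reboot), archive it so the health warning clears
ceph crash archive <crash-id>

# Or, after reviewing every report, archive the lot in one go
ceph crash archive-all
```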
The mon daemon in particular seems to crash a bit during simultaneous reboots of multiple nodes when bonded ethernet and spanning tree are involved. There's nothing inherently wrong with that; it's transient and it recovers, but Ceph doesn't know that.
Remember, Ceph is built for very large storage deployments where random crashes can be a telltale sign of broader issues that don't show up immediately. If you absolutely don't care about the integrity of your storage you could always set up a nightly cron of ceph crash archive-all, but don't come crying when your data goes MIA because you had multiple failures you never saw and then your cluster went splat.
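For completeness, that nightly cron would be a one-liner, something like this /etc/cron.d fragment (path and schedule are illustrative, and again: this throws away the only early-warning signal you get):

```shell
# /etc/cron.d/ceph-archive-crashes
# Blindly archives every crash report at 03:00 daily, good and bad alike.
# NOT recommended on any cluster whose data you care about.
0 3 * * * root /usr/bin/ceph crash archive-all
```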