Problem with wrongly created Ceph mon

Hello telvenes! You can press the Destroy button shown in the image you attached above. Alternatively, you can do the same via the CLI using pveceph mon destroy.
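For example, assuming the broken monitor's ID is mon.pve02 (adjust this to whatever ID the GUI shows for it), the CLI call would look something like this:

Code:
  pveceph mon destroy mon.pve02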
 
It seems that the monitor is in a broken state and needs some manual cleanup.

You already have 3 monitors, so you won't have any issues with quorum. But for other people reading this, I still want to mention:
At least three monitors are recommended, since quorum requires a majority of them and three let the cluster tolerate the loss of one monitor.

Could you please try the following:
  1. Execute systemctl stop ceph-mon@mon.pve02.service
  2. Execute ceph mon remove mon.pve02
  3. Remove the monitor section (in your case [mon.mon.pve02]) from /etc/pve/ceph.conf - an example of what such a section typically looks like is shown right after this list
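Just to illustrate step 3: a monitor section in /etc/pve/ceph.conf usually looks roughly like the following. The address here is only a placeholder, so check your own file for the exact contents:

Code:
  [mon.mon.pve02]
          public_addr = <IP of the broken monitor>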
For the sake of completeness (this should not be necessary in your case): if you are also trying to completely remove a monitor on an existing IP - for example, because the monitor is broken and you want to add a new one on the same address - you will also need to remove that IP from the mon_host list in the [global] section, as sketched below.
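A minimal sketch of that change, with placeholder addresses rather than your real ones:

Code:
  [global]
          # before: mon_host = 10.0.0.1 10.0.0.2 10.0.0.3
          # after dropping the broken monitor on 10.0.0.2:
          mon_host = 10.0.0.1 10.0.0.3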

Depending on how broken the state is, there are other things you can do - see the Ceph documentation on removing monitors from an unhealthy cluster; a rough sketch of that procedure follows below. But I think the steps above should be enough in your case.
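This is only a last-resort outline of what that documentation describes, assuming a surviving monitor named pve01 and a broken one named mon.pve02 (both names are placeholders here), and it requires the monitor daemons to be stopped while the map is edited:

Code:
  # stop the surviving monitor before touching its map
  systemctl stop ceph-mon@pve01.service
  # extract the current monmap from the surviving monitor
  ceph-mon -i pve01 --extract-monmap /tmp/monmap
  # remove the broken monitor from the extracted map
  monmaptool /tmp/monmap --rm mon.pve02
  # inject the edited map back and start the monitor again
  ceph-mon -i pve01 --inject-monmap /tmp/monmap
  systemctl start ceph-mon@pve01.service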
 
It is strange, there are no references to it in /etc/ceph/ceph.conf or /etc/pve/ceph.conf

Code:
  cluster:
    id:     id
    health: HEALTH_WARN
            1 pgs not deep-scrubbed in time

  services:
    mon: 3 daemons, quorum pve01,pve03,pve02 (age 22h)
    mgr: pve03(active, since 3w), standbys: pve01, pve02
    mds: 1/1 daemons up, 2 standby
    osd: 23 osds: 23 up (since 2d), 23 in (since 4d); 7 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 913 pgs
    objects: 6.48M objects, 24 TiB
    usage:   72 TiB used, 45 TiB / 117 TiB avail
    pgs:     155533/19441815 objects misplaced (0.800%)
             886 active+clean
             12  active+clean+scrubbing+deep
             8   active+clean+scrubbing
             6   active+remapped+backfill_wait
             1   active+remapped+backfilling

  io:
    client:   8.0 MiB/s rd, 10 MiB/s wr, 197 op/s rd, 249 op/s wr
    recovery: 14 MiB/s, 3 objects/s

  progress:
    Global Recovery Event (4d)
      [===========================.] (remaining: 50m)


And:

Code:
root@pve01:~# ceph mon remove mon.pve02
mon.mon.pve02 does not exist or has already been removed

And:

Code:
root@pve01:~# ceph mon dump
epoch 10
fsid id
last_changed 2025-03-07T11:23:42.918876+0100
created 2024-07-29T21:27:40.937217+0200
min_mon_release 19 (squid)
election_strategy: 1
0: [v2:10.15.15.21:3300/0,v1:10.15.15.21:6789/0] mon.pve01
1: [v2:10.15.15.23:3300/0,v1:10.15.15.23:6789/0] mon.pve03
2: [v2:10.15.15.22:3300/0,v1:10.15.15.22:6789/0] mon.pve02
dumped monmap epoch 10