As you already mentioned, adding OSDs with a different (larger) capacity means proportionally more data will be stored on them, since CRUSH weights OSDs by their size. They will therefore be hit more often by IO and can become a performance bottleneck.
The other thing you should consider, especially in a 3-node cluster setup, is the following.
If you add the new disks to the current cluster without creating a new pool on them, you end up with 2 OSDs per node, AFAIU.
What happens if not a full node, but only one OSD in one node fails?
Since you still have all 3 nodes in your cluster, Ceph will try to get back to 3 replicas (assuming size/min_size of 3/2). Because only one replica may be placed per node, the remaining OSD in that node has to absorb all the data of the failed OSD, so it will most likely get quite full, possibly too full, unless your cluster is very empty. Running out of disk space is one of the few things you want to avoid at all costs with Ceph.
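To get a feel for how much headroom you would have in that situation, you can check the current per-OSD utilization and the configured full ratios. These are standard Ceph CLI commands and can be run on any node:

ceph osd df tree
ceph osd dump | grep ratio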
This is less of a problem with larger clusters, as Ceph can distribute the lost data among more nodes and still adhere to the "one replica per node" rule, or if you have more OSDs per node in a 3-node cluster.
For this and the performance reasons, I suggest you create a new device class when you add those new OSDs. You can just enter a new name in the device class field in the GUI. Then create 2 rules, one matching each of the 2 device classes, and assign them to your pools.
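If you prefer the CLI, or if an OSD was already created and automatically got one of the default classes (hdd/ssd/nvme) assigned, you can change its device class afterwards. <my dev class> and <id> are placeholders for your class name and the OSD ID:

ceph osd crush rm-device-class osd.<id>
ceph osd crush set-device-class <my dev class> osd.<id>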
The Ceph docs have a section on how to create such a rule:
https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
For example:
ceph osd crush rule create-replicated replicated_<my dev class> default host <my dev class>
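Once both rules exist, assign them to the respective pools. The pool name is a placeholder here; use the actual pools in your cluster:

ceph osd pool set <my pool> crush_rule replicated_<my dev class>

Keep in mind that switching a pool to a class-restricted rule will remap PGs and trigger some rebalancing traffic until the data has moved to the matching OSDs.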