The PVE-Ceph versus regular Ceph distinction does not matter for this discussion.
One of the most important best practices to consider: separating your "storage" network from your production VM network is not sufficient. Your storage network must (or at the very least should seriously be considered to) be divided further into two networks: a "public" network for mon, mgr, and RBD clients (the disks mounted in your VMs), and a "cluster" network dedicated to OSD replication, recovery, and balancing. Ideally each of these networks gets its own dedicated pair of physical links, so you would be running 4x 25 GbE in this example.
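In ceph.conf terms the split is expressed with the public_network and cluster_network options. A minimal sketch, with both subnets being placeholder examples rather than anything from your environment:

[global]
    # mon/mgr and RBD client traffic
    public_network = 10.10.10.0/24
    # OSD replication, recovery, and backfill traffic
    cluster_network = 10.10.20.0/24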
While we are in the neighborhood of best practices, many people will also create yet another dedicated link for an additional corosync ring.
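As a rough illustration (node name and addresses are made up), an extra ring just means an additional ringX_addr per node in corosync.conf:

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # existing cluster link
    ring0_addr: 10.10.0.1
    # additional dedicated corosync link
    ring1_addr: 10.10.1.1
  }
  # ...and the same for the other nodes
}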
You may not always see high utilization on your "storage network", but the moment the cluster is in a degraded or misplaced state your RBD performance will be reduced while the system works to satisfy your CRUSH rules. Whatever controversy there may have been earlier about the particulars, all of that recovery and backfill I/O comes at some cost.
In the Ceph world, "hard disk", "SSD", and "SAS SSD" are not necessarily device classes. As far as I know, the PVE GUI limits you to three choices (literally hdd, ssd, and nvme). While I believe you can create custom device classes, the discussion so far leads me to doubt that has been done in your setup.
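For completeness, custom classes are assigned from the CLI; a sketch with a made-up OSD id and class name:

# clear the auto-detected class, then assign a custom one
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class sas-ssd osd.12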
So I'm not convinced you are getting what we are asking about with respect to the pool-rule-class relationship. Each pool is assigned a CRUSH rule, and each CRUSH rule will OPTIONALLY restrict which device class it may use.
What we want to know is whether your CRUSH rules carry device class restrictions, because that determines how data will be distributed onto the new devices as the cluster expands, or whether it will land on them at all.
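To make that relationship concrete, here is a hedged sketch (the rule and pool names are placeholders, not taken from your cluster): a replicated rule restricted to one device class, then assigned to a pool:

# replicated rule limited to the nvme class, failure domain = host
ceph osd crush rule create-replicated replicated-nvme default host nvme
# point an existing pool at that rule
ceph osd pool set vm-pool crush_rule replicated-nvme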
Just run these commands and post the results.
cat /etc/pve/ceph.conf
ceph osd df tree
ceph osd crush rule dump