You don't want to add RBD storage on the cephfs_data pool. Ignore that suggestion. Your VMs should live in a separate pool for manageability, and it's easy to add one under the GUI if you don't already have a pool named vm. The reason to keep it separate is that later on, if you want different CRUSH rules for your VMs versus your CephFS storage, you can change them independently. For example, due to the layout of my cluster I use size 3, min_size 3 for my VMs, with the primary OSD and at least one other OSD on SSD storage, but for my CephFS data, which is just my backups, I use size 2, min_size 2 and weight that storage towards my HDDs instead of my SSDs. There's a reason Ceph lets you have multiple pools; don't throw everything into one just because it's two seconds quicker.
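To give a rough idea of what that separation looks like from the command line, here's a minimal sketch using the Luminous-era Ceph CLI. The pool name vm, the rule names, and the PG count are placeholders, and these are plain per-device-class rules rather than the mixed SSD/HDD layout I described for my own cluster; you can also create the pool itself from the Proxmox GUI and just adjust the rules afterwards.

```
# Dedicated pool for VM disks (128 PGs is only an example, see below on sizing)
ceph osd pool create vm 128 128 replicated
ceph osd pool application enable vm rbd

# CRUSH rules per device class, host failure domain
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd crush rule create-replicated hdd-only default host hdd

# VM pool: SSD rule, its own size/min_size (values mirror the example above;
# pick whatever actually suits your cluster)
ceph osd pool set vm crush_rule ssd-only
ceph osd pool set vm size 3
ceph osd pool set vm min_size 3

# CephFS data pool: HDD rule with different replication, changed without
# touching the VM pool at all
ceph osd pool set cephfs_data crush_rule hdd-only
ceph osd pool set cephfs_data size 2
ceph osd pool set cephfs_data min_size 2
```

That independence is the whole point: each pool carries its own CRUSH rule, size and min_size, so you can tune VM storage and backup storage separately later without a migration.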
As for sizing your PG count, Google it; there are plenty of easy-to-use calculators out there for picking a good number. A few things to watch out for: if the PG count is too small, the cluster ends up a bit unbalanced and some OSDs fill up faster than others; if it's too large, the OSDs carry more load and you use more resources for less throughput. More importantly, if it's too small it's easy to increase the number of PGs later, but if it's too large you can't merge them back down without exporting all of your data out of the pool, deleting the pool, recreating it with the new PG count, and importing everything again. It's a massive pain. Ceph Nautilus fixes this (it can reduce pg_num), but Ceph Mimic and later don't support Debian (and by extension Proxmox) until Debian Buster is released. Once Buster is out, the Proxmox team will work on rebasing Proxmox VE on it, and eventually we'll be able to upgrade Ceph. The point is that even with Buster around the corner it will still be a while before we can upgrade, so don't over-provision your PGs; it's much better to have too few than too many.
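If you just want a ballpark before plugging numbers into a calculator, the rule of thumb behind most of them looks roughly like the sketch below. The pool name and OSD counts are made up, and dividing evenly by the number of pools is a simplification; the calculators actually weight each pool by how much data it's expected to hold.

```
# Rough rule of thumb:
#   PGs per pool ~= (number of OSDs * 100) / replica size / number of pools
# then round to a power of two -- and given the merge problem described above,
# round DOWN rather than up if you land in between.
#
# Example with made-up numbers: 12 OSDs, size 3, 2 pools
#   (12 * 100) / 3 / 2 = 200  -> use 128, not 256

# Check what a pool currently has and how balanced the OSDs are
ceph osd pool get vm pg_num
ceph osd df                         # the PGS and %USE columns show imbalance

# Growing later is easy; shrinking (pre-Nautilus) is not
ceph osd pool set vm pg_num 256
ceph osd pool set vm pgp_num 256    # bump pgp_num too or the data won't actually rebalance
```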