You've given no details about the workloads that'll be running on the VMs, the IOPS requirements, CPU loading, or availability expectations, so anything suggested here is a pure guess. But here's something to start with.
You'll need 3x your storage requirement plus ~20% headroom as a bare minimum (so 180TB raw). Using something like 4TB Intel P4510 NVMe drives you'd need 45 drive bays for the OSDs. 16-bay 2.5" NVMe hot-swap chassis are very common, so 5 chassis with 48 drives spread over them would work and leave some headroom for growth. Hook them up with 100GbE or 40GbE for the Ceph networks, and 10GbE or 1GbE for the VM data and cluster networks (it all depends on the unknown workload you need to support) and it should be OK. If you're expecting decent IOPS for that much storage then 100GbE would be smart; if it's low IOPS then 40GbE should work.
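If you want to sanity-check that yourself, here's a rough back-of-the-envelope sketch. The ~50TB usable figure is just what the 180TB raw number implies; swap in your actual requirement, drive size and chassis count.

```python
# Rough Ceph capacity sizing sketch -- all inputs are assumptions, adjust to suit.

usable_tb = 50             # assumed usable capacity requirement (implied by 180TB raw)
replicas = 3               # Ceph 3x replication
headroom = 1.20            # ~20% free space so OSDs never run near full

raw_tb = usable_tb * replicas * headroom        # 180 TB raw
drive_tb = 4                                    # e.g. 4TB NVMe OSD drives
osds_needed = -(-raw_tb // drive_tb)            # ceiling division -> 45 drives

nodes = 5                                       # spread OSDs across failure domains
bays_per_chassis = 16                           # common 2.5" NVMe hot-swap chassis
osds_per_node = -(-osds_needed // nodes)        # ~9 OSDs per node, bays left for growth

print(f"raw: {raw_tb:.0f} TB, OSDs: {osds_needed:.0f}, per node: {osds_per_node:.0f}")
```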
For N+1 you'd need 256GB of RAM per node to give some headroom. CPU is unknown as you haven't given any idea of loading. You might get away with 2 x 16 cores per node, but once you factor in N+1 you'd be running at roughly a 4:1 vCPU-to-core contention ratio. That may be fine or it may suck depending on what the VMs are doing. If it sucks, you'd need more sockets, so either 4-socket boxes or more boxes. How long is a piece of string...
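Same idea for the CPU side; here's a rough sketch of the contention maths. The vCPU total below is purely a placeholder since you haven't said what you're running.

```python
# Quick N+1 contention sketch -- provisioned_vcpus is a made-up placeholder,
# plug in whatever your VMs actually add up to.

nodes = 5
cores_per_node = 2 * 16            # dual 16-core sockets
provisioned_vcpus = 512            # assumed total vCPUs across all VMs

surviving_cores = (nodes - 1) * cores_per_node     # one node down (N+1)
contention = provisioned_vcpus / surviving_cores   # vCPU : physical core ratio

print(f"contention with one node down: {contention:.1f}:1")   # 4.0:1 with these numbers
```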
Or you may want to run more, smaller nodes to spread the storage over more chassis. It all depends on your perception of risk and what sort of availability you're expecting to deliver. Again, how long is that piece of string?