The problem is write amplification. For example, I've got 3 VMs running that do mainly sync writes. Combined they only write about 1MB/s inside the VMs to their virtual ext4 filesystems, but because I use ZFS on the host, which does copy-on-write, journaling, logging sync writes to disk, parity and so on, around 10MB/s is written to the SSDs to store that 1MB/s. And if the SSD receives 10MB/s of data, it doesn't write just 10MB/s to the flash, it writes much more, because there is write amplification inside the SSD again. My enterprise SSDs write 1.8GB to NAND for every 1GB I send to them. So the 1MB/s from the VMs ends up as 18MB/s written to the NAND flash of the SSDs. 18MB/s is around 568TB per year (while basically idling), which would kill a consumer SSD like the Samsung 970 Evo M.2 1TB in around 1 year. And with a consumer SSD the write amplification would be much higher, because my enterprise SSDs are built for low write amplification, so it might only last some months or weeks.
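To put numbers on that, here is a rough Python sketch of the calculation. The 10x host-side factor, the 1.8x SSD-internal factor and the 600 TBW rating of the 970 Evo 1TB are the values from my setup and the spec sheet, so plug in your own:

```python
# Rough write amplification math - adjust the factors for your own setup.
guest_writes_mb_s = 1.0    # what the VMs actually write (MB/s)
host_amplification = 10    # ZFS on the host: CoW, journaling, metadata, parity
ssd_amplification = 1.8    # internal NAND amplification of my enterprise SSDs

nand_writes_mb_s = guest_writes_mb_s * host_amplification * ssd_amplification
nand_writes_tb_year = nand_writes_mb_s * 60 * 60 * 24 * 365 / 1_000_000
print(f"NAND writes: {nand_writes_mb_s:.1f} MB/s = {nand_writes_tb_year:.0f} TB/year")

# How long a 600 TBW consumer drive (Samsung 970 Evo 1TB) would survive that:
tbw_rating_tb = 600
print(f"TBW reached after ~{tbw_rating_tb / nand_writes_tb_year:.1f} years")
```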
I switched to used Intel S3710 enterprise SSDs. You can get a used 200GB version (with 99% life left) for around 30€, and 5 of them can handle 18 petabytes of writes before failing, not just the 0.6 petabytes of the Samsung 970 Evo M.2 1TB.
Consumer SSDs are really bad if they need to write small files or many small sync writes (like a database does). It's not unusual for them to write 20MB to store a 1KB file change if they can't cache it because a sync write is required. If that happens once a second (like with a database), it will kill the SSD really fast. Good enterprise SSDs use far more durable SLC/MLC flash instead of the TLC/QLC flash used in consumer SSDs. And the Intel S3710 200GB actually contains 360.8 GB of MLC flash, so there is 80% more flash you can't see, which is there to increase the lifetime of the SSD. Consumer SSDs may only have 10-20% of spare flash. Enterprise SSDs also have capacitors for power-loss protection, so they can still flush the data in the RAM cache to flash if the power supply fails; that means they can cache even sync writes, which reduces write amplification. Consumer SSDs can't cache sync writes because all data in RAM would be lost if the power supply fails.
You can try your SSDs; maybe you don't have many sync writes and they will work fine. But you really should run smartctl once a week, write down how much data the SSDs have actually written, compare that to the week before, and make a prediction of when the TBW rating of your SSDs will be exceeded.
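As a rough sketch of what I mean, something like this (run as root, e.g. weekly from cron) can do the logging and the prediction. The device path, log file, TBW value, SMART attribute name and unit are all assumptions you have to adjust: my Intel SATA drives report "Host_Writes_32MiB", while an NVMe drive like the 970 Evo reports "Data Units Written" in a different format, so check the smartctl -A output of your model first:

```python
#!/usr/bin/env python3
# Rough sketch: log how much the SSD has written so far and project when the
# TBW rating will be reached. Assumptions for my Intel S3710 (SATA): the raw
# value of "Host_Writes_32MiB" is in units of 32 MiB. NVMe drives like the
# 970 Evo report "Data Units Written" in another format, so adjust the parsing.
import subprocess, time
from pathlib import Path

DEVICE = "/dev/sda"                  # the SSD to check
ATTRIBUTE = "Host_Writes_32MiB"      # SMART attribute holding total host writes
BYTES_PER_UNIT = 32 * 1024 * 1024    # how many bytes one raw unit means
TBW_RATING_TB = 3600                 # endurance rating of the drive in TB
LOGFILE = Path("/var/log/ssd_writes.csv")

def written_tb():
    """Read the raw SMART value and convert it to TB written so far."""
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if ATTRIBUTE in line:
            return int(line.split()[-1]) * BYTES_PER_UNIT / 1e12
    raise RuntimeError(f"{ATTRIBUTE} not found in smartctl output for {DEVICE}")

now, tb = time.time(), written_tb()

# Compare against the previous run to estimate the write rate and the lifetime.
if LOGFILE.exists():
    last_time, last_tb = map(float, LOGFILE.read_text().splitlines()[-1].split(","))
    tb_per_day = (tb - last_tb) / ((now - last_time) / 86400)
    if tb_per_day > 0:
        years_left = (TBW_RATING_TB - tb) / tb_per_day / 365
        print(f"{tb:.1f} TB written, ~{tb_per_day * 1000:.1f} GB/day,"
              f" TBW reached in ~{years_left:.1f} years")

with LOGFILE.open("a") as f:
    f.write(f"{now},{tb:.6f}\n")
```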
But be aware that Proxmox itself writes some small files to disk every minute for HA. It's just some KB of data, but if your SSDs need to write 10 or 20MB each time because of the write amplification, that alone easily adds up to 20-30GB of writes per day.
If you use storage with snapshot capability like qcow2 or ZFS, or an additional abstraction layer like LVM, it does copy-on-write and journaling and will cause a lot of extra write amplification. Just plain XFS without LVM/RAID/snapshots might be an option for your SSD, combined with a daily/weekly backup of your VMs to a local HDD or a network share so you can restore them if something happens.
I also tried using HDDs for my VMs so SSD wear wouldn't be a problem, but because of the high write amplification the number of IOPS was also multiplied by 10, and the HDDs just weren't able to handle all the small writes.
If you use XFS, mount it with the "noatime" option to prevent a lot of unnecessary writes. If you use ext4, use "noatime" and "nodiratime", and maybe even disable journaling (but I wouldn't do that).