I would post the complete specs of your current host hardware, the current resource allocations and future requirements of all of your VMs, your budget and expectations for this expansion, and any other constraints you can think of that you haven't shared. Also, what country are you in?
Local + replicated ZFS will be the cheapest option and give reasonable performance, but it does not provide realtime, hands-off HA. The storage does not scale beyond one box, and replication runs asynchronously on a schedule rather than in realtime, so a failover can lose whatever was written since the last sync.
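To make that concrete, Proxmox's built-in ZFS replication is just a scheduled asynchronous job per guest. A minimal sketch (VM ID 100 and target node name "pve2" are made up for illustration):

# replicate VM 100's ZFS disks to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# check job state and the time of the last successful sync
pvesr status

Anything written in the window since the last sync is what you stand to lose on a hard failover.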
A SAN will not be cheaper than local ZFS, and it will not be faster (all else being equal). You may also lose thin provisioning and snapshots unless the storage vendor's feature set explicitly provides them. Additionally, it would be a mistake to assume that you get any redundancy from a single SAN enclosure. Yes, central shared storage can survive a host failure, but what about a storage or switch failure? Putting all your storage in one expensive basket does not change your situation much from where it is right now.
A good SAN solution would involve two or more replicated enclosures and LACP networking across a redundant switch stack. Somebody stop me if I'm off base, but the cost and overhead of a SAN relative to the performance and reliability you get in return is extremely high, and as a result I do not believe SAN deployments are particularly common in small/medium enterprise Proxmox environments. There are some very badass SAN products out there, with higher specs than my VM hosts, so I have to wonder about their price and their bottom-line value to the small/medium customer.
The third option, Ceph, is 100% free software, which leaves more of your budget for nodes, drives, and network. In a sufficiently sized cluster it will provide high performance, high integrity, high reliability, and dynamically configurable redundancy, but the overhead required to achieve this will seem high; efficiency increases with cluster size. It's the most expensive free software ever made, but it will still be cheaper than dedicated SANs, and it will be more reliable and come with a higher degree of automation and scalability than replicated ZFS.
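For scale, the Proxmox-side setup of a small Ceph cluster is only a handful of commands per node; the real cost is the hardware and network underneath it. A rough sketch (the network range, device name, and pool/storage names are placeholders, not a recommendation):

pveceph install
pveceph init --network 10.10.10.0/24       # dedicated Ceph network
pveceph mon create                         # on at least 3 nodes
pveceph osd create /dev/sdb                # once per data disk, per node
pveceph pool create vmpool --size 3 --min_size 2
pvesm add rbd vmpool-storage --pool vmpool --content images,rootdir

A 3/2 pool like that stores every block three times, which is where the "expensive free software" overhead comes from.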
A fourth option would be to proceed without any clustered physical infrastructure and pursue your availability/reliability requirements purely at the application layer. InfluxDB has its own clustering capability, so you could have totally separate Influx server VMs participating in an Influx cluster, and you can put multiple copies of your front-end services behind a haproxy+keepalived cluster of front-facing VMs.
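For the load-balancer piece, the configs are short. A minimal sketch (the VIP, addresses, and backend names are invented for illustration): keepalived floats a virtual IP between the proxy VMs, and haproxy on whichever VM holds the VIP spreads traffic across the back-end service VMs.

# /etc/keepalived/keepalived.conf (first proxy VM; the second uses state BACKUP and a lower priority)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.50/24
    }
}

# /etc/haproxy/haproxy.cfg (relevant fragment)
frontend www
    bind 192.168.1.50:80
    default_backend web_servers
backend web_servers
    balance roundrobin
    server web1 192.168.1.61:80 check
    server web2 192.168.1.62:80 check

If one proxy VM dies, keepalived moves the VIP; if one back-end VM dies, haproxy's health checks pull it out of rotation.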
App-layer redundancy can be combined with any physical layer redundancy as well. Just because your hosts and physical storage are redundant and online does not strictly mean that your prod services are always up, not hung, etc.
Just as an example, in one particular environment I have a 5-node PVE+Ceph cluster on a 3/2 replicated storage pool, hosting 5 Percona XtraDB Cluster VMs, 5 apache2 VMs, and 5 haproxy+keepalived VMs, behind a pair of pfSense VMs configured with CARP HA. So I have a distributed hosting environment on distributed storage, with a distributed MySQL cluster on top of that, served by multiple load-balanced web servers behind a dual firewall. In this scheme you could spontaneously yank a drive, host, switch, or VM without noticeably impacting the front-end service, but my ratio of physical-to-prod resources is quite high (5x host, 15x data).