Yes, there are other models using erasure coding (similar to RAID 6), but you should always keep redundancy equivalent to 3 copies (i.e., surviving 2 simultaneous node failures), and it is a good idea to stripe the "RAID" across nodes rather than across disks. So you could do k=3,m=2 and get a RAID6-like layout across 5 nodes, where 3 chunks hold data and 2 hold the erasure code. The catch is that you now need to reach at least 3 nodes instead of 1 to read your data, driving up latency and CPU usage (same problem regular RAID has with disk latency; it can be noticeable even on NVMe), which is why mirroring (3-way replication, Ceph's default) remains the gold standard even on other hypervisor platforms.
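To make the trade-off concrete, here is a back-of-the-envelope sketch comparing the two schemes. The numbers are generic math, not measurements from any particular cluster:

```python
# Storage overhead vs. fault tolerance: 3-way replication vs. EC k=3,m=2.

def replication(copies: int, usable_tb: float) -> dict:
    """N-way replication: every byte is stored `copies` times."""
    return {
        "raw_tb_needed": usable_tb * copies,
        "node_failures_tolerated": copies - 1,
        "nodes_touched_per_read": 1,   # any single replica can serve a read
    }

def erasure_coding(k: int, m: int, usable_tb: float) -> dict:
    """EC k+m: data split into k data chunks plus m parity chunks on k+m nodes."""
    return {
        "raw_tb_needed": usable_tb * (k + m) / k,
        "node_failures_tolerated": m,
        "nodes_touched_per_read": k,   # a read needs any k of the k+m chunks
    }

print(replication(copies=3, usable_tb=100))
# raw: 300 TB, tolerates 2 failures, reads hit 1 node
print(erasure_coding(k=3, m=2, usable_tb=100))
# raw: ~166.7 TB, tolerates 2 failures, reads hit 3 nodes
```

Same failure tolerance either way; EC saves you roughly 45% of the raw capacity but pays for it on every read and write.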
It all depends on your budget, performance demands, etc. Storage is relatively cheap compared with the cost of downtime in most cases. My cluster has had 100% uptime for over 5 years now, despite updating and rebooting every node every few months, plus upgrading, removing, and repairing nodes along the way. I just found a VM that has reached 1200 days of uptime (we do apply updates, but we also have kpatch, so it never needs a reboot).
Proxmox supports other shared storage like SMB, NFS, and iSCSI (handy if you still want to use your old VMware storage for some datastores), but everything has its pros and cons (latency, bandwidth, failure modes, redundancy, backup); how you go about it is entirely up to you. You could even run ZFS or LVM on each node and automatically replicate VM snapshots every 15 minutes, provided you can tolerate losing up to 15 minutes of data; for some people that is an acceptable option, even with spinning disks.
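Proxmox has this built in for ZFS (Datacenter -> Replication), so you would not script it yourself, but conceptually each job boils down to something like the sketch below. The dataset and host names are placeholders I made up, not anything from a real setup:

```python
"""Conceptual sketch of 15-minute incremental ZFS snapshot replication
between two nodes; run it from cron or a systemd timer."""

import subprocess
from datetime import datetime, timezone

DATASET = "rpool/data/vm-100-disk-0"   # hypothetical VM disk dataset
TARGET = "root@pve2"                   # hypothetical replication partner

def replicate(prev_snap: str | None) -> str:
    # Take a timestamped snapshot of the VM disk.
    snap = f"{DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # Incremental send if a common base snapshot exists, full send otherwise.
    send_cmd = (["zfs", "send", "-i", prev_snap, snap]
                if prev_snap else ["zfs", "send", snap])
    with subprocess.Popen(send_cmd, stdout=subprocess.PIPE) as sender:
        subprocess.run(["ssh", TARGET, "zfs", "recv", "-F", DATASET],
                       stdin=sender.stdout, check=True)
    # This snapshot becomes the incremental base for the next run.
    return snap
```

Incremental sends only ship the blocks changed since the last snapshot, which is why a 15-minute cadence stays cheap even over a modest link.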
Ceph focuses on high throughput and data reliability, at a relatively higher cost: ideally you keep it to at most 12-24 disks per node (we have 8 or 12, depending on when a node was purchased). Use it with NVMe storage and you can scale to potentially terabit-level throughput with just a handful of nodes. We then replicate that to an offsite Proxmox Backup Server with spinning disks, which gives us continuous backups and live restore.
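For a feel of where "terabit with a handful of nodes" comes from, here is some rough arithmetic. The drive count and per-drive speed are illustrative assumptions on my part, not measurements:

```python
# Rough aggregate-throughput arithmetic for an NVMe Ceph cluster.
nodes = 5                # a "handful" of nodes
nvme_per_node = 12       # within the 12-24 disks/node guidance above
gbytes_per_drive = 3.0   # conservative sequential read for datacenter NVMe, GB/s

aggregate_gbps = nodes * nvme_per_node * gbytes_per_drive * 8  # GB/s -> Gbit/s
print(f"Theoretical raw read ceiling: {aggregate_gbps:.0f} Gbit/s")  # 1440 Gbit/s
```

In practice the network caps this first (even 2x100GbE per node is only 1000 Gbit/s of fabric here), and replication/EC overhead and CPU cut further into it, but the point stands: with NVMe OSDs, throughput becomes a node-count problem rather than a disk problem.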