I've tried searching the forum and found lots of similar questions, but nothing that directly answers my use case.
We have primarily been a Hyper-V shop, but our SAN recently died and we decided to upgrade our kit and go the Proxmox + Ceph route. We have a 3-node cluster, each node with the following setup:
CPU: 64 x Intel(R) Xeon(R) Gold 6246R CPU @ 3.40GHz (2 Sockets)
RAM: 1TB per node
Boot storage: 2x 960GB SSDs configured as a software ZFS RAID1 mirror during installation for the Proxmox boot drives
Ceph storage: Sonnet M.2 8x4 PCIe 4.0 cards with 8x 4TB Samsung 990 Pro NVMe drives installed (96TB raw across the cluster)
Network:
- 10Gb for the cluster network
- 100Gb dedicated network for Ceph storage
- 25Gb network bridged to the VMs
My understanding is that RAW gives better performance but lacks the QCOW2 feature set, such as snapshotting, so the VM would have to be shut down to back it up? Is that correct? It seems mad in this day and age to have to take a server offline to back it up. RAW also wouldn't allow incremental backups, right?
That led me to start reading about ZFS, which appears to be a better solution, but the more I read, the more confused I get. ZFS isn't a file format, so I still need to decide on RAW vs QCOW2 for the VM disk format, and I guess I also need to decide between Ceph and ZFS as the storage the VM disks live on, as that's the only real comparison Google seems to offer.
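For reference, my (possibly wrong) understanding is that these show up as two separate storage backends in /etc/pve/storage.cfg, something like the following (pool and storage names here are hypothetical, not my actual config):

```
# /etc/pve/storage.cfg -- hypothetical example
# Ceph RBD storage: VM disks are stored as RBD images in the cluster
rbd: ceph-vm
        pool vm-pool
        content images
        krbd 0

# Local ZFS storage: VM disks are stored as zvols on that node only
zfspool: local-zfs
        pool rpool/data
        content images
        sparse 1
```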
Can you use ZFS with Ceph? Should it be done this way? Or am I just losing my mind?
Could someone please help me understand the above in simple terms so I can wrap my head around it?