I run a cluster of 3 nodes; each node has 8 TB of SSD (8x 1 TB) for ZFS and 8 TB of NVMe (4x 1.92 TB) for Ceph, and the Proxmox OS runs on separate disks in a ZFS RAID 1 mirror. The nodes are connected over a 10 Gbit network. I was wondering whether memory for Ceph and ZFS should be sized independently, or whether the two can be combined somehow.
As a rule of thumb for ZFS, you can allocate approximately 1 GB of RAM per 1 TB of storage, plus 4 GB as a base. So in this case it would be 8 + 4 = 12 GB.
To cap the ARC, you edit the file /etc/modprobe.d/zfs.conf and adjust the value of "options zfs zfs_arc_max" (the value is given in bytes).
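For a 12 GB cap, the file would look like this (the value is just the figure for this setup, not a general recommendation):

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 12 GiB (value in bytes: 12 * 1024^3)
options zfs zfs_arc_max=12884901888
```

After editing, run `update-initramfs -u` and reboot, or write the value to /sys/module/zfs/parameters/zfs_arc_max to apply it live.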
For Ceph, the rules of thumb are:
- Monitors: 3 in total => 3 x (1-2 GB) = 3-6 GB
- Managers: 3 in total => 3 x (1-2 GB) = 3-6 GB
- OSDs: 12 in total (4 per node) => 12 x (3-4 GB) = 36-48 GB
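To sanity-check the arithmetic, here is a small sketch; the daemon counts are from this cluster, and the per-daemon figures are the rule-of-thumb ranges above:

```python
# Rule-of-thumb memory budget (counts from this 3-node cluster)
nodes = 3
osds_per_node = 4                 # 4x 1.92 TB NVMe per node
zfs_arc = 4 + 1 * 8               # 4 GB base + 1 GB per TB of ZFS -> 12 GB
proxmox_base = 4                  # GB reserved for the Proxmox host itself

# Cluster-wide Ceph totals (low/high end of the rules of thumb)
ceph_cluster = (nodes * 1 + nodes * 1 + nodes * osds_per_node * 3,
                nodes * 2 + nodes * 2 + nodes * osds_per_node * 4)

# Per node: 1 monitor, 1 manager, 4 OSDs on each node
ceph_node = (1 * 1 + 1 * 1 + osds_per_node * 3,
             1 * 2 + 1 * 2 + osds_per_node * 4)

per_node = (proxmox_base + zfs_arc + ceph_node[0],
            proxmox_base + zfs_arc + ceph_node[1])
print(f"Ceph cluster-wide: {ceph_cluster[0]}-{ceph_cluster[1]} GB")
print(f"Reserved per node before VMs: {per_node[0]}-{per_node[1]} GB")
```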
So cluster-wide that is 42-60 GB for Ceph alone, i.e. roughly 14-20 GB per node (1 monitor, 1 manager, 4 OSDs each). Adding Proxmox (4 GB) and the ZFS ARC (12 GB), each node needs about 30-36 GB reserved just for storage and the host. The OSD memory is tuned via the osd_memory_target value in the /etc/ceph/ceph.conf file.
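osd_memory_target applies per OSD daemon and takes a value in bytes; Ceph's default is 4 GiB. The ceph.conf stanza would look like this (example value, matching the 3-4 GB rule of thumb):

```
[osd]
# Target memory per OSD daemon, in bytes (4 GiB = 4 * 1024^3)
osd_memory_target = 4294967296
```

It can also be changed at runtime with `ceph config set osd osd_memory_target <bytes>`. Note it is a target, not a hard limit, so OSDs can briefly exceed it.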
Then we have to add memory for all the VMs. Is it possible for ZFS to share memory with Ceph?