I have a 4-node PVE cluster with bonded 10Gb NICs. Ceph storage is currently set up with four 1TB SSD OSDs on each node.
I plan on adding at least four platter drives (10TB HDDs) per node for larger storage. My plan was to edit the CRUSH rules so that the boot-drive pools (i.e. ceph-lxc and ceph-vm) would only use the SSDs (by device class), and then create a pool called ceph-workspace that only uses the HDDs (again by class).
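Roughly what I had in mind on the CRUSH side, as a sketch (rule names and PG counts are just placeholders I'd adjust for the cluster):

    # replicated rules restricted to a device class (root=default, failure domain=host)
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd crush rule create-replicated replicated-hdd default host hdd

    # point the existing boot-drive pools at the SSD-only rule
    ceph osd pool set ceph-lxc crush_rule replicated-ssd
    ceph osd pool set ceph-vm crush_rule replicated-ssd

    # new bulk-storage pool on the HDD-only rule
    ceph osd pool create ceph-workspace 128 128 replicated replicated-hdd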
For every container I create, I want a bind-mounted /workspace directory with a quota of around 250GB, with a maximum of around 1TB. All data written to the /workspace mount would go to the platter drives. A lot of the containers will be dealing with "large" data sets, and I don't want that data included in the container backups, ballooning them to an unmanageable size. However, I do want this bind-mounted workspace directory to remain available to highly available containers.
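Concretely, I pictured wiring each container up something like this (the VMID and host path are made up, with the host path living on whatever shared storage ends up backing /workspace); as far as I understand, vzdump skips bind mounts by default, but please correct me if that's wrong:

    # bind a per-container directory on shared storage into the CT at /workspace
    pct set 101 -mp0 /mnt/pve/cephfs/workspace/ct101,mp=/workspace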
Is creating a CephFS on the ceph-workspace pool (with the proper underlying CRUSH rule, of course) the appropriate method for this, or is there a better approach? I've read that a Ceph pool only performs as fast as its slowest disk. Will the per-class CRUSH rules avoid that limitation, so the boot-drive pools are only limited by the slowest SSD and the /workspace mounts are only limited by the slowest HDD? And how do I properly implement the quota for this bind mount? I don't want to add it as a disk under Resources, because then it would be included in the container's backup... correct?
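For reference, this is the rough shape I imagined if CephFS on that pool is the right call (I'd create MDS daemons with pveceph first; the metadata pool name, mount path, and per-directory size are placeholders):

    # metadata pool on the SSD rule, data on the HDD-backed ceph-workspace pool
    ceph osd pool create ceph-workspace-meta 32 32 replicated replicated-ssd
    ceph fs new workspace ceph-workspace-meta ceph-workspace

    # per-directory quota of ~250GB on one container's workspace directory
    mkdir -p /mnt/pve/cephfs/workspace/ct101
    setfattr -n ceph.quota.max_bytes -v 268435456000 /mnt/pve/cephfs/workspace/ct101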
I was also contemplating an /archive bind mount in each container that would point to an NFS mount (from an external NFS server) added as storage in the Proxmox cluster. The bind mount would expose only the appropriate ZFS fileset (with quotas approaching 20-30TB) from the external NFS server. Is there a better method than NFS for this? I was also thinking the external NFS server would provide Samba shares for the Windows VMs and NFS mounts for the fully virtualized *nix machines.
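Something along these lines (server address, export path, storage name, and VMID are all made up):

    # add the external NFS export as shared storage on every node
    pvesm add nfs archive --server 192.168.1.50 --export /tank/archive --content images

    # bind only this container's fileset into the CT at /archive
    pct set 101 -mp1 /mnt/pve/archive/ct101,mp=/archive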
Any best practice inputs would be appreciated.