I have a virtual machine which stores some data (Artifactory). This VM is constantly growing. To avoid having to expand the VM's disk from time to time, I created a CT running NFS which serves a share to the VM. Inside the CT, the shared folder is bind-mounted to a directory on the host. In fact it is not a plain directory but a ZFS dataset. In the VM, the share is mounted and linked to the Artifactory data directory.
This way storage is effectively limitless: I don't have to raise a quota when storage fills up. It is also easy to back up, because the data lives in a dataset, so I just take a snapshot and compress it into an archive. I wrote a script to back up both the CT and the dataset.
BUT! My gut feeling is that there must be a better / more proper way to achieve this goal. What I don't like about my solution:
- it breaks the "clean host" rule
- it needs some additional "hacky" CT options to work (bind mounts and lxc.apparmor)
- it introduces some vulnerabilities (lxc.apparmor unconfined and 777 permissions on the NFS share)
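To make the "hacky" part concrete, the relevant lines in the CT config look roughly like this (container ID and paths are made up for illustration):

```
# Hypothetical excerpt from /etc/pve/lxc/101.conf
# Bind-mount the ZFS dataset from the host into the CT
mp0: /tank/artifactory,mp=/srv/artifactory
# Needed so the NFS kernel server works inside the CT
lxc.apparmor.profile: unconfined
```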
So, are there better ways to achieve this goal (limitless storage for a VM)? At the moment the only alternative I see is a separate disk pool passed through directly to the VM.