Hi all,
I'd like to ask for input on something I'm considering while migrating and consolidating some machines.
Current situation:
* Multiple machines (some containers on PVE, some bare metal) hold several years of accumulated data
* In aggregate, these installations add about 50-100 GB of new data per month
* Older data is rarely edited and infrequently read
* Backup to PBS is already a multi-hour exercise
Target situation:
* One larger PVE host that consolidates the data from the current sources
* Backups frequent enough that a disaster costs less than a week of data
* On-line availability of the whole catalog (on-line as in "no tape drive / Blu-ray / USB HDD", not as in "everything needs to be accessible by smartphone worldwide")
Considering that backups already take a while, I thought of tiering the storage and running three containers with storage servers instead of one:
* One would hold data from roughly the last 6-9 months, which for the near future means less than a TB of storage. It would get weekly backups (see the cron sketch after this list).
* The next would hold data back to about two / two and a half years, requiring some two TB of storage. As it sees few edits, a backup every quarter (i.e., after each refresh from the tier above) should do.
* The last would hold historic data, two years and older. It would see an influx of about half a TB every six months; with backup retention in the tier above, one backup per year should suffice.
* The storage servers in each of these containers would provide year-based directories that are mounted inside the container hosting the front-facing server (one possible wiring is sketched below).
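For the mounts, one option (a sketch, not the only way) would be to skip a network-share layer entirely: if all tiers end up on the same PVE host anyway, each tier's data could live in a host directory or dataset, with the year directories bind-mounted into the front-facing container via pct. All container IDs and paths below are hypothetical:

    # Hypothetical: CT 100 = front-facing YunoHost/Nextcloud container,
    # /tank/tier1..3 = per-tier datasets on the PVE host
    pct set 100 -mp0 /tank/tier1/2025,mp=/srv/data/2025
    pct set 100 -mp1 /tank/tier2/2023,mp=/srv/data/2023
    pct set 100 -mp2 /tank/tier3/2018,mp=/srv/data/2018

One caveat with that layout: vzdump does not include bind mounts in container backups, so the data itself would need its own backup path (e.g. file-level from the host with proxmox-backup-client). Depending on how you slice it, that can actually make the per-tier cadence easier, since each dataset gets backed up independently of any container.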
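For the per-tier backup cadence, PVE's built-in backup jobs accept custom schedules, or vzdump can be driven from cron directly. A minimal cron sketch, assuming hypothetical CTIDs 101-103 for the three tiers and a PBS storage entry named "pbs":

    # /etc/cron.d/tiered-backups -- CTIDs and storage name are placeholders
    # Tier 1 (recent): weekly, Sunday 02:00
    0 2 * * 0        root  vzdump 101 --storage pbs --mode snapshot --quiet 1
    # Tier 2 (warm): quarterly, 1st of Jan/Apr/Jul/Oct, 03:00
    0 3 1 1,4,7,10 * root  vzdump 102 --storage pbs --mode snapshot --quiet 1
    # Tier 3 (historic): yearly, January 2nd, 04:00
    0 4 2 1 *        root  vzdump 103 --storage pbs --mode snapshot --quiet 1

Worth keeping in mind when weighing this against a single big container: PBS deduplicates on the server side, so only changed chunks are uploaded after the first run, but container backups are file-level and still read the whole tree every time. Splitting out the rarely-touched data mainly saves that read time, not transfer volume.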
This is on a home network; data links are gigabit Ethernet, with no near-term option of upgrading. The main container currently runs YunoHost, with Nextcloud as the target for the data.
I guess storage tiering is standard practice once data becomes unwieldy. Are there any best practices in relation to Proxmox? Is there a saner method than the one I described above?