This is more of an "Am I just stupid" post.
I have some legacy VMware VMs that I cannot move to Proxmox because of software licenses that are tied to (virtual) hardware. I have set up and configured our VMware server to perform frequent backups of the /vmware mount point. It's standard stuff...
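In case it helps anyone, a host-level file backup like this is usually just proxmox-backup-client pointed at the mount point. A minimal sketch; the repository (user, host, and datastore names) is a placeholder:

    # placeholder repository in user@realm@host:datastore form
    export PBS_REPOSITORY='backup@pbs@pbs.example.com:backup-01'
    # archive the /vmware mount point as a pxar file archive
    proxmox-backup-client backup vmware.pxar:/vmware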
When we started using PBS there were no namespaces, so we needed a retention policy that worked for EVERYTHING. That became the policy, and we have the storage to accommodate it, so why bother changing it? Most backups are daily... so we retain 142 days of daily backups. RDP and virtual...
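For reference, that retention is just the standard keep-daily prune option. A sketch against a single group (vm/105, the group from our sync logs), dry-run first:

    # show what a 142-day daily policy would keep/remove, without deleting anything
    proxmox-backup-client prune vm/105 --keep-daily 142 --dry-run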
Victor covered this perfectly. But this is also our active example.
Yes, that is what we do. We have local retention times that differ: they are geared more toward user error, so short-interval backups that fade quickly over time.
You do have to make sure you do not...
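To illustrate the "fade quickly over time" idea, a tiered prune along these lines; the keep counts here are made up for the example, not our real numbers:

    # dense recent coverage for user error, thinning out with age
    proxmox-backup-client prune vm/105 --keep-hourly 24 --keep-daily 14 --keep-weekly 8 --dry-run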
OK, I know that this is not an issue that most people are dealing with.
2023-02-02T09:27:20-06:00: sync group vm/105 failed - unable to acquire lock on snapshot directory "/mnt/datastore/PBS-01_ZFS-1/vm/105/2023-02-02T15:30:03Z" - locked by another operation
We take backups of our Logging an...
That makes sense; I think I was just approaching the task from the wrong direction. I think this would work too, which is closer to how I was imagining the problem: *:10/30, based on the provided documentation.
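For anyone who lands here later, both spellings of the calendar event fire at the same times:

    *:10/30    # minute 10, then every 30 minutes -> HH:10 and HH:40
    *:10,40    # explicit list form, identical firing times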
How would I express running a task every 40 min for...
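For the archive: since 40 does not divide 60, a true 40-minute cadence only repeats every two hours, so as far as I can tell it cannot be a single calendar event and would need two jobs, e.g.:

    0/2:00,40    # even hours at :00 and :40 -> 00:00, 00:40, 02:00, 02:40, ...
    1/2:20       # odd hours at :20 -> 01:20, 03:20, 05:20, ...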
So I'm sure this question has been answered, but I must be using the wrong words in my search.
Simply put, I need to add an offset to my schedule. I would like to perform a backup every half hour plus 10 minutes, so *:10 and *:40. What would be the best way to express this in a single argument? Or...
I have to admit, I don't use ZFS! My home lab and work environment are both Ceph. In addition, I am a bad person who runs Proxmox Backup Server in a VM with HDDs OVER NFS! (This is changing.)
So I need to do some research.
I'm not sure what else you would use your hypervisor for. Maybe these other tasks would be better suited to an LXC container rather than the host OS? I am actually interested and would like to hear your use case.
If you want to make backups of a filesystem within your host (but not a container)...
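...the usual route is running proxmox-backup-client on the host itself. A minimal sketch, with the repository again a placeholder:

    # back up the host's root filesystem as a pxar archive
    proxmox-backup-client backup root.pxar:/ --repository 'backup@pbs@pbs.example.com:backup-01'
    # directories can opt out via .pxarexclude files inside the tree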
So one downside to having to use multiple datastores would be a large decrease in the data deduplication factor.
EDIT: It also breaks the nightly catch-all backup that always runs in case a new VM is not configured correctly or a backup is temporarily turned off.
So we have some servers with different backup intervals: 30m (internal databases), 1h (RDP servers), 24h (servers that don't have much local data but still have some regular changes).
We can configure a purge policy in the cluster, but that will run purge and garbage collection as part...
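For context, our three tiers map onto calendar events roughly like this (the daily time here is made up):

    *:0/30    # every 30 minutes  (internal databases)
    *:00      # hourly, on the hour  (RDP servers)
    21:00     # once a day at 21:00  (low-churn servers)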
I'm in a similar boat, testing PBS on AWS Elastic Block Storage with Cold HDD (sc1) volumes as an offsite backup. Our local backup is NFS-based right now as we reorganize, but will be SSD in the near future. I tried something similar with S3, but you really need to use some sort of block storage...
Very true, I cannot argue with that.
Also very true.
In my research of PBS, I got more of the sense that it protects against host destruction in the physical sense: fire, water, etc. We can also rest assured that a compromised VM cannot directly tamper with its own backups because the machine...
I see where you are coming from, and I agree it is a better strategy. But push backups are an inescapable truth in the enterprise. Your most frequent attack vector against backups will be a time bomb, where the attacker taints all of your backups for a month or two and then activates the ransomware. If...
OK, that makes sense.
So let's say I have a server that backs up every hour. Could I limit the transfer to only the last backup of the day, or would I be forced to transfer all of the individual, albeit smaller, backups?
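If the PBS version supports the sync-job transfer-last option, something like this should do it; the job id is a placeholder:

    # only transfer the newest snapshot of each group on a sync run
    proxmox-backup-manager sync-job update offsite-sync --transfer-last 1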