Question

I'm currently working on migrating the file backup system for my office to Amazon S3. The basic backup is working like a charm but I'm looking to make it a little more robust. Specifically I am looking to add version control for the files in the bucket (using Amazon "Versioning") but there is no mention (that I can find) of a way to limit the number of old versions stashed per file (ie: "File x" has a maximum of 5 previous versions at any given time).

Is it possible to place limits on the number of versions stored per file? Or am I stuck with an unlimited propagation of file versions over time if I go this route?

I've been digging through the AWS forums (as well as anywhere else I look) and have yet to find anything. Any info would be greatly appreciated. Thanks!


Solution 2

Currently there is no user-defined version limit, so you would need to list and delete old versions manually.
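The manual approach can be sketched as a small pruning routine: list all versions of a key, sort newest first, and delete everything beyond the newest N. The core logic below is pure Python so it can be tested offline; the version entries mirror the shape returned by boto3's `list_object_versions` ("Versions" list), and the bucket/key names in the commented usage are placeholders.

```python
def versions_to_delete(versions, keep=5):
    """Return the VersionIds of all but the `keep` newest versions.

    `versions` is a list of dicts with at least "VersionId" and
    "LastModified" keys, as returned by S3's ListObjectVersions.
    """
    ordered = sorted(versions, key=lambda v: v["LastModified"], reverse=True)
    return [v["VersionId"] for v in ordered[keep:]]

# With boto3 (not run here), pruning a single key would look roughly like:
#   s3 = boto3.client("s3")
#   resp = s3.list_object_versions(Bucket="my-backups", Prefix="file-x")
#   for vid in versions_to_delete(resp.get("Versions", []), keep=5):
#       s3.delete_object(Bucket="my-backups", Key="file-x", VersionId=vid)
```

Note that you would have to run this after every backup (or on a schedule), and handle pagination of `list_object_versions` for buckets with many versions.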

OTHER TIPS

While I don't believe Amazon has added an option to limit versions to a specific count of revisions (say, the 5 previous revisions), you can certainly use S3 lifecycle management, which supports rules based on time. You can use NoncurrentVersionTransition to move old versions to a different storage class, and NoncurrentVersionExpiration to eventually delete an old revision.

So if you back up your data at a predefined interval (say, once a week) and set your NoncurrentVersionExpiration to 6 weeks, you'll only keep the last 5 (maybe 6?) versions.

Reference: AWS S3 Lifecycle

Yes, it's possible @paul-littlefield @user3606089.

When creating a lifecycle rule, you can specify when noncurrent versions of the objects will expire, and also set "Number of newer versions to retain - Optional".

If you want to keep a certain number of versions, regardless of how old they are, set the expiration to 1 day and put the number of versions you want to keep in the second field.
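In the lifecycle API, the "Number of newer versions to retain" console field corresponds to the NewerNoncurrentVersions setting inside NoncurrentVersionExpiration. A count-based rule matching the 1-day/5-versions setup described above could be sketched like this (rule ID and prefix are illustrative):

```python
# Count-based retention: noncurrent versions become eligible for expiration
# after 1 day, but the 5 newest noncurrent versions are always retained.
count_based_rule = {
    "ID": "keep-5-noncurrent-versions",
    "Filter": {"Prefix": ""},  # empty prefix: applies to the whole bucket
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 1,
        "NewerNoncurrentVersions": 5,
    },
}
```

This rule would go into the "Rules" list of the bucket's lifecycle configuration, alongside any time-based rules.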

(Screenshots in the original answer showed the lifecycle rule settings and the resulting behaviour in the S3 console.)

Licensed under: CC-BY-SA with attribution