Question

I have moved 20,000 files to AWS S3 with the s3cmd command. Now I want to add a Cache-Control header for all images (.jpg).

These files are located in s3://bucket-name/images/. How can I add Cache-Control for all of these images with s3cmd, or is there another way to add the header?

Thanks

Solution

Please try the current upstream master branch (https://github.com/s3tools/s3cmd), as it now has a modify command, used as follows:

./s3cmd --recursive modify --add-header="Cache-Control:max-age=86400" s3://yourbucket/
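Since the question is only about .jpg files, note that the command above would touch every object in the bucket. s3cmd supports shell-style glob filters via `--exclude`/`--include` (check that your version accepts them on `modify`); before running a filtered command against 20,000 objects, you can preview which keys a `*.jpg` pattern selects with a quick local sketch (the key names here are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical object keys, mimicking the bucket layout in the question.
keys = [
    "images/photo1.jpg",
    "images/photo2.jpg",
    "images/readme.txt",
    "videos/clip.mp4",
]

# Shell-style glob matching, the same pattern style s3cmd filters use.
# fnmatch's '*' also matches '/', so '*.jpg' matches keys in any prefix.
jpgs = [k for k in keys if fnmatch(k, "*.jpg")]
print(jpgs)  # ['images/photo1.jpg', 'images/photo2.jpg']
```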

OTHER TIPS

You can also do this with AWS's own CLI:

aws s3 sync /path s3://yourbucket/ --cache-control max-age=604800

My bucket contains mp4, jpg, and other files. The files I wanted to update are stored in a "sub-bucket" (e.g. https://s3.amazonaws.com/my.bucket/sub-directory/my-video.mp4). In my case I only wanted to update the cache control on the mp4 files:

aws s3 cp \
   s3://my.bucket/sub-directory/ s3://my.bucket/sub-directory/ \
   --exclude '*.jpg' --exclude '*.png' \
   --cache-control 'max-age=31104000' \
   --recursive

To test out what this will do, you can use the --dryrun flag:

aws s3 cp --dryrun \
   s3://my.bucket/sub-directory/ s3://my.bucket/sub-directory/ \
   --exclude '*.jpg' --exclude '*.png' \
   --cache-control 'max-age=31104000' \
   --recursive

To adjust metadata such as Cache-Control on an object in S3 without re-uploading it and without any third-party tools, you can do the following with the AWS CLI. It copies the object onto itself while overriding the metadata with your chosen settings:

aws s3api copy-object --copy-source <bucket-name>/<file> --bucket <bucket-name> --key <file> --metadata-directive REPLACE --cache-control "max-age=3600"

You can drive this command from find to apply it to a set of files that already exists in the bucket, as you mention (run it from a local copy of the bucket's directory tree, since find's output is used as the key name):

find . -type f -exec aws s3api copy-object --copy-source <bucket-name>/{} --bucket <bucket-name> --key {} --metadata-directive REPLACE --cache-control "max-age=3600" \;

Note that find emits paths with a leading ./, so strip it (or adjust the starting directory) if your keys do not include it.

Replace <bucket-name> with the name of your bucket.

WARNING: this will overwrite all existing metadata on the files, such as the ACL. Add extra flags to the command to set what you need, e.g. --acl public-read to restore public read access. (Thanks @jackson.)
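If you already have a listing of the bucket's keys (for example from `aws s3 ls --recursive`), a small script can generate one copy-object command per .jpg key instead of relying on a local mirror for find. This is a sketch; the bucket name and key list are hypothetical placeholders:

```python
import shlex

BUCKET = "bucket-name"  # hypothetical; substitute your bucket name

def copy_object_command(bucket: str, key: str) -> str:
    """Build the self-copy command that replaces metadata on an existing key."""
    source = shlex.quote(f"{bucket}/{key}")
    return (
        f"aws s3api copy-object --copy-source {source} "
        f"--bucket {shlex.quote(bucket)} --key {shlex.quote(key)} "
        '--metadata-directive REPLACE --cache-control "max-age=3600"'
    )

# Hypothetical key listing; in practice, feed in your real bucket listing.
keys = ["images/a.jpg", "images/b.jpg", "notes.txt"]
for key in keys:
    if key.endswith(".jpg"):  # only touch the images
        print(copy_object_command(BUCKET, key))
```

Piping the printed commands through a shell (or `xargs`) then applies the change only to the image keys.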

If you want to avoid third party tools, and this is a one-time task, you can use the AWS console.

  1. Browse to your S3 bucket
  2. Select all of the objects you want to change
  3. Click Actions -> Change metadata
  4. Select Cache-Control as the key and enter whatever directive you want as the value
  5. Save

PUT /ObjectName HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: date
Cache-Control: max-age=<value in seconds>
Authorization: signatureValue

Every metadata setting is a key-value pair. The cache-control key is "Cache-Control" and the value is "max-age=<time in seconds for which you want the object to be served from cache>".
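In code, that key-value pair is just another request header. A minimal sketch of the headers you would send with a PUT Object request (the Authorization value here is a placeholder; real requests must carry an AWS Signature Version 4 signature):

```python
from email.utils import formatdate

max_age = 86400  # one day, in seconds

# Headers for a PUT Object request. Authorization is a placeholder,
# since real S3 requests must be signed (SigV4).
headers = {
    "Host": "bucket-name.s3.amazonaws.com",
    "Date": formatdate(usegmt=True),
    "Cache-Control": f"max-age={max_age}",
    "Authorization": "<signatureValue>",
}
print(headers["Cache-Control"])  # max-age=86400
```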

You can set a Cache-Control custom header for Amazon S3 objects by sending an HTTP PUT request to the Amazon S3 server with the appropriate headers, in two ways:

  1. Set Cache-Control metadata using the Amazon S3 REST API PUT Object request - If you are a programmer, you can write your own program that uses the Amazon S3 REST or SOAP APIs to set custom headers with the PUT Object request. (This answer only covers the REST APIs; see the AWS documentation for details on the SOAP APIs.)
  2. Set Cache-Control metadata using the Bucket Explorer user interface - If you prefer to set custom HTTP headers like Cache-Control with mouse clicks instead of writing a program, you can use Bucket Explorer's user interface. With this custom HTTP header, you can specify the caching behavior to be followed along the request/response chain and prevent caches from interfering with the request or response.

For more information, see "How to Set Cache Control Header for Amazon S3 Object?"

(Since the OP asked for any other way)

You can also do it via the AWS CLI, e.g. (version aws-cli/1.8.8 Python/2.7.2 Darwin/12.5.0):

aws s3api put-object \
--bucket mybucket \
--key my/key \
--cache-control max-age=1 \
--body myfile.txt

Note, however, that this overwrites any existing object with the uploaded body.

This is currently the most reliable way to do it in bulk without running into the errors mentioned in the other answers:

aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
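The max-age values scattered through these answers are just durations in seconds, so it is worth double-checking the arithmetic before caching 20,000 images for the wrong length of time:

```python
# Common Cache-Control durations, expressed in seconds.
day = 24 * 3600
print(day)        # 86400    (one day,     used in the s3cmd modify example)
print(7 * day)    # 604800   (one week,    used in the aws s3 sync example)
print(30 * day)   # 2592000  (thirty days, used in the example above)
print(360 * day)  # 31104000 (~one year,   used in the mp4 example)
```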

Upgrade s3cmd to version 1.5.1 and the issue will be resolved.

Another really simple way to do this is to use S3 Browser (http://s3browser.com/). Simply Shift-click or Ctrl+A to select all the images you want, then go to the 'Http Headers' tab, click 'Add new header', and then 'Apply changes'. It automatically kept all my other permissions and headers.

If you use S3 a lot, it's a nice app anyway, especially for enormous uploads (there's nothing better in the world of FTP, Dropbox, or otherwise!).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow