I have a file on Amazon S3 with the suffix ".bin.gz". I want web browsers to treat it as a gzipped HTML file. I can go into the Amazon S3 web console (https://console.aws.amazon.com/s3/home), navigate to the file, and select it. There, under Properties, I can go to the Metadata tab and add the following directives:

Content-Type: text/html
Content-Encoding: gzip

This works as expected. That's the easy part.

Now, I would like to do the same thing with hundreds (or possibly millions) of files at the time they are PUT to S3.

I have tried using s3cmd with the --add-header option; however, that gives me a signature error when I try to set the Content-Type. Further, I am pretty sure that doing that will only affect the headers sent at the time of the PUT operation, not the metadata that is stored with the object.

So, I am looking for a way to do this, ideally with s3cmd. If that is not possible, I would appreciate it if somebody could suggest a Python library that is capable of applying metadata to a file on S3.

There must be a way to do this without having to manually set it in the console.


Solution

Use -m to set the MIME type (which will be sent as the Content-Type header on subsequent requests):

s3cmd -m text/html --add-header='Content-Encoding: gzip' put [files] s3://...
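
For objects that are already in the bucket, or if you would rather do this from Python as the question mentions, boto3 (the AWS SDK for Python) can replace an object's metadata by copying the object onto itself. A minimal sketch, using placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

bucket = "my-bucket"           # placeholder bucket name
key = "path/to/file.bin.gz"    # placeholder object key

# S3 metadata cannot be edited in place, so the object is copied onto
# itself; MetadataDirective="REPLACE" tells S3 to use the headers
# supplied here instead of carrying over the original metadata.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    ContentType="text/html",
    ContentEncoding="gzip",
    MetadataDirective="REPLACE",
)

To set the headers at upload time instead, put_object accepts the same ContentType and ContentEncoding arguments. Note that copy_object only works for objects up to 5 GB; larger objects require a multipart copy.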