Question

How do you set the content type on a file in a static-website-enabled S3 bucket via the Python boto module?

I'm doing:

from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection(access_key_id, secret_access_key)
b = conn.create_bucket('mybucket')  # create_bucket returns the Bucket, no separate get_bucket needed
b.set_acl('public-read')

fn = 'index.html'
template = '<html>blah</html>'
k = Key(b)
k.key = fn
k.set_contents_from_string(template)
k.set_acl('public-read')
k.set_metadata('Content-Type', 'text/html')

However, when I access it from http://mybucket.s3-website-us-east-1.amazonaws.com/index.html my browser prompts me to download the file instead of simply serving it as a webpage.

Looking at the metadata in the S3 Management Console shows that the Content-Type has actually been set to "application/octet-stream". If I manually change it in the console, I can access the page normally, but when I run my script again, it resets back to the wrong content type.
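I can also confirm the stored type from boto itself (get_key issues a HEAD request and populates the key's attributes):

check = b.get_key(fn)
print(check.content_type)  # prints 'application/octet-stream'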

What am I doing wrong?


Solution

The set_metadata method is really for setting user metadata on S3 objects. Many of the standard HTTP metadata fields have first-class attributes to represent them, e.g. content_type. Also, you want to set the metadata before you actually send the object to S3. Something like this should work:

import boto

conn = boto.connect_s3()  # picks up credentials from the environment/boto config
bucket = conn.get_bucket('mybucket')  # assumes the bucket already exists
mystring = '<html>blah</html>'
key = bucket.new_key('mykey')
key.content_type = 'text/html'  # set before uploading, not after
key.set_contents_from_string(mystring, policy='public-read')

Note that you can set canned ACL policies at the time you write the object to S3, which saves a second API call.
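For example, these two snippets should produce the same result (reusing key and mystring from above), but the first needs one fewer request:

# One request: upload and apply the canned ACL together
key.set_contents_from_string(mystring, policy='public-read')

# Two requests: upload, then set the ACL separately
key.set_contents_from_string(mystring)
key.set_acl('public-read')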

Other tips

For people who need a one-liner for this:

import boto3

s3 = boto3.resource('s3')
s3.Bucket('bucketName').put_object(
    Key='keyName',
    Body='content or file data',
    ContentType='text/html',  # any valid MIME type
    ACL='public-read',        # see the supported values below
)

Supported ACL values:

'private'|'public-read'|'public-read-write'|'authenticated-read'|'aws-exec-read'|'bucket-owner-read'|'bucket-owner-full-control'

The arguments supported by put_object are documented here: https://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object
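If you are uploading from a file rather than a string, here is a sketch that guesses the MIME type with the standard mimetypes module (bucket, key, and file names are placeholders):

import mimetypes
import boto3

s3 = boto3.resource('s3')

path = 'index.html'  # placeholder file name
content_type, _ = mimetypes.guess_type(path)

s3.Bucket('bucketName').upload_file(
    path,
    'keyName',
    ExtraArgs={
        'ContentType': content_type or 'application/octet-stream',  # fall back if the extension is unknown
        'ACL': 'public-read',
    },
)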

I wasn't able to get the above solution to persist my metadata changes.

Perhaps it's because I was uploading from a file and boto was resetting the content type based on the guessed mimetype? I'm also uploading .m3u8 and .ts files for HLS encoding, so that could interfere as well.

Anyway, here's what worked for me.

import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')
key_m3u8 = Key(bucket)
key_m3u8.key = s3folder + "/" + s3keyname  # s3folder and s3keyname defined elsewhere
key_m3u8.metadata = {"Content-Type": "application/x-mpegURL",
                     "Cache-Control": "public,max-age=8"}
key_m3u8.set_contents_from_filename("path_to_my_file", policy="public-read")
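Alternatively, boto's set_contents_from_* methods accept a headers dict, so passing these as HTTP headers at upload time should work too (a sketch, reusing the names above):

key_m3u8.set_contents_from_filename(
    "path_to_my_file",
    headers={"Content-Type": "application/x-mpegURL",
             "Cache-Control": "public,max-age=8"},
    policy="public-read",
)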

If you use the AWS S3 Bitbucket Pipelines Python script, add a content_type parameter (a sketch of a possible upload_to_s3 body follows at the end of this answer):

s3_upload.py

def upload_to_s3(bucket, artefact, bucket_key, content_type):
    ...

def main():
    ...
    parser.add_argument("content_type", help="Content type of the file")
    ...
    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key, args.content_type):
        ...

then modify bitbucket-pipelines.yml as follows:

...
- python s3_upload.py bucket_name file key content_type 
...

where the content_type parameter can be any valid MIME type (IANA media type).
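For completeness, here is a minimal sketch of what upload_to_s3 might look like with boto3; the body is elided in the original, so this is an assumption:

import boto3

def upload_to_s3(bucket, artefact, bucket_key, content_type):
    # Sketch only: upload the artefact with the given content type.
    try:
        client = boto3.client('s3')
        client.upload_file(artefact, bucket, bucket_key,
                           ExtraArgs={'ContentType': content_type})
    except Exception as err:
        print(err)
        return False
    return True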
