Question

I have a zip file loaded into memory (it is not persisted on disk). The zip file contains JPG images. I am trying to upload each JPG to S3, but I'm getting an error.

import io

# already have an opened zipfile stored in zip_file
# already connected to s3

files = zip_file.namelist()

for f in files:
    im = io.BytesIO(zip_file.read(f))
    s3_key.key = f
    s3_key.set_contents_from_stream(im)

I get the following error:

BotoClientError: BotoClientError: s3 does not support chunked transfer

What am I doing wrong?

Solution

Here is the solution. I was overthinking the problem.

files = zip_file.namelist()

for f in files:
    data = zip_file.read(f)
    s3_key.key = f
    s3_key.set_contents_from_string(data)

That's all it took.
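For context, here is a self-contained sketch of the zip-reading side of this. It builds a small in-memory zip to stand in for the one already loaded, and `uploaded` is a stand-in dict so the example runs without S3 credentials; in the real code, the commented-out boto calls replace it:

```python
import io
import zipfile

# Build a small zip in memory to stand in for the one already loaded.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.jpg", b"jpeg-bytes-a")
    zf.writestr("b.jpg", b"jpeg-bytes-b")
buf.seek(0)

zip_file = zipfile.ZipFile(buf)
uploaded = {}  # stand-in for S3

for f in zip_file.namelist():
    data = zip_file.read(f)  # the whole member as a byte string
    uploaded[f] = data
    # with boto, instead of the dict:
    #     s3_key.key = f
    #     s3_key.set_contents_from_string(data)
```

The key point is that `zip_file.read(f)` already returns the member's full contents as a byte string, so there is no need to wrap it in a `BytesIO` stream before uploading.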

OTHER TIPS

Boto supports other storage services, such as Google Cloud Storage, in addition to S3. The set_contents_from_stream method works only with services that support chunked transfer encoding (see https://codereview.appspot.com/4515170); S3 does not (see the Amazon S3 technical FAQs at http://aws.amazon.com/articles/1109).

It's unfortunate, but you can't upload directly from a stream to S3 with boto; you have to read the data into memory first.
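If what you are handed is a file-like stream, one workaround is to drain it into a byte string first and fall back to set_contents_from_string. This is a sketch; `upload_stream` is a hypothetical helper, and the boto call it would make is shown as a comment:

```python
import io

def upload_stream(s3_key_obj, name, stream):
    """Read the whole stream into memory, then upload it as one string.

    This avoids boto's chunked-transfer path, which S3 rejects.
    """
    data = stream.read()
    # with boto:
    #     s3_key_obj.key = name
    #     s3_key_obj.set_contents_from_string(data)
    return name, data

name, data = upload_stream(None, "photo.jpg", io.BytesIO(b"jpeg-bytes"))
```

The trade-off is memory: the entire object must fit in RAM, which is fine for individual JPGs but worth keeping in mind for large files.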

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow