Question

I'm trying to upload a file to S3 using boto on a Windows 7 machine, but I keep getting the error [Errno 10054] An existing connection was forcibly closed by the remote host.

My code to interact with S3 looks like this:

from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection(Access_Key_ID, Secret_Key)
bucket = conn.lookup(bucket_name)

k = Key(bucket)
k.key = 'akeynameformyfile'
k.set_contents_from_filename(source_path_of_file_to_upload)

The upload works fine on the same machine using the AWS CLI with the following command:

aws s3 cp filename.exe s3://bucketname/ttt

The file is about 200 MB.

My OS is Windows 7; Python runs through Anaconda with all packages up to date. Boto is version 2.25.0.

This code all runs fine from a CentOS box on the same network, so is this a Windows issue?

Any help would be much appreciated, thanks!

Debug Output Below

send: 'HEAD / HTTP/1.1\r\nHost: ACCESS_KEY_ID.test7.s3.amazonaws.com\r\nAccept-Encoding: identity\r\nDate: Wed, 14 May 2014 22:44:31 GMT\r\nContent-Length: 0\r\nAuthorization: AWS ACCESS_KEY_ID:SOME_STUFF=\r\nUser-Agent: Boto/2.25.0 Python/2.7.5 Windows/7\r\n\r\n'
reply: 'HTTP/1.1 307 Temporary Redirect\r\n'
header: x-amz-request-id: 8A3D34FB0E0FD8E4
header: x-amz-id-2: PwG9yzOVwxy21LmcpQ0jAaMchG0baCrfEhAU9fstlPUI307Qxth32uNAOVv72B2L
header: Location: https://ACCESS_KEY_ID.test7.s3-ap-southeast-2.amazonaws.com/
header: Content-Type: application/xml
header: Transfer-Encoding: chunked
header: Date: Wed, 14 May 2014 22:44:31 GMT
header: Server: AmazonS3
send: 'HEAD / HTTP/1.1\r\nHost: ACCESS_KEY_ID.test7.s3-ap-southeast-2.amazonaws.com\r\nAccept-Encoding: identity\r\nDate: Wed, 14 May 2014 22:44:31 GMT\r\nContent-Length: 0\r\nAuthorization: AWS ACCESS_KEY_ID:SOME_STUFF=\r\nUser-Agent: Boto/2.25.0 Python/2.7.5 Windows/7\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: x-amz-id-2: erataRIpbOrEwOU72VUAqU9AGJ4/kX5z1/UD7rJQy9laKDgOyTyVKABMab8f6wGN
header: x-amz-request-id: 2A7BECC45C9BAE7A
header: Date: Wed, 14 May 2014 22:44:33 GMT
header: Content-Type: application/xml
header: Transfer-Encoding: chunked
header: Server: AmazonS3
send: 'PUT /akeynameformyfile HTTP/1.1\r\nHost: ACCESS_KEY_ID.test7.s3.amazonaws.com\r\nAccept-Encoding: identity\r\nContent-Length: 242642944\r\nContent-MD5: xYOiNcyFKGY1Y/HsYwHQeg==\r\nExpect: 100-Continue\r\nDate: Wed, 14 May 2014 22:44:33 GMT\r\nUser-Agent: Boto/2.25.0 Python/2.7.5 Windows/7\r\nContent-Type: application/octet-stream\r\nAuthorization: AWS ACCESS_KEY_ID:pWs3KwRv9Q5wDnz4dHD3JwvCy/w=\r\n\r\n'

---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input> in <module>()
     12 k = Key(bucket)
     13 k.key = 'akeynameformyfile'
---> 14 k.set_contents_from_filename(full_path_of_file_to_upload)

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\key.pyc in set_contents_from_filename(self, filename, headers, replace, cb, num_cb, policy, md5, reduced_redundancy, encrypt_key)
   1313             num_cb, policy, md5,
   1314             reduced_redundancy,
-> 1315             encrypt_key=encrypt_key)
   1316
   1317     def set_contents_from_string(self, string_data, headers=None, replace=True,

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\key.pyc in set_contents_from_file(self, fp, headers, replace, cb, num_cb, policy, md5, reduced_redundancy, query_args, encrypt_key, size, rewind)
   1244         self.send_file(fp, headers=headers, cb=cb, num_cb=num_cb,
   1245                        query_args=query_args,
-> 1246                        chunked_transfer=chunked_transfer, size=size)
   1247         # return number of bytes written.
   1248         return self.size

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\key.pyc in send_file(self, fp, headers, cb, num_cb, query_args, chunked_transfer, size)
    723         self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
    724                                  query_args=query_args,
--> 725                                  chunked_transfer=chunked_transfer, size=size)
    726
    727     def _send_file_internal(self, fp, headers=None, cb=None, num_cb=10,

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\key.pyc in _send_file_internal(self, fp, headers, cb, num_cb, query_args, chunked_transfer, size, hash_algs)
    912             headers,
    913             sender=sender,
--> 914             query_args=query_args
    915         )
    916         self.handle_version_headers(resp, force=True)

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\connection.pyc in make_request(self, method, bucket, key, headers, data, query_args, sender, override_num_retries, retry_handler)
    631             data, host, auth_path, sender,
    632             override_num_retries=override_num_retries,
--> 633             retry_handler=retry_handler
    634         )

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\connection.pyc in make_request(self, method, path, headers, data, host, auth_path, sender, override_num_retries, params, retry_handler)
   1028                                                  params, headers, data, host)
   1029         return self._mexe(http_request, sender, override_num_retries,
-> 1030                           retry_handler=retry_handler)
   1031
   1032     def close(self):

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\connection.pyc in _mexe(self, request, sender, override_num_retries, retry_handler)
    905                 if callable(sender):
    906                     response = sender(connection, request.method, request.path,
--> 907                                       request.body, request.headers)
    908                 else:
    909                     connection.request(request.method, request.path,

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\site-packages\boto\s3\key.pyc in sender(http_conn, method, path, data, headers)
    813                     http_conn.send('\r\n')
    814                 else:
--> 815                     http_conn.send(chunk)
    816                 for alg in digesters:
    817                     digesters[alg].update(chunk)

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\httplib.pyc in send(self, data)
    803                 datablock = data.read(blocksize)
    804             else:
--> 805                 self.sock.sendall(data)
    806
    807     def _output(self, s):

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\ssl.pyc in sendall(self, data, flags)
    227             count = 0
    228             while (count < amount):
--> 229                 v = self.send(data[count:])
    230                 count += v
    231             return amount

C:\Users\username\AppData\Local\Continuum\Anaconda\lib\ssl.pyc in send(self, data, flags)
    196             while True:
    197                 try:
--> 198                     v = self._sslobj.write(data)
    199                 except SSLError, x:
    200                     if x.args[0] == SSL_ERROR_WANT_READ:

error: [Errno 10054] An existing connection was forcibly closed by the remote host


Solution

@garnaat made a suggestion in the comments above that solved this for me. Thanks!! For some reason, connecting to the universal endpoint and then trying to upload to a bucket in ap-southeast-2 fails, but if we use the connect_to_region function to initiate the connection and specify the endpoint we want, everything works a-ok! Thanks again, and a working example is below.

from boto.s3 import connect_to_region
from boto.s3.connection import Location
from boto.s3.key import Key

conn = connect_to_region(Location.APSoutheast2,
                         aws_access_key_id=conf.Access_Key_ID,
                         aws_secret_access_key=conf.Secret_Key)
bucket = conn.lookup(bucket_name) # bucket is located in Location.APSoutheast2

k = Key(bucket)
k.key = 'akeynameformyfile'
k.set_contents_from_filename(source_path_of_file_to_upload)
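
As an aside (not from the original answer), the same redirect problem can presumably also be avoided by pointing S3Connection straight at the regional endpoint via its host argument, which boto 2 accepts. A minimal sketch, using the same placeholder names as the question:

from boto.s3.connection import S3Connection
from boto.s3.key import Key

# Connect directly to the ap-southeast-2 endpoint so the request never
# goes through the universal endpoint and never receives a 307 redirect.
conn = S3Connection(Access_Key_ID, Secret_Key,
                    host='s3-ap-southeast-2.amazonaws.com')
bucket = conn.lookup(bucket_name)

k = Key(bucket)
k.key = 'akeynameformyfile'
k.set_contents_from_filename(source_path_of_file_to_upload)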

OTHER TIPS

set_contents_from_filename expects the path of a local file to upload, set_contents_from_file expects an already-open file object, and set_contents_from_string expects the data itself as a string. Make sure the variant you call matches what you are passing in: to upload from an open file handle, use set_contents_from_file; if you already have the contents in memory, pass them to set_contents_from_string.
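
For illustration, a minimal sketch of the three variants (the bucket, key name, and paths are placeholders):

from boto.s3.key import Key

k = Key(bucket)  # 'bucket' obtained as in the examples above
k.key = 'akeynameformyfile'

# 1) From a path to a local file:
k.set_contents_from_filename('C:\\path\\to\\file.exe')

# 2) From an already-open file object:
with open('C:\\path\\to\\file.exe', 'rb') as fp:
    k.set_contents_from_file(fp)

# 3) From data already held in memory:
k.set_contents_from_string('some small payload')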

All of the above answers provide technical solutions, but my first experience with this suggested it was an environmental issue.

I have basically the same code as in the original question. I had no problems the first time I ran it, then hit the same error message last night. I found this answer and many others, but decided that since my code had worked well the first time (more than 20,000 files uploaded), the problem was probably with my machine, with AWS, or possibly some point in between.

I shut down my computer, went home, and when I restarted it this morning the original code worked fine.

One of the observations that led me to this conclusion is that, while searching for the answer, I noticed many posts about this problem that all seemed to occur at about the same time, which got me thinking that maybe AWS was having problems. Again, I can't be sure whether it was my computer, the AWS servers, or some point in between, but the original code ran fine when I came back to it this morning.

Here is the code I am using right now (as you can see, very similar to the code in the original question):

import boto
import glob

AWS_KEY = "mykey"
AWS_SECRET = "mySecret"
bucket_name = "myBucket"
numbers = range(0, 20000, 500)  # print progress every 500 files
to_upload = glob.glob('E:\\MTurkTablesSelected\\*.htm')

s3 = boto.connect_s3(aws_access_key_id=AWS_KEY, aws_secret_access_key=AWS_SECRET)
bucket = s3.get_bucket(bucket_name)
for n, file_path in enumerate(to_upload):
    upload_key = file_path.split('\\')[-1]  # use the bare file name as the key
    key = bucket.new_key(upload_key)
    key.set_contents_from_filename(file_path)
    key.set_acl('public-read')
    if n in numbers:
        print n

I am now more than 10,000 files into a large upload and am not having any problems.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow