I'm trying to parallelize the retrieval of data from a remote API. The remote API doesn't have any bulk capability, so for each object I need I have to make a separate GET request.
I've added gevent into the mix. Sometimes it works great, but if I rerun the same set of requests, 50 of 100 will fail with this:
Traceback (most recent call last):
...
File "/Users/---/venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1570, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/Users/---/venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1317, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/Users/---/venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1258, in _conn_request
raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
ServerNotFoundError: Unable to find the server at my.remote.host
<Greenlet at 0x10c6eacd0: function_name(<Object1>, <Object2>, u'zebra', True)> failed with ServerNotFoundError
Any thoughts on ways to resolve this? Is this a result of too many requests too quickly? If so, is there an easy way to throttle the number of greenlets?
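For context, here's a minimal sketch of the kind of throttling I have in mind, using `gevent.pool.Pool` to cap concurrency. The `fetch` function is a hypothetical stand-in for my real httplib2 GET call:

```python
# Sketch: cap the number of simultaneous greenlets with gevent.pool.Pool.
# fetch() is a placeholder for the real per-object GET request.
from gevent.pool import Pool

def fetch(object_id):
    # In reality this would issue an httplib2 request to the remote API.
    return "result-%s" % object_id

pool = Pool(10)  # at most 10 greenlets run concurrently
results = pool.map(fetch, range(100))  # blocks until all 100 complete, preserves order
```

Would capping the pool size like this avoid the DNS-lookup failures, or is the `ServerNotFoundError` caused by something else?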