Standard exponential backoff as shown in the Google documentation is not the correct way to deal with rate limit errors. You will simply overload Drive with retries and make the problem worse.
Also, sending multiple updates in a batch is almost guaranteed to trigger rate limit errors if you have more than 20 or so updates, so I wouldn't do that either.
My suggestions are:
- Don't use batch, or if you do, keep each batch below 20 updates
- If you get a rate limit error, back off for at least 5 seconds before retrying
- Try to avoid rate limit errors in the first place by keeping batches below 20 updates, or keeping the submission rate below one request every 2 seconds
These numbers are all undocumented and subject to change.
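A minimal sketch of the approach in Python, using those undocumented numbers as constants. `send_update` is a hypothetical callable standing in for whatever Drive client call you use; it's assumed to raise `RateLimitError` on a 403 rate limit response:

```python
import time

MIN_INTERVAL = 2.0        # undocumented guess: at most one request every 2 seconds
RATE_LIMIT_BACKOFF = 5.0  # undocumented guess: wait at least 5s after a 403


class RateLimitError(Exception):
    """Stand-in for the 403 rate limit error returned by Drive."""


def throttled_updates(updates, send_update, sleep=time.sleep, max_retries=5):
    """Send updates one at a time, pacing requests and backing off on 403s.

    `send_update` is a hypothetical callable performing one Drive update;
    `sleep` is injectable so the pacing can be tested without waiting.
    """
    results = []
    last_sent = None
    for update in updates:
        for attempt in range(max_retries):
            # keep the submission rate below one request per MIN_INTERVAL seconds
            if last_sent is not None:
                wait = MIN_INTERVAL - (time.monotonic() - last_sent)
                if wait > 0:
                    sleep(wait)
            last_sent = time.monotonic()
            try:
                results.append(send_update(update))
                break
            except RateLimitError:
                # back off for at least 5 seconds before retrying
                sleep(RATE_LIMIT_BACKOFF)
        else:
            raise RuntimeError(f"update {update!r} failed after {max_retries} retries")
    return results
```

The plain loop (rather than a batch request) is deliberate: it keeps you well under the batch-size threshold described above.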
The reason for the third suggestion is that there is (or was, who knows) a bug in Drive whereby an update that returned a rate limit error had sometimes actually succeeded, so you can end up inserting duplicate files. See 403 rate limit on insert sometimes succeeds
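Because of that bug, a safe retry of an insert should first check whether the file already exists. A sketch, where `insert` and `find_by_name` are hypothetical wrappers around your Drive client (`insert` raises `RateLimitError` on a 403, `find_by_name` returns an existing file's id or `None`):

```python
class RateLimitError(Exception):
    """Stand-in for the 403 rate limit error returned by Drive."""


def safe_insert(name, insert, find_by_name):
    """Insert a file, guarding against the 403-but-actually-succeeded bug.

    If the insert raises a rate limit error, check whether the file was
    created anyway before letting the caller retry; otherwise a blind
    retry can create a duplicate.
    """
    try:
        return insert(name)
    except RateLimitError:
        # the "failed" insert may have succeeded on the server side
        existing = find_by_name(name)
        if existing is not None:
            return existing
        raise  # genuinely failed; caller can back off and retry
```

This only helps if your file names are unique enough that a name lookup identifies the file you just tried to insert; otherwise you would need some other marker (e.g. a client-generated property) to detect the phantom success.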