Question

I'm querying the GitHub API from the client using JavaScript (on this page).

There are 14 API calls each time the page loads, which means I will end up hitting GitHub's API rate limit of 5000 calls per hour pretty fast.

Most caching strategies I've seen assume that you have access to a server, but in my case I'm running a purely static Middleman site.

So my question is this: how can I cache API requests from the client? Are there third-party apps that provide this service?

(Note that my use case is many different clients hitting the page (e.g. it has been linked from Hacker News), not a single client refreshing. So local caching wouldn't really help much.)


Solution

Agreed with the Firebase (or other separate data store) alternative from @David: since you don't have access to the server where the application sits, it lets you create a persistent cache. It's basically another data store, and you can update your logic in Middleman to either make a fresh call to the GitHub API or pull from the data saved in Firebase, based on checks you do when a person visits that page. Check out the logic in the diagram below:

Web sequence diagram for checking Firebase and falling back to the GitHub API
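
Here's a rough sketch of that flow in client-side JavaScript, using the legacy Firebase web SDK (new Firebase(...)). The database URL, the githubCache path, and the 5-minute freshness window are assumptions for illustration, not part of the original answer:

// Check Firebase for a cached copy before calling the GitHub API.
var cacheRef = new Firebase('https://YOUR-APP.firebaseio.com/githubCache');
var MAX_AGE_MS = 5 * 60 * 1000; // treat cached data older than 5 minutes as stale

// Firebase keys may not contain '.', '/', '#', '$', '[' or ']',
// so sanitize the API URL before using it as a key.
function cacheKey(url) {
  return url.replace(/[.\/#$\[\]]/g, '_');
}

function getWithFirebaseCache(url, callback) {
  var ref = cacheRef.child(cacheKey(url));
  ref.once('value', function (snapshot) {
    var entry = snapshot.val();
    if (entry && Date.now() - entry.stored < MAX_AGE_MS) {
      // Fresh enough: serve the copy an earlier visitor already stored.
      callback(entry.value);
      return;
    }
    // Missing or stale: call GitHub, then write the result back for everyone.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
      var value = JSON.parse(xhr.responseText);
      ref.set({ stored: Date.now(), value: value });
      callback(value);
    };
    xhr.send();
  });
}

Note that writing from the client means your Firebase security rules have to allow it, and every visitor still makes one round trip to Firebase per call; the win is that only one visitor per freshness window actually hits GitHub.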

OTHER TIPS

You can cache a single client's page by using local storage or a cookie. That way, if the user refreshes, you can check whether you actually need to query the API again. This would be fine if your user base were small.

This type of caching is typically done on the server, because without any caching you are limiting yourself to roughly 357 page loads per hour at best (5000 calls ÷ 14 calls per load).

To cache on the client side, store the data in local storage and log the time of the query. Then decide on an interval (let's say 5 minutes). Prior to any refresh or page load, look at the user's local storage and see if the query ran within the last 5 minutes. If it did, read from local storage; if not, query the API again. This only applies per user, but with each user querying at most every 5 minutes (12 times per hour × 14 calls ≈ 168 calls per user), it would allow roughly 30 continuously active users per hour.

http://diveintohtml5.info/storage.html
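
For a concrete picture, here's a minimal sketch of that local-storage approach; the 5-minute TTL and the example repository URL are just placeholders:

// Cache GitHub API responses in localStorage, keyed by URL, with a 5-minute TTL.
var TTL_MS = 5 * 60 * 1000;

function cachedGitHubGet(url, callback) {
  var cached = localStorage.getItem(url);
  if (cached) {
    var entry = JSON.parse(cached);
    // Serve from cache if the stored copy is younger than the TTL.
    if (Date.now() - entry.stored < TTL_MS) {
      callback(entry.value);
      return;
    }
  }
  // Cache miss or stale entry: hit the GitHub API and store the result.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    var value = JSON.parse(xhr.responseText);
    localStorage.setItem(url, JSON.stringify({ stored: Date.now(), value: value }));
    callback(value);
  };
  xhr.send();
}

// Usage:
cachedGitHubGet('https://api.github.com/repos/middleman/middleman', function (repo) {
  console.log(repo.stargazers_count);
});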

No server, eh? You could use something like Parse. Make a Parse object, set the key to the particular GitHub API URI, and set the value to something like this:

{
  stored: <Date>,
  value: <stringified JSON returned from GitHub API call>
}

Then when someone loads your page, first call Parse to see if you already have a cached version for that particular API call. If you don't, make the call to GitHub's API and then store the result on Parse (with stored set to the current DateTime so you can check for staleness later).

If Parse does have a cached version stored, check the stored value to see how old it is - if it is stale, make a fresh call to GitHub, and store the results back into Parse. Otherwise, just parse the JSON string from value and you're good to go.

This is assuming that you want individual caching control over each of the 14 GitHub API calls. If you don't, then just store the combined results of all 14 calls in a single object on Parse under a key like cache.
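
Here's a sketch of that flow with the Parse JavaScript SDK. The ApiCache class name, field names, and 5-minute staleness threshold are illustrative; it uses fetch, so it assumes a reasonably modern browser, and since the hosted Parse.com service has since shut down, it also assumes a self-hosted Parse Server:

// Look up a cached GitHub response in Parse; refresh it if missing or stale.
Parse.initialize('YOUR_APP_ID', 'YOUR_JS_KEY');
Parse.serverURL = 'https://YOUR-PARSE-SERVER/parse';

var ApiCache = Parse.Object.extend('ApiCache');
var MAX_AGE_MS = 5 * 60 * 1000;

function getWithParseCache(url) {
  var query = new Parse.Query(ApiCache);
  query.equalTo('key', url);
  return query.first().then(function (entry) {
    if (entry && Date.now() - entry.get('stored').getTime() < MAX_AGE_MS) {
      // Cache hit and still fresh: parse the stored JSON string.
      return JSON.parse(entry.get('value'));
    }
    // Miss or stale: call GitHub, then write the result back to Parse.
    return fetch(url)
      .then(function (res) { return res.json(); })
      .then(function (data) {
        var row = entry || new ApiCache();
        row.set('key', url);
        row.set('stored', new Date());
        row.set('value', JSON.stringify(data));
        return row.save().then(function () { return data; });
      });
  });
}

// Usage:
getWithParseCache('https://api.github.com/users/octocat')
  .then(function (user) { console.log(user.name); });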

Licensed under: CC-BY-SA with attribution