What strategies do you use for fetching and caching paginated collections, specifically in Angular or Restangular?

StackOverflow https://stackoverflow.com/questions/22182201

Problem

Let's say an app consumes data from a REST api that exposes paginated collections, and the number of items in a collection can be infinitely large (i.e. too many items to fetch all at once and store in a browser session).

What are good strategies for fetching, caching and invalidating the cache of such data?

Say, the API is like,

GET /api/v1/users?page=1&per_page=20

With a response like the one below (i.e. it includes the collection data plus paging metadata):

{
  "paging": {
    "per_page": 20,
    "page": 1,
    "previous": null,
    "self": "/api/v1/users?per_page=20&page=1",
    "next": "/api/v1/users?per_page=20&page=2"
  },
  "data": [
    // first 20 users
  ]
}

The client may or may not load the collection in contiguous pages, one page at a time. The users in the collection may be changed by any client. Eventual consistency is sufficient, and I'm not concerned about write conflicts at this time.
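For concreteness, a client walking the collection one page at a time via the paging metadata might look something like this (a minimal sketch using Angular's $http; the function and callback names are illustrative, not part of the API):

function fetchPage(url) {
  // fetch one page and hand back the items plus the next-page link
  return $http.get(url).then(function (response) {
    return {
      users: response.data.data,
      next: response.data.paging.next
    };
  });
}

// load the first page, then follow paging.next only when more data is needed
fetchPage('/api/v1/users?per_page=20&page=1').then(function (page) {
  // render page.users; call fetchPage(page.next) for the following page
});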

The question is in the context of an AngularJS app, but the scenario is generic.

There is no accepted solution

Other tips

What I'd do (and have done) is use Angular's $cacheFactory, with one cache entry per API endpoint. So if the current URL is /api/v1/users?per_page=20&page=1, the cache key would be /api/v1/users, without any pagination parameters. Then, once the response is returned, do the following in the function that handles the data:

// add a custom insert method somewhere; it splices all items of an array in at the given index
Array.prototype.insert = function (index, items) {
  Array.prototype.splice.apply(this, [index, 0].concat(items));
};

// $cacheFactory creates named cache instances; get/put live on the instance, not on the factory
var cache = $cacheFactory('users');
var cachedData = cache.get(key) || [];

// pages are 1-indexed, so page 1 lands at offset 0
var insertPosition = pageSize * (currentPage - 1);

cachedData.insert(insertPosition, returnedData);

cache.put(key, cachedData);

Probably a lot simpler than it would have to be in a real-life scenario, but you get the idea.
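To make the wiring concrete, here is a rough sketch of how that could sit inside a service, with a simple time-based invalidation to cover the question's eventual-consistency requirement. UserService, PAGE_SIZE, the cache name, the 60-second TTL and the app module are illustrative assumptions, and the sketch assumes pages are loaded contiguously from page 1, as the offset arithmetic implies:

app.factory('UserService', function ($http, $q, $cacheFactory) {
  var PAGE_SIZE = 20;            // illustrative; should match the API's per_page
  var TTL_MS = 60 * 1000;        // illustrative staleness budget; eventual consistency is acceptable
  var key = '/api/v1/users';     // cache key: the endpoint without pagination parameters
  var cache = $cacheFactory('userCache');
  var fetchedAt = 0;             // timestamp of the last successful fetch

  function getPage(page) {
    // invalidation: drop the whole cached collection once it's older than the TTL
    if (Date.now() - fetchedAt > TTL_MS) {
      cache.remove(key);
    }

    var cached = cache.get(key) || [];
    var start = PAGE_SIZE * (page - 1);
    var slice = cached.slice(start, start + PAGE_SIZE);

    if (slice.length === PAGE_SIZE) {
      return $q.when(slice);     // cache hit: the page is fully present and fresh
    }

    return $http.get(key, { params: { per_page: PAGE_SIZE, page: page } })
      .then(function (response) {
        cached.insert(start, response.data.data);  // insert() from the snippet above
        cache.put(key, cached);
        fetchedAt = Date.now();
        return response.data.data;
      });
  }

  return { getPage: getPage };
});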
