Question

I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure how to go about it, as my scenario seems more complex than most caching scenarios. For example, I have a table of Locations and an API that retrieves locations via search, for an autocomplete:

/api/location/Londo

and the query would be something like

SELECT * FROM Locations WHERE Name like 'Londo%'

These locations change very infrequently, so I would like to cache them to avoid unnecessary trips to the database and improve the response time.

Looking at caching options, I am using the Windows Azure AppFabric system. The problem is that it's just a key/value cache: since I can only retrieve items by key, I can't actually use it for this scenario as far as I'm aware.

Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL database that could run as a cache for something like this to improve performance? Or should I cache the entire table/collection under a single key, using a data structure that assists with searching, and then search within the data after retrieving it?
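That last option can be sketched as follows (Python used purely for illustration; the location names and the single-key idea are assumptions, with a plain sorted list standing in for the cached collection):

```python
import bisect

# Pretend this is the full Locations table, fetched once and stored
# under a single cache key (e.g. "locations:all").
locations = ["Berlin", "Lisbon", "London", "Londonderry", "Luton"]
locations.sort()  # keep sorted so prefix lookups can use binary search

def search_by_prefix(names, prefix):
    """Return all names starting with prefix, via two binary searches."""
    lo = bisect.bisect_left(names, prefix)
    # chr(0x10FFFF) is a sentinel larger than any continuation of the prefix
    hi = bisect.bisect_right(names, prefix + chr(0x10FFFF))
    return names[lo:hi]

print(search_by_prefix(locations, "Londo"))  # ['London', 'Londonderry']
```

With the whole collection cached under one key, each autocomplete request becomes two binary searches in memory instead of a `LIKE 'x%'` query against the database.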


Solution

Since you want to follow best practices, consider using a search engine like Elasticsearch or Solr. Not only are they fast and manage their own caches, they are also better equipped with different kinds of search methods.

Using and managing your own cache is a good idea too, but I'd leave that as an optimisation rather than a solution to your problem (searching), because it only gives you speed, not ease of searching.

For example, right now you only want to search locations by a user's search string. If later you want to search locations by geocode (because mobile clients can send their position), you will need to write your own solution if you stay with your database; that kind of search is available from search engines out of the box. Another type of search you might want is faceted search, and there are more methods still (similar-sounding strings, etc.).
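For reference, the autocomplete search from the question might be expressed in Elasticsearch's query DSL roughly like this (the index name `locations` and field name `name` are assumptions about your mapping):

```json
POST /locations/_search
{
  "query": {
    "prefix": {
      "name": "londo"
    }
  }
}
```

The same engine would then let you bolt on geo queries or facets (aggregations) against the same index later, without writing your own search code.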

The hassle of going with a search engine is having to keep both the database and the search engine up to date. But that's a small price to pay.

Other tips

If you're building the SQL statement as a string, you can take a hash of it and use that as a key in a distributed cache.
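A minimal sketch of that idea (Python for illustration; a plain dict stands in for the distributed cache, and `run_query` stands in for your actual database call):

```python
import hashlib

cache = {}  # stands in for the distributed key/value cache

def cache_key(sql):
    # Hash the exact SQL text so any query string maps to a fixed-size key.
    return hashlib.sha256(sql.encode("utf-8")).hexdigest()

def get_locations(sql, run_query):
    key = cache_key(sql)
    if key not in cache:
        cache[key] = run_query(sql)  # only hit the database on a miss
    return cache[key]

sql = "SELECT * FROM Locations WHERE Name LIKE 'Londo%'"
rows = get_locations(sql, lambda q: ["London", "Londonderry"])  # fake DB call
print(rows)  # ['London', 'Londonderry']
```

Every distinct search string produces a distinct key, so repeated autocomplete requests for the same prefix are served from the cache.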

Depending on how many variations there are and how much memory you have, you might want to include a time-to-live (TTL) or a least-recently-used eviction policy to keep memory usage within limits.
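Both bounds can be combined in a small wrapper; here is an illustrative Python sketch (the class name and limits are made up, not part of any library):

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """Tiny cache: entries expire after ttl seconds, and the least
    recently used entry is evicted once max_items is exceeded."""

    def __init__(self, max_items=1000, ttl=300.0):
        self.max_items = max_items
        self.ttl = ttl
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._data[key]      # expired: drop it
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)
        self._data.move_to_end(key)
        while len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict least recently used

cache = TtlLruCache(max_items=2, ttl=60.0)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so "b" becomes least recently used
cache.put("c", 3)      # over the limit: evicts "b"
print(cache.get("b"))  # None
```

The TTL bounds staleness while the LRU bound caps memory, which matters when autocomplete produces one cache entry per distinct prefix typed.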

However, @GrandmasterB is right -- most databases do really aggressive caching of query results, so you might end up duplicating caching that's already happening. The only improvement that comes to mind is caching the results locally in RAM (with a least-recently-used policy) to save a round trip to the database.

But no, in general that's not a bad use of caching. The caveat is that you might be duplicating caching that's already happening. Also, if the query results change often, you run into the usual data-duplication problem: the cached answer getting out of sync with the "real" answer.

If you wrap your queries in an ORM like Hibernate or EclipseLink, the ORM will do some caching for you locally in RAM, saving you network I/O latency.

When a request does reach the SQL server, it will do another round of caching, saving you disk I/O.

If, on the other hand, your data changed frequently, caching would help much less, and you would need to think about other ways to improve performance.

Licensed under: CC-BY-SA with attribution