Question

I'm an IT guy (read: not a professional programmer) and have made an internal monitoring web tool that lets users search for printers on our print servers. Based on the filtering criteria given, it returns a series of divs representing the matching printers, and each printer div contains a bunch of live information about that printer.

JavaScript handles actually populating the divs in an asynchronous, AJAX-y way, because live information is being polled from the printers, and that can take a while for each printer; different printers respond faster or slower, or not at all.

Each AJAX call hits a PHP script which, among other things, pulls data from one or more CSV files describing how to talk to the different printers. This design means that every single printer polled in a given search re-reads those files, and repeats a bunch of processing of the data pulled from them.
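
To make the repeated work concrete, each request presumably does something along these lines (the file name, columns, and request parameter here are illustrative, not the tool's actual code):

```php
<?php
// Illustrative only: the CSV file name, column layout, and request
// parameter are assumptions, not the tool's actual code.
$printerId = $_GET['printer'] ?? '';

// Every AJAX call re-reads and re-parses the same CSV file(s)...
$printers = [];
foreach (array_map('str_getcsv', file('printers.csv', FILE_IGNORE_NEW_LINES)) as $row) {
    $printers[$row[0]] = ['host' => $row[1], 'protocol' => $row[2]];
}

// ...even though each call only needs the one printer it is polling.
$target = $printers[$printerId] ?? null;
```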

Granted, the tool works fine like this, and has for years, but I've always wanted to optimize it. The repeated file reads seem wasteful. But given the AJAX implementation, I've not been able to conceive of a way around this. It would be nicer if this data could somehow be read only once, stored in memory, and accessed as necessary by all AJAX calls prompted by a given search. But I have no idea how that would work, since each call is a separate PHP process.

I suppose an efficient database (instead of using files) is the obvious answer, but I've always avoided that for several reasons:

  • Nothing about this tool needs to store new information, or bank historical information. All the data kept in the CSV files is only updated occasionally, by an external scheduled task, or manually as necessary.
  • Everything about this tool is intended to be real-time data (i.e. uncached, and user-agnostic).
  • A database is just another dependency that requires maintenance, and it is separate from the tool's text files, lowering portability and increasing the complexity of maintaining the tool.
  • It still doesn't solve the problem of doing a bunch of repeated (albeit asynchronous) processing of the pulled data.

So maybe my preferences stated above preclude me from the privilege of making the desired improvements, and that's fine. But I wanted to ask if there's any potential solutions you can think of that do fit into this design.

Thanks for your time.

Solution

What you are looking for is called caching, and you've got two options for it: client-side and server-side.

For client-side caching, you would have the JavaScript run the normal AJAX flow, but then persist the returned data into one of the browser's storage mechanisms (cookies, localStorage, etc.). I'd probably go with localStorage for this since it's easy to use and holds far more data than cookies. Then, in the JavaScript code, you check whether the data is already in localStorage, and if so, you skip making the AJAX call entirely and just return the data from localStorage.
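
A minimal sketch of that check-then-fetch flow, assuming a hypothetical /printer-status.php endpoint and adding a short 60-second expiry so the cached data doesn't clash too badly with the tool's real-time goal:

```javascript
// Minimal sketch of a localStorage cache wrapped around an AJAX call.
// The endpoint, cache key scheme, and 60-second expiry are illustrative.
function getPrinterStatus(printerId) {
  var key = 'printerStatus:' + printerId;
  var cached = localStorage.getItem(key);
  if (cached) {
    var entry = JSON.parse(cached);
    // Only reuse the entry while it is reasonably fresh.
    if (Date.now() - entry.savedAt < 60 * 1000) {
      return Promise.resolve(entry.data);
    }
  }
  // Cache miss (or stale entry): fall back to the normal AJAX flow.
  return fetch('/printer-status.php?id=' + encodeURIComponent(printerId))
    .then(function (response) { return response.json(); })
    .then(function (data) {
      localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), data: data }));
      return data;
    });
}
```

One caveat given the question's real-time requirement: without some expiry like the one above, the user would keep seeing stale printer data indefinitely.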

For server-side caching, you will basically tell the PHP runtime to return the result of a page load (fired via AJAX in your case) without having to re-read or re-process all of that information. I don't know PHP well enough to suggest any particular code, but caching is a thoroughly standard practice, so just google "cache page output in PHP" and see what you find. Your JavaScript won't need to change at all.
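
For illustration only (the answer above defers to a web search, and the cache path, lifetime, and CSV file name here are all assumptions), a simple file-based output cache in PHP might look something like this:

```php
<?php
// Minimal sketch of file-based output caching in PHP.
// The cache path, CSV file name, and 60-second lifetime are assumptions.
$cacheFile = sys_get_temp_dir() . '/printer_data_cache.json';
$ttl = 60; // seconds

if (is_readable($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    // Cache hit: serve the stored result without touching the CSVs.
    header('Content-Type: application/json');
    readfile($cacheFile);
    exit;
}

// Cache miss: do the expensive work (read and process the CSV files).
$rows = array_map('str_getcsv', file('printers.csv', FILE_IGNORE_NEW_LINES));
$result = json_encode($rows);

// Store the result so subsequent AJAX calls can skip the work, then return it.
file_put_contents($cacheFile, $result, LOCK_EX);
header('Content-Type: application/json');
echo $result;
```

Because the cache file lives on the server, every AJAX call, and every user, shares the same processed data, which also speaks to the question's "each call is a separate PHP process" concern.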

Assuming you can find some PHP caching example, I'd go that route personally.

Licensed under: CC-BY-SA with attribution