In PHP, the singleton pattern is generally considered bad practice.
See ref: Best practice on PHP singleton classes
That said, your SFTP class does not appear to actually implement singleton logic in the example above.
Your next challenge is that when you request a webpage the way you are, the entire page is reloaded. Without some sort of server-side caching, there is no way in your current architecture to update only a portion of the page. As mentioned in the comments, AJAX is going to be your tool of choice for this.
Singletons simply do not exist BETWEEN browser calls to the web server in PHP. Every time your browser hits the web server, it will necessarily create a new SFTP connection. Every time your web server completes the request, it will destroy the SFTP connection it was using.
The approach you are looking for:
- Have a page that loads the initial UI (let's call it a view) in the browser. At a minimum you will need to provide some basic configuration; you might simply load everything on this first call.
- Develop some other page(s) that just serve data, preferably in JSON format.
- Have JavaScript (AJAX) in the view that makes discrete calls back to the server for data (your tables).
- As you click around the page, the AJAX makes the appropriate discrete call to the correct page to serve up the right data, then refreshes/updates the correct element in the UI.
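As a sketch of one of those data-serving pages, here is a minimal JSON endpoint. The function name, file name, and payload shape are assumptions for illustration; in the real endpoint you would also send a `Content-Type: application/json` header.

```php
<?php
// Hypothetical data endpoint (e.g. listing.php) called by the view's AJAX.
// Builds the JSON payload for one directory listing.
function buildListingResponse(array $listing): string
{
    return json_encode([
        'status'  => 'ok',
        'count'   => count($listing),
        'entries' => $listing,
    ]);
}

// Example payload; in practice this would come from your cache
// rather than a fresh hit on the SFTP server.
$listing = [
    ['name' => 'report.csv', 'size' => 1024],
    ['name' => 'data.json',  'size' => 2048],
];
echo buildListingResponse($listing);
```

The view's AJAX call then parses this JSON and updates only the table element, leaving the rest of the page alone.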
I am personally a fan of Memcached for caching session data between server requests. That said, there are a large number of caching services that can be used to hold the directory listings on the web server until they need to be refreshed from the SFTP server.
As you research the optimum cache solution for your challenge, it is worthwhile to make sure you understand the difference between opcode caching (there are many opcode caches available: APC, XCache, eAccelerator, and Zend Platform) and data caching (session, variable, userland; we recommend memcached).
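To make the data-caching side concrete, here is a minimal sketch of a listing cache. Passing in a connected Memcached instance is the production path; with none supplied it degrades to a per-request array so the idea is visible without a running memcached server. The class name, key scheme, and TTL are made up for illustration.

```php
<?php
// Sketch of *data* caching (as opposed to opcode caching).
class ListingCache
{
    private $memcached; // Memcached instance, or null
    private $fallback = [];

    // Pass a connected Memcached instance in production; with null the
    // cache degrades to a per-request array (illustration/testing only).
    public function __construct($memcached = null)
    {
        $this->memcached = $memcached;
    }

    public function set(string $key, array $listing, int $ttl = 300): void
    {
        if ($this->memcached !== null) {
            $this->memcached->set($key, $listing, $ttl);
        } else {
            $this->fallback[$key] = $listing; // lives only for this request
        }
    }

    public function get(string $key): ?array
    {
        if ($this->memcached !== null) {
            $value = $this->memcached->get($key);
            return $value === false ? null : $value;
        }
        return $this->fallback[$key] ?? null;
    }
}
```

In production you would construct it with something like `new Memcached()` after `addServer('127.0.0.1', 11211)`, so the listing survives between requests.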
However, if your data is large enough (>1 MB), you typically don't want to cache it in something like memcached; you would want to cache it to the local filesystem. Here is an example of how I recently did this for a very large array.
/**
 * Serializes the array, then writes it to disk, returning the file path.
 *
 * @param array $array
 * @param string|null $filePath
 * @return string
 */
function putCacheData(array $array, $filePath = NULL){
    if (empty($filePath)){
        $filePath = tempnam(sys_get_temp_dir(), 'IMPORT');
    }
    $serializedData = serialize($array);
    file_put_contents($filePath, $serializedData);
    return $filePath;
}
/**
 * Reads the file, unserializes the data, and returns the array.
 *
 * @param string $filePath
 * @return array|false
 */
function getCacheData($filePath){
    $array = array();
    if (empty($filePath)){
        logmessage("The filepath: [$filePath] is empty!");
        return $array;
    }
    if (! is_file($filePath)){
        putCacheData($array, $filePath);
        return $array;
    }
    return unserialize( file_get_contents( $filePath ) );
}
Then we just store the $filePath in the user's session data (which goes to memcached for us), and as I bootstrap each request I can check the session for the path, load the cached data, determine whether it is expired, and optionally refresh it. Just be sure to write the data to file before the active request ends.
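A minimal sketch of that bootstrap check, using the two functions above plus the file's modification time for expiry. The TTL value, session key, and function name are assumptions; the refresh callback stands in for whatever re-queries the SFTP server.

```php
<?php
// Hypothetical bootstrap logic: look up the cache file path from the session,
// reload the listing if the file is missing or older than the TTL.
// Assumes putCacheData()/getCacheData() from above are available.
define('LISTING_TTL', 300); // seconds; made-up value

function loadListingFromSession(array &$session, callable $refresh): array
{
    $filePath = $session['listing_cache_path'] ?? null;

    if ($filePath !== null
        && is_file($filePath)
        && (time() - filemtime($filePath)) < LISTING_TTL) {
        return getCacheData($filePath);  // still fresh, serve from disk
    }

    $listing = $refresh();               // e.g. re-query the SFTP server
    $session['listing_cache_path'] = putCacheData($listing, $filePath);
    return $listing;
}
```

The expensive SFTP round trip then only happens when the file cache has expired; everything else is a local read.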