I have a web.py app running on OpenShift with scaling enabled. The HAProxy process constantly requests the index document over HTTP/1.0 to check whether my app is up. The logs look like this:
==> python/logs/appserver.log <==
127.2.1.129 - - [2013-12-25 13:32:09] "GET / HTTP/1.0" 200 7297 1.240403
127.2.1.129 - - [2013-12-25 13:32:12] "GET / HTTP/1.0" 200 7297 0.676904
127.2.1.129 - - [2013-12-25 13:32:15] "GET / HTTP/1.0" 200 7297 1.421824
127.2.1.129 - - [2013-12-25 13:32:18] "GET / HTTP/1.0" 200 7297 0.730807
127.2.1.129 - - [2013-12-25 13:32:21] "GET / HTTP/1.0" 200 7297 1.153252
127.2.1.129 - - [2013-12-25 13:32:24] "GET / HTTP/1.0" 200 7297 0.828387
127.2.1.129 - - [2013-12-25 13:32:28] "GET / HTTP/1.0" 200 7297 1.390523
127.2.1.129 - - [2013-12-25 13:32:31] "GET / HTTP/1.0" 200 7297 0.964495
127.2.1.129 - - [2013-12-25 13:32:35] "GET / HTTP/1.0" 200 7297 2.574379
127.2.1.129 - - [2013-12-25 13:32:39] "GET / HTTP/1.0" 200 7297 2.011549
Here's the problem: my app creates a separate session instance for every request. I'm using DiskStore for sessions, so each request leaves a new file on disk, and after about 3 days of this I hit OpenShift's 80,000-file limit. The strange part is that my index getter never uses or accesses the session variable at all. In fact, I've tried changing it to just return "Hello World", and the session files still get created every 2-5 seconds.
I need to either A) limit how often HAProxy is testing my site, or, more ideally, B) not create a session for every connection.
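For option A, the knobs are ordinary HAProxy directives. A hedged sketch of what that might look like, assuming the scaled-app layout where the haproxy cartridge's config is editable inside the gear (the file path, backend name, and address below are assumptions, and /health is a hypothetical lightweight route you would add to the app):

```
# haproxy/conf/haproxy.cfg  (path is an assumption; check your gear)
backend express
    mode http
    # Probe a cheap dedicated endpoint instead of the full index page
    option httpchk GET /health
    # Slow the probe down: "inter 30s" checks every 30 seconds
    # instead of every few seconds
    server local-gear 127.2.1.129:8080 check inter 30s
```

Slowing the check only delays the problem, though; the session files still accumulate, just more slowly.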
Any ideas? I have a pretty standard web.py session initializer at the top of my main app file (not within any method):
session = web.session.Session(app, web.session.DiskStore(os.path.join(curdir,'sessions')), initializer={'request_token': '', 'request_token_secret': ''})
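For option B, one possible approach (a sketch, not web.py's own mechanism) is to wrap the store so that writes are dropped when the session data still matches the initializer defaults, i.e. no handler ever touched it. The `LazyStore` name and the comparison logic below are my own assumptions; it relies only on the store interface web.py's `Session` uses (`in`, `[]`, `cleanup`):

```python
class LazyStore:
    """Wrap a web.py-style session store (e.g. DiskStore) and skip
    persisting sessions whose user data still equals the initializer
    defaults, so health-check requests never create session files.
    This is a hypothetical workaround, not part of web.py."""

    def __init__(self, store, initializer):
        self.store = store
        # Only these keys count as "real" session data; web.py also
        # stores bookkeeping fields (ip, session_id) that always vary.
        self.defaults = dict(initializer)

    def __contains__(self, key):
        return key in self.store

    def __getitem__(self, key):
        return self.store[key]

    def __setitem__(self, key, value):
        # Persist only if some initializer key was changed by a handler,
        # or if the session was already persisted earlier.
        touched = any(value.get(k) != v for k, v in self.defaults.items())
        if touched or key in self.store:
            self.store[key] = value

    def __delitem__(self, key):
        del self.store[key]

    def cleanup(self, timeout):
        self.store.cleanup(timeout)
```

Wiring it in would then look something like `web.session.Session(app, LazyStore(web.session.DiskStore(...), initializer), initializer=initializer)`; untested, and it won't stop HAProxy from hitting the app, only from leaving a file behind per probe.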