Question

I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.

When I make requests to the webserver, every other response will be a 500 error.

The error is:

Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac


The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac

The error is triggered by a simple Flask-SQLAlchemy method:

result = models.Event.query.get(id)

uWSGI is managed by supervisor, which has this config:

[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true

and uWSGI's config looks like this:

[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv =  /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data

The furthest I've gotten is that it has something to do with uWSGI's forking, but beyond that I'm not clear on what needs to be done.

Solution

The issue ended up being uWSGI's forking.

When running multiple processes with a master process, uWSGI initializes the application in the master and then copies it into each worker process. The problem is that if you open a database connection while initializing your application, the workers all inherit that same connection after the fork, and sharing one connection between multiple processes is what causes the error above.
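
To illustrate, here is a minimal sketch of the anti-pattern (the query and connection URL are hypothetical, not from the original app):

# application.py -- DON'T do this: the query runs at import time,
# i.e. in the uWSGI master process, before the workers are forked.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://user:pass@localhost/mydb'
db = SQLAlchemy(app)

with app.app_context():
    # This checks a connection out of the pool; after the fork, every
    # worker inherits the same underlying (SSL) socket.
    db.session.execute(text('SELECT 1'))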

The solution is to set the lazy configuration option for uwsgi, which forces a complete loading of the application in each process:

lazy

Set lazy mode (load apps in workers instead of master).

This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.

There's also a lazy-apps option:

lazy-apps

Load apps in each worker instead of the master.

This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.

This uWSGI configuration ended up working for me:

[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv =  /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data

# the fix
lazy = true
lazy-apps = true

OTHER TIPS

As an alternative, you can dispose of the engine. This is how I solved the problem.

Such issues can happen if there is a query during the creation of the app, that is, at import time in the module that creates the app itself. If that happens, the engine allocates a pool of connections before uWSGI forks.

By invoking engine.dispose(), the connection pool itself is closed, and new connections are opened as soon as someone starts making queries again. So if you call it at the end of the module where you create your app, new connections will be created after the uWSGI fork.
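
A minimal sketch of that approach, assuming a Flask-SQLAlchemy db object (the names and URL are illustrative):

# application.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://user:pass@localhost/mydb'
db = SQLAlchemy(app)

# ... routes and any import-time queries ...

# Last statement in the module: close whatever connections were opened
# during import. Each uWSGI worker opens fresh ones after the fork.
with app.app_context():
    db.engine.dispose()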

I am running a Flask app using Gunicorn on Heroku. My application started exhibiting this problem when I added the --preload option to my Procfile. When I removed that option, my application resumed functioning as normal.
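
For reference, the offending Procfile line looked something like this (the module and callable names are hypothetical):

web: gunicorn --preload application:app

--preload makes Gunicorn load the application in the master before forking, just like uWSGI's default behavior above, so connections opened at import time end up shared across workers. Without the flag, each worker imports the app and opens its own connections.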

Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons slightly different from the people who have posted and answered. In my setup, I was using Gunicorn as a WSGI server for a Flask application, in which I was offloading some intense database operations to a Celery worker. The error would come from the Celery worker.

From reading a lot of the answers here and looking at the psycopg2 and SQLAlchemy session documentation, it became apparent to me that it is a bad idea to share an SQLAlchemy session between separate processes (the Gunicorn worker and the Celery worker in my case).

What ended up solving this for me was creating a new session inside the Celery task function, so it uses a fresh session each time it is called, and also destroying the session after every web request, so Flask uses one session per request. The overall solution looked like this:

Flask_app.py

# "session" is the application's shared SQLAlchemy session object
# (e.g. a scoped_session); close it at the end of every request.
@app.teardown_appcontext
def shutdown_session(exception=None):
    session.close()

celery_func.py

from sqlalchemy.exc import IntegrityError

@celery_app.task(bind=True, throws=(IntegrityError,))
def access_db(self, entity_dict, tablename):
    # Session is a sessionmaker factory, so every task invocation
    # gets its own session (and its own connection from the pool).
    with Session() as session:
        try:
            # Construction of ORM_obj from entity_dict / tablename is
            # elided in the original answer.
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
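
For completeness, Session above is assumed to be a sessionmaker factory created alongside the engine, something like this (the connection URL is hypothetical; the context-manager form requires SQLAlchemy 1.4+):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgresql+psycopg2://user:pass@localhost/mydb')
Session = sessionmaker(bind=engine)

Because every Celery task checks out and closes its own session from this factory, no session (or underlying connection) ever crosses a process boundary.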
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow