Question

I'm writing a web UI for data analysis tasks.

Here's the way it's supposed to work:

After a user specifies parameters like the dataset and learning rate, I create a new task record, then an executor for this task is started asynchronously (the executor may take a long time to run), and the user is redirected to another page.

After searching for an async library for Python, I started with eventlet. Here's what I wrote in a Flask view function:

db.save(task)
eventlet.spawn(executor, task)
return redirect("/show_tasks")

With the code above, the executor didn't execute at all.

What may be the problem with my code? Or should I try something else?


Solution 2

You'll need to patch some system libraries in order to make eventlet work. Here is a minimal working example (also as gist):

#!/usr/bin/env python 

from flask import Flask 
import time 
import eventlet 

eventlet.monkey_patch() 

app = Flask(__name__) 
app.debug = True 

def background(): 
    """ do something in the background """ 
    print('[background] working in the background...') 
    time.sleep(2) 
    print('[background] done.') 
    return 42 

def callback(gt, *args, **kwargs): 
    """ this function is called when results are available """ 
    result = gt.wait() 
    print("[cb] %s" % result) 

@app.route('/') 
def index(): 
    greenth = eventlet.spawn(background) 
    greenth.link(callback) 
    return "Hello World" 

if __name__ == '__main__': 
    app.run() 

More on that:

One of the challenges of writing a library like Eventlet is that the built-in networking libraries don’t natively support the sort of cooperative yielding that we need.

OTHER TIPS

While you have been given direct solutions, I will try to answer your first question and explain why your code does not work as expected.

Disclosure: I currently maintain Eventlet. This answer contains a number of simplifications to keep it to a reasonable size.

Brief introduction to cooperative multithreading

There are two ways to do multithreading, and Eventlet takes the cooperative approach. At the core is the Greenlet library, which basically allows you to create independent "execution contexts". One could think of such a context as the frozen state of all local variables plus a pointer to the next instruction. Basically, multithreading = contexts + scheduler. Greenlet provides the contexts, so we need a scheduler: something that decides which context should occupy the CPU right now. It turns out that to make those decisions we also have to run some code, which means a separate context (green thread). This special green thread is called a Hub in the Eventlet code base. The scheduler maintains an ordered set of contexts that need to run ASAP (the run queue) and a set of contexts that are waiting for something (e.g. network IO or a time-limited sleep) to finish.
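
To make that concrete, here is a tiny sketch of my own (not part of the original answer) using the raw greenlet library that Eventlet builds on: each greenlet is an independent context that resumes exactly where it left off when something switches to it.

from greenlet import greenlet

def task_a():
    print('A: start')
    gr_b.switch()          # explicitly hand control to context B
    print('A: resumed')    # continues here when B switches back

def task_b():
    print('B: start')
    gr_a.switch()          # hand control back to context A

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()              # prints: A: start, B: start, A: resumed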

But since we are doing cooperative multitasking, one context will execute indefinitely unless it explicitly yields to another. That would be a very sad style of programming, and also by definition incompatible with existing libraries (pointing at they-know-who); so what Eventlet does is provide green versions of common modules, changed in such a way that they switch to the Hub instead of blocking everything. Some time may then be spent in other green threads or in the Hub's wait-for-external-events implementation, after which the Hub switches back to the green thread that originated the event, and it continues execution.
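
As an illustration (mine, not from the answer): with the green eventlet.sleep() each worker yields to the Hub during its wait, so the two green threads interleave. If the loop used an unpatched time.sleep() instead, the first worker would block the whole process and finish all its iterations before the second one even started.

import eventlet

def worker(name):
    for i in range(3):
        print(name, i)
        eventlet.sleep(0.1)   # green sleep: switch to the Hub, let others run

a = eventlet.spawn(worker, 'a')
b = eventlet.spawn(worker, 'b')
a.wait()
b.wait()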

End. Now back to your problem.


What eventlet.spawn actually does: it creates a new execution context (basically, it allocates an object in memory) and tells the scheduler to put this context into the run queue, so that at the first possible moment the Hub will switch to the newly spawned function. Your code never provides such a moment. There is no place where you explicitly give up execution to other green threads; in Eventlet this is usually done via eventlet.sleep(). And since you don't use green versions of the common modules, there is no chance to yield implicitly while other code waits. The most appropriate (if not the only) place would be your WSGI server's accept loop: it should give other green threads a chance to run while waiting for the next request. The eventlet.monkey_patch() mentioned in the first answer is just a convenient way to replace all (or a subset of) the common modules with their corresponding green versions.
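
Applied to the snippet from the question, a minimal sketch might look like the following. This is my own illustration, not the answer's code: db, task, executor and the "/show_tasks" route are the question's names and are assumed to exist, and the route name here is made up.

import eventlet
eventlet.monkey_patch()            # ideally the very first statement in the module

from flask import Flask, redirect

app = Flask(__name__)

@app.route('/create_task', methods=['POST'])   # route name invented for this sketch
def create_task():
    task = ...                     # build the task record from the submitted form (not shown)
    db.save(task)                  # db and executor are the question's own objects
    eventlet.spawn(executor, task)
    eventlet.sleep(0)              # explicitly yield once so the executor gets a chance to start;
                                   # with monkey_patch() the server's own waits would also yield
    return redirect("/show_tasks")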


Unwanted opinion on overall design

In a separate section, so it's easy to skip. If you are building error-resistant software, you usually want to limit the execution time of spawned threads (including but not limited to "green" ones) and processes, and at least report (log) or react to their unhandled errors. In the provided code, your spawned green thread may technically run in the next moment or five minutes later (again, because nobody yields the CPU), or fail with an unhandled exception. Luckily, Eventlet provides solutions to both problems: Timeout and with_timeout() let you limit the waiting time (remember, if it does not yield, you can't possibly limit it), and GreenThread.link() lets you catch all exceptions. It may be tempting (it was for me) to re-raise exceptions in the "main" code, and link() allows that easily, but consider that the exceptions would be raised from sleep and IO calls - the places where you yield to the Hub. This can produce some really counter-intuitive tracebacks.
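
A minimal sketch of those two safeguards (my own example; flaky_job is made up): with_timeout() bounds how long we wait on the green thread, and link() hands us the result or the unhandled exception once it finishes.

import eventlet

def flaky_job():
    eventlet.sleep(1)
    raise RuntimeError('boom')

def on_done(gt):
    try:
        print('job finished with', gt.wait())
    except Exception as exc:               # unhandled errors from the green thread land here
        print('job failed:', exc)

gt = eventlet.spawn(flaky_job)
gt.link(on_done)                           # callback runs when the green thread finishes
try:
    eventlet.with_timeout(5, gt.wait)      # wait at most 5 seconds for the result
except RuntimeError as exc:
    print('caught in main:', exc)          # re-raised here, i.e. at the point where we yielded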

Eventlet may indeed be suitable for your purposes, but it doesn't just fit in with any old application; Eventlet requires that it be in control of all your application's I/O.

You may be able to get away with either

  1. Starting Eventlet's main loop in another thread, or even

  2. Not using Eventlet and just spawning your task in another thread.

Celery may be another option.
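
If option 2 above is all you need, a minimal sketch using only the standard library might be as simple as this (executor and task are the question's names; error handling and result collection are left out):

import threading

def start_executor(task):
    # run the (possibly long) executor outside the request/response cycle
    t = threading.Thread(target=executor, args=(task,), daemon=True)
    t.start()
    return t

daemon=True means the thread won't keep the interpreter alive at shutdown; drop it if the task must be allowed to finish even when the server exits.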

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow