Question

Similar to this question, I'd like to limit the execution time of a function, preferably with microsecond accuracy, in C. I imagine C++ exceptions could be used to achieve a result similar to this Python solution; though not ideal, such an approach is in any case wholly unavailable in plain C.

I wonder, then: how might I interrupt the execution of a function after a certain time interval in C on a POSIX system? For relatively simple situations a bit of silly business works just fine, but it adds a fair amount of code orthogonal to the actual solution. Let's say I have a function like so:

void boil(egg *e) {
    while (true)
        do_boil(e);
}

I want to run boil on an egg *, interrupting it every 50 μs to do something like so:

egg *e = init_egg();
while (true) {
    preempt_in(50, (void) (*boil), 1, e);
    /* Now boil(e) runs for 50 μs, then control
       flow returns to the statement following
       the call to preempt_in.
     */
    if (e->cooked_val > 100)
        break;
}

I realize that pthreads could be used to do this, but I'm rather more interested in avoiding their use. I could switch between ucontext_ts in a SIGALRM handler, but the POSIX standard notes that setcontext/swapcontext must not be called from a signal handler, and indeed I see differing behavior between Linux and Solaris systems when doing so.

Is this effect possible to achieve? If so, in a portable manner?


Solution

Just to note that the general capability you're looking for here is called cost enforcement. See, for example, this article by Wellings, or Leung et al.'s excellent book. The other answers focus on achieving it in userspace; some RTOSes and languages support it as a general mechanism (Linux does not).

One example OS that provides this is the AUTOSAR OS (spec linked). Note that this OS provides execution-time enforcement, which is slightly different from deadline enforcement. Execution-time enforcement keeps getting harder, because it relies on some ability to measure the actual cost expended (usually with hardware cooperation). Given the complexity of today's processors, measuring this is difficult and costly, and the measurements themselves (due to nonlinear execution and all sorts of other cool stuff) are hard to interpret, which makes it difficult even to compute a worst-case or common-case estimate of a particular section of code's execution time.

Slightly off-topic, but Ada provides a more rigorous set of capabilities at the language level here. That doesn't directly help you in C, but you could look at how those Ada requirements have been implemented on Linux. The Ada language spec is unique in providing a rationale document; see the section on real-time preemptive abort as a departure point.

OTHER TIPS

You can either use threads, or have the function poll a timer (or a global variable set by a SIGALRM handler), then save its state and return when the allotted time has expired. The ucontext functions are obsolete and should not be used in new code at all, much less from signal handlers.
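A minimal sketch of that polling variant, assuming the egg, init_egg, and do_boil names from the question (stubbed out here so the example compiles): a SIGALRM handler sets a flag that boil checks between calls to do_boil, and setitimer arms a one-shot 50 μs timer before each slice.

#include <signal.h>
#include <string.h>
#include <sys/time.h>

/* Stand-ins for the question's egg type and helpers, so the example compiles. */
typedef struct { int cooked_val; } egg;
static egg the_egg;
static egg *init_egg(void) { return &the_egg; }
static void do_boil(egg *e) { e->cooked_val++; }

/* Set asynchronously by the SIGALRM handler; polled by boil(). */
static volatile sig_atomic_t expired;

static void on_alarm(int sig) {
    (void) sig;
    expired = 1;
}

/* boil() now returns when the timer fires instead of looping forever. */
static void boil(egg *e) {
    while (!expired)
        do_boil(e);
}

int main(void) {
    egg *e = init_egg();

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    while (e->cooked_val <= 100) {
        /* Arm a one-shot timer for 50 microseconds, then boil until it fires. */
        struct itimerval slice = { { 0, 0 }, { 0, 50 } };
        expired = 0;
        setitimer(ITIMER_REAL, &slice, NULL);
        boil(e);
    }
    return 0;
}

Bear in mind that real timer granularity is often coarser than 50 μs, and that boil only notices the expiry between calls to do_boil, so each do_boil call must itself be short.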

The solution I desire suddenly occurs to me: goto! I'll set up a jump point just after the function I wish to limit, set a timer, and in the signal handler that deals with the SIGALRM simply jump to the instruction after the function. (In practice this means sigsetjmp/siglongjmp rather than a literal goto, since a goto cannot cross function boundaries.)
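A minimal sketch of that idea, again stubbing out the question's egg helpers: sigsetjmp marks the point just after the call to boil, and the SIGALRM handler siglongjmps back to it.

#include <setjmp.h>
#include <signal.h>
#include <string.h>
#include <sys/time.h>

/* Stand-ins for the question's egg type and helpers, so the example compiles. */
typedef struct { int cooked_val; } egg;
static egg the_egg;
static egg *init_egg(void) { return &the_egg; }
static void do_boil(egg *e) { e->cooked_val++; }

/* The "jump point" just after the call to boil(). */
static sigjmp_buf preempt_point;

static void on_alarm(int sig) {
    (void) sig;
    /* Nonzero arg makes sigsetjmp() return 1; the saved signal mask is restored. */
    siglongjmp(preempt_point, 1);
}

/* The function to be limited: it never returns on its own. */
static void boil(egg *e) {
    for (;;)
        do_boil(e);
}

int main(void) {
    egg *e = init_egg();

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    while (e->cooked_val <= 100) {
        /* Save the signal mask (second arg = 1) so siglongjmp() restores it,
           un-blocking SIGALRM after each jump out of the handler. */
        if (sigsetjmp(preempt_point, 1) == 0) {
            struct itimerval slice = { { 0, 0 }, { 0, 50 } };   /* one-shot 50 us */
            setitimer(ITIMER_REAL, &slice, NULL);
            boil(e);    /* control comes back here only via siglongjmp() */
        }
    }
    return 0;
}

Note that boil is abandoned mid-operation, so any state it was mutating may be left inconsistent, and jumping out of the handler is only safe if the interrupted code was not in the middle of an async-signal-unsafe call.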

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow