Is the ability to recurse a function of the processor or the programming language/compiler or both?

StackOverflow https://stackoverflow.com/questions/1884831

Question

Is the ability to recursively call a function a capability inherent to the processor, to the programming language/compiler, or to both? Perhaps both need elements to support recursion?

I've always been under the impression that the ability to recursively call a function is implemented purely in the programming language: in how it lays out its runtime stack and tracks when and where to return. Am I correct in assuming so, or do processors have specific logic that allows for recursion?


Solution

A processor just jumps where the code tells it to. Recursion is definitely a language function.

It's a compiler function too, insofar as the compiler has to implement the language. But then if it didn't, we'd consider it broken.

Finally, of course the burden of the operation is borne by the processor: It has to push context and variables on a stack frame, jump to a routine's entry point, execute the instructions, return... you know, the stuff a processor does.

OTHER TIPS

All modern processors are capable of recursion because they have function call and return instructions that are stack-based.

Earlier computers had a very difficult time of it. For example, the IBM 360's standard conventions laid out code so that the return address and local variables of a function were stored in static storage alongside the code. A recursive call would clobber the previous call's values.

Language.

You don't even always need a stack to recurse. See tail recursion for an example of a type of recursion that doesn't need a stack.

As Carl Smotricz said, a CPU just follows the branching instructions that occur in the instruction stream. What we tend to call recursion is nothing more than ordinary function calls, where the function being called happens to be the same as the one currently executing, which, on a high level, has some very interesting effects.

Even on modern processors with stack addressing modes, you still sometimes want to allocate call frames on the heap, as was done on systems like OS/360. For instance, in Scheme, it's possible to capture a continuation (basically a call frame plus a representation of the active local variable bindings) and keep hold of it.

If you capture a continuation in a global variable within a function, and then return from that function, the active call frame can't be cleaned up. It will be cleaned up by the garbage collector at some point, once there are no longer any references to it, but in the meantime, if the call frame was allocated on the stack, it needs to be copied somewhere else (the heap).

This is sort of the opposite of tail call optimisation, where you can prove that you never need to allocate the stack frame for a recursive call. For a continuation, you need to allocate the call frame and keep it and its associated environment hanging around indefinitely.

You don't need explicit hardware support for a stack and recursive calls, but the processor does need certain capabilities. I think the following is sufficient on a register-based machine:

  • You can write a stored value to the program counter, for instance with a jump instruction. This is needed to return.
  • You can afford to use up a register storing your own stack pointer.
  • Interrupts don't mess things up too badly (although perhaps you can live with having no stack in interrupt context even if they do).

Bootstrap code has to assign a region of memory to use as a stack, and away you go.

I guess a few processors don't provide this. Most processors have explicit support for a stack pointer and a link pointer, and instructions that help with the standard calling convention. But you could invent and implement your own calling convention.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow