Question

I was wondering why we use the terms "push" and "pop" for adding/removing items from stacks? Is there some physical metaphor that caused those terms to be common?

The only suggestion I have is something like a spring-loaded magazine for a handgun, where rounds are "pushed" into it and can be "popped" out, but that seems a little unlikely.

A second stack trivia question: Why do most CPUs implement the call stack as growing downwards in memory, rather than upwards?


Solution

I believe the spring-loaded stack of plates is correct as the source for the terms PUSH and POP.

In particular, the East Campus Commons Cafeteria at MIT had spring loaded stacks of plates in the 1957-1967 time frame. The terms PUSH and POP would have been in use by the Tech Model Railroad Club. I think this is the origin.

The Tech Model Railroad Club definitely influenced the design of Digital Equipment Corporation's (DEC) PDP-6. The PDP-6 was one of the first machines to have stack-oriented instructions in hardware: PUSH, POP, PUSHJ, and POPJ.

http://ed-thelen.org/comp-hist/pdp-6.html#Special%20Features

OTHER TIPS

For your second question, Wikipedia has an article on the LIFO principle that governs stacks:

http://en.wikipedia.org/wiki/LIFO

And for the first, also on wikipedia:

A frequently used metaphor is a stack of plates in a spring-loaded cafeteria dispenser. In such a stack, only the top plate is visible and accessible to the user; all other plates remain hidden. As new plates are added, each new plate becomes the top of the stack, hiding the plates below and pushing the stack down. When the top plate is removed to be used, the plates below pop back up and the second plate becomes the top of the stack. Two important principles are illustrated by this metaphor: one is the Last In, First Out (LIFO) principle; the other is that the contents of the stack are hidden. Only the top plate is visible, so to see what is on the third plate, the first and second plates must be removed. This can also be described as FILO (First In, Last Out): the record inserted first is popped out last.
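The plate metaphor maps directly onto the two basic stack operations. As a minimal sketch (using a Python list, whose `append` and `pop` act on the "top"):

```python
# A minimal LIFO stack using a Python list: append() pushes onto
# the top, pop() removes from the top.
plates = []
plates.append("first plate")   # pushed first, ends up at the bottom
plates.append("second plate")
plates.append("third plate")   # pushed last, sits on top

top = plates.pop()             # -> "third plate": last in, first out
# Only the new top ("second plate") is visible; to reach
# "first plate", the plate above it must be popped first.
```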

For the second question: Assembler programmers on small systems tend to write code that begins at low addresses in memory and grows to higher addresses as more code is added.

Because of this, making a stack grow downward allows you to start the stack at the top of physical memory and allow the two memory zones to grow towards each other. This simplifies memory management in this sort of trivial environment.
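To illustrate the idea, here is a hand-rolled simulation (not how any particular CPU is specified): code and data grow upward from address 0 while the stack pointer starts at the top of memory and is decremented on each push, so the two regions only collide when memory is genuinely exhausted.

```python
MEM_SIZE = 16
memory = [0] * MEM_SIZE

heap_next = 0    # code/data would grow upward from address 0
sp = MEM_SIZE    # stack grows downward from the top of memory

def push(value):
    """Decrement the stack pointer, then store (a full-descending stack)."""
    global sp
    if sp - 1 < heap_next:
        raise MemoryError("stack collided with the data region")
    sp -= 1
    memory[sp] = value

def pop():
    """Load the value at the top of the stack, then increment the pointer."""
    global sp
    value = memory[sp]
    sp += 1
    return value

push(42)
push(7)
assert pop() == 7 and pop() == 42   # LIFO order, growing downward
```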

Even in a system with segregated ROM/RAM, fixed data allocations are easiest to build from the bottom up, and they thus replace the code portion of the explanation above.

While such trivial memory schemes are now rare, the hardware practice continues as established.

Think of it like a PEZ dispenser. You can push a new one on top, and then pop it off the top.

That is always what I think of when I think push and pop. (Probably not very historical though)


Alliteration is always attractive (see what I did there?), and these words are short, alliterative, and suggestive. The same goes for the old BASIC commands peek and poke, which have the extra advantage of the parallel k's.

A common physical metaphor is a cafeteria plate dispenser, where a spring-loaded stack of plates lets you take a plate off the top while the next plate rises into the same position.

Re your "second trivial question": I've seen considerable inconsistency in defining what "up" and "down" mean! From early days, some manufacturers and authors drew memory diagrams with low addresses at the top of the page (presumably mimicking the order in which a page is read), while others put high addresses at the top of the page (presumably mimicking graph paper coordinates or the floors in a building).

Of course the concept of a stack (and the concept of addressable memory as well) is independent of such visual metaphors. One can implement a stack which "grows" in either direction. In fact, I've often seen the trick below (in bare-metal level implementations) used to share a region of memory between two stacks:

+---+---+--------   -------+--+--+--+
|   |   |   ->   ...   <-  |  |  |  |
+---+---+--------   -------+--+--+--+
^                                   ^
Stack 1      both stacks      Stack 2
base        "grow" toward        base
              the middle

So my answer is that stacks conceptually never grow either "downward" or "upward" but simply grow from their base. An individual stack may be implemented in either direction (or in neither direction, if it's using a linked representation with garbage collection, in which case the elements may be anywhere in nodespace).
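A sketch of that shared-region trick (the class and method names here are hypothetical; bare-metal implementations do this with two stack-pointer registers rather than a class): stack 1 pushes from index 0 upward, stack 2 pushes from the last index downward, and the region is full only when the two tops meet.

```python
class TwoStacks:
    """Two stacks sharing one fixed-size buffer, growing toward the middle."""

    def __init__(self, size):
        self.buf = [None] * size
        self.top1 = 0            # next free slot for stack 1 (grows upward)
        self.top2 = size - 1     # next free slot for stack 2 (grows downward)

    def push1(self, value):
        if self.top1 > self.top2:            # the two tops have met
            raise OverflowError("shared region is full")
        self.buf[self.top1] = value
        self.top1 += 1

    def push2(self, value):
        if self.top2 < self.top1:
            raise OverflowError("shared region is full")
        self.buf[self.top2] = value
        self.top2 -= 1

    def pop1(self):
        self.top1 -= 1
        return self.buf[self.top1]

    def pop2(self):
        self.top2 += 1
        return self.buf[self.top2]
```

Neither stack has a fixed size limit of its own; each may consume whatever the other has not yet claimed, which is exactly why the scheme is memory-efficient.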

The answers on this page pretty much answer the stack direction question. If I had to sum it up, I would say it is done downwards to remain consistent with ancient computers.

I think the original story came about because of some developers seeing the plate stack (like you often see in buffet restaurants). You pushed a new plate onto the top of the stack, and you popped one off the top as well.

As to stacks growing downwards in memory, remember that when dealing with hierarchical data structures (trees), most programmers are happy to draw one on a page with the base (or trunk) at the top of the page...

I know this thread is really old, but I have a thought about the second question:

In my mind, the stack grows up, even though the memory addresses decrease. If you were to write a whole bunch of numbers on a piece of paper, you would start at the top left, with 0. Then you would increase the numbers going left to right, then top to bottom. So say the stack is like this:

000  001  002  003  004     000  001  002  003  004     000  001  002  003  004
005  006  007  008  009     005  006  007  008  009     005  006  007  008  009
010  011  012  013  014     010  011  012  013  014     010  011  012  013  014
015  016  017  018  019     015  016  017  018  019     015* 016* 017* 018* 019*
020  021  022  023  024     020* 021* 022* 023* 024*    020* 021* 022* 023* 024*
025* 026* 027* 028* 029*    025* 026* 027* 028* 029*    025* 026* 027* 028* 029*

where the starred numbers represent stack memory, and the unstarred numbers represent memory addresses which the stack is not using. Each block represents a stage of the program at which the call stack has grown.

Even though the memory addresses are moving downward, the stack is growing upwards.

Similarly, with the spring-loaded plate stack: if you took a plate off the top, you would call it the first plate (the smallest number), right? Even though it is the highest up. A programmer might even call it the zeroth plate.

For the question of why stacks grow downward, I would imagine it is done to save memory and effort.

If you start from the top of stack memory (the highest addresses) and work down toward zero, I assume it's easier to check whether you have reached address 0x00000000 than to allocate a variable holding the maximum extent of the stack and check whether you have reached that address.

I assume this makes it easier to check whether you're reaching the end of your addressable space: no matter how much memory is available, the limit of the stack is always going to be 0x00000000.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow