Question

What algorithm taught you the most about programming or a specific language feature?

We have all had those moments where all of a sudden we know, just know, we have learned an important lesson for the future based on finally understanding an algorithm written by a programmer a couple of steps up the evolutionary ladder. Whose ideas and code had the magic touch on you?

Solution

"To iterate is human, to recurse divine" - quoted in 1989 at college.

P.S. Posted by Woodgnome while waiting for invite to join

OTHER TIPS

General algorithms:

  • Quicksort (and its average-case complexity analysis): it shows that randomizing your input can be a good thing;
  • balanced trees (AVL trees for example), a neat way to balance search/insertion costs;
  • Dijkstra and Ford-Fulkerson algorithms on graphs (I like the fact that the second one has many applications);
  • the LZ* family of compression algorithms (LZW for example): data compression sounded kind of magical to me until I discovered it (a long time ago :) );
  • the FFT, ubiquitous (re-used in so many other algorithms);
  • the simplex algorithm, ubiquitous as well.

Numerical related:

  • Euclid's algorithm to compute the gcd of two integers: one of the first algorithms ever written down, simple and elegant, powerful, with lots of generalizations (see the sketch just after this list);
  • fast multiplication of integers (Karatsuba for example, or the FFT-based Schönhage–Strassen method);
  • Newton iterations to invert / find a root, a very powerful meta-algorithm.
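
To make the first item concrete, here is a minimal sketch of Euclid's algorithm in C++ (not from the original answer, just an illustration):

#include <iostream>

// Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), until the remainder is 0.
unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main() {
    std::cout << gcd(252, 105) << "\n"; // prints 21
}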

Number theory-related:

  • AGM-related algorithms: these lead to very simple and elegant algorithms to compute pi (and much more!), though the theory is quite profound (Gauss introduced elliptic functions and modular forms from it, so you could say that it gave birth to algebraic geometry...);
  • the number field sieve (for integer factorization): very complicated, but quite a nice theoretical result (this also goes for the AKS algorithm, which proved that PRIMES is in P).

I also enjoyed studying quantum computing (the Shor and Deutsch–Jozsa algorithms, for example): it teaches you to think outside the box.

As you can see, I'm a bit biased towards maths-oriented algorithms :)

Floyd-Warshall all-pairs shortest paths algorithm

procedure FloydWarshall ()
   // assumes path[i][j] is initialized to the weight of edge (i, j),
   // or infinity when there is no edge
   for k := 1 to n
      for i := 1 to n
         for j := 1 to n
            path[i][j] := min(path[i][j], path[i][k] + path[k][j]);

Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm, which solves single-source shortest path. It's quite complicated at first, but then you get over it and fully understand it.

Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".

Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. It only uses a simple array for its data structure.


The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.
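
For completeness, here is a runnable version of the pseudocode above, as a C++ sketch; the initialization step, which the pseudocode leaves implicit, is included (the graph and its weights are made up for the example):

#include <iostream>
#include <vector>

const int INF = 1000000000; // "no edge" sentinel, small enough that INF + INF still fits in an int

int main() {
    const int n = 4;
    // dist[i][j] starts as the direct edge weight: INF if no edge, 0 on the diagonal.
    std::vector<std::vector<int>> dist(n, std::vector<int>(n, INF));
    for (int i = 0; i < n; ++i) dist[i][i] = 0;
    dist[0][1] = 5; dist[1][2] = 3; dist[2][3] = 1; dist[0][3] = 10;

    // The triply-nested loop: after round k, dist[i][j] is the shortest path
    // from i to j using only vertices 0..k as intermediate stops.
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];

    std::cout << dist[0][3] << "\n"; // 9, via 0 -> 1 -> 2 -> 3
}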

Quicksort. It showed me that recursion can be powerful and useful.

This one might sound trivial but it was a revelation for me at the time. I was in my very first programming class (VB6), and the prof had just taught us about random numbers and gave the following instructions: "Create a virtual lottery machine. Imagine a glass ball full of 100 ping pong balls marked 0 to 99. Pick them randomly and display their number until they have all been selected, no duplicates."

Everyone else wrote their program like this: pick a ball, put its number into an "already selected" list, and then pick another ball. Check to see if it's already selected; if so, pick another ball; if not, put its number on the "already selected" list, and so on...

Of course by the end they were making hundreds of comparisons to find the few balls that had not already been picked. It was like throwing the balls back into the jar after selecting them. My revelation was to throw balls away after picking.

I know this sounds mind-numbingly obvious but this was the moment that the "programming switch" got flipped in my head. This was the moment that programming went from trying to learn a strange foreign language to trying to figure out an enjoyable puzzle. And once I made that mental connection between programming and fun there was really no stopping me.
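
For illustration, here is that idea as a C++ sketch (the original was VB6; this version "throws the ball away" by swapping it out of the active part of the array, so no already-selected list is ever needed):

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    std::vector<int> balls(100);
    for (int i = 0; i < 100; ++i) balls[i] = i; // balls marked 0..99

    // Pick only from the part of the vector still "in the jar", then swap the
    // picked ball past the end of that range: it is gone for good.
    for (int remaining = 100; remaining > 0; --remaining) {
        int pick = std::rand() % remaining;
        std::cout << balls[pick] << "\n";
        std::swap(balls[pick], balls[remaining - 1]);
    }
}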

Huffman coding would be mine. I had originally made my own dumb version that reduced the number of bits used to encode text from 8 down to something smaller, but I had not thought of varying the number of bits based on frequency. Then I found Huffman coding described in a magazine article, and it opened up lots of new possibilities.
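
A bare-bones sketch of the tree construction in C++ (the frequencies are made up, and a real coder would also need to emit the tree alongside the data; this just prints the codes):

#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

struct Node {
    int freq;
    char ch;                 // meaningful only for leaves
    Node* left = nullptr;
    Node* right = nullptr;
};

struct ByFreq {
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Walk the finished tree: going left appends '0', going right appends '1'.
void printCodes(const Node* n, const std::string& code) {
    if (!n->left && !n->right) {
        std::cout << n->ch << ": " << code << "\n";
        return;
    }
    printCodes(n->left, code + "0");
    printCodes(n->right, code + "1");
}

int main() {
    std::vector<std::pair<char, int>> freqs =
        {{'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}, {'e', 9}, {'f', 5}};

    std::priority_queue<Node*, std::vector<Node*>, ByFreq> pq;
    for (auto [c, f] : freqs) pq.push(new Node{f, c});

    // Repeatedly merge the two least frequent nodes: rare symbols end up deep
    // in the tree (long codes), frequent ones near the root (short codes).
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{a->freq + b->freq, 0, a, b});
    }
    printCodes(pq.top(), ""); // leaks the nodes; fine for a sketch
}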

Bresenham's line drawing algorithm got me interested in realtime graphics rendering. This can be used to render filled polygons, like triangles, for things like 3D model rendering.
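
The integer-only core, as a C++ sketch (plot() is a stand-in for whatever actually sets a pixel; this is the general form that handles all octants):

#include <cstdlib>
#include <iostream>

void plot(int x, int y) { std::cout << x << "," << y << "\n"; } // stand-in for a pixel write

// Bresenham's line: tracks an integer error term, no floating point at all.
void line(int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true) {
        plot(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step in x
        if (e2 <= dx) { err += dx; y0 += sy; } // step in y
    }
}

int main() { line(0, 0, 6, 4); }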

Recursive Descent Parsing - I remember being very impressed how such simple code could do something so seemingly complex.
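
To show just how little code it takes, here is a toy recursive-descent parser for arithmetic expressions in C++ (a made-up grammar with +, -, *, /, parentheses and unsigned integers; no whitespace or error handling):

#include <cctype>
#include <iostream>

// One function per grammar rule; the call stack *is* the parse tree.
const char* p; // cursor into the input

double parseExpr(); // forward declaration

double parseFactor() {
    if (*p == '(') { ++p; double v = parseExpr(); ++p; return v; } // '(' expr ')'
    double v = 0;
    while (std::isdigit(static_cast<unsigned char>(*p))) v = v * 10 + (*p++ - '0');
    return v;
}

double parseTerm() {
    double v = parseFactor();
    while (*p == '*' || *p == '/') {
        char op = *p++;
        double r = parseFactor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

double parseExpr() {
    double v = parseTerm();
    while (*p == '+' || *p == '-') {
        char op = *p++;
        double r = parseTerm();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

int main() {
    p = "2*(3+4)-5";
    std::cout << parseExpr() << "\n"; // 9
}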

Quicksort in Haskell:

qsort []     = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

Although I couldn't write Haskell at the time, I did understand this code, and with it recursion and the quicksort algorithm. It just clicked, and there it was...

The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.

To elaborate: the "fib(10) = fib(9) + fib(8)" approach means that fib(9) will be expanded to fib(8) + fib(7). So fib(8) (and with it fib(7), fib(6), and so on) ends up being evaluated over and over again.

The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it needs only 3 transient variables instead of n frames on the recursion stack.

I tend to strive for simple, elegant code when I'm programming, but this is the algorithm that helped me realize that this isn't the end-all-be-all for writing good software, and that ultimately the end users don't care how your code looks.
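
The iterative version being described, as a C++ sketch:

#include <iostream>

// O(n) time, O(1) space: each Fibonacci number is computed exactly once.
unsigned long long fib(int n) {
    if (n == 0) return 0;
    unsigned long long prev2 = 0, prev1 = 1;
    for (int i = 2; i <= n; ++i) {
        unsigned long long curr = prev1 + prev2;
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}

int main() { std::cout << fib(10) << "\n"; } // 55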

For some reason I like the Schwartzian transform:

@sorted = map  { $_->[0] }
          sort { $a->[1] cmp $b->[1] }
          map  { [$_, foo($_)] }
                @unsorted;

Where foo($_) represents a compute-intensive expression that takes $_ (each item of the list in turn) and produces the corresponding value on which it is to be compared.

Minimax taught me that chess programs aren't smart; they can just think more moves ahead than you can.
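
In miniature, that is all minimax is: score a position by exhaustively trying every move and assuming the opponent replies with their best one. A sketch on a trivial take-away game rather than chess (players alternately take 1-3 counters; whoever takes the last one wins):

#include <algorithm>
#include <iostream>

// Returns +1 if the player to move wins with best play, -1 otherwise.
int minimax(int pile) {
    if (pile == 0) return -1;                 // the previous player took the last counter
    int best = -1;
    for (int take = 1; take <= std::min(3, pile); ++take)
        best = std::max(best, -minimax(pile - take)); // the opponent's loss is our win
    return best;
}

int main() {
    for (int pile = 1; pile <= 8; ++pile)
        std::cout << pile << ": " << (minimax(pile) > 0 ? "win" : "lose") << "\n";
    // the losing positions are exactly the multiples of 4
}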

I don't know if this qualifies as an algorithm, or just a classic hack. In either case, it helped to get me to start thinking outside the box.

Swap 2 integers without using an intermediate variable (in C++)

void InPlaceSwap(int& a, int& b) {
    // caveat: if a and b alias the same variable, this zeroes it out
    a ^= b;
    b ^= a; // b now holds the original a
    a ^= b; // a now holds the original b
}

Quicksort: Until I got to college, I had never questioned whether brute force Bubble Sort was the most efficient way to sort. It just seemed intuitively obvious. But being exposed to non-obvious solutions like Quicksort taught me to look past the obvious solutions to see if something better is available.

For me it's the weak-heapsort algorithm, because it shows (1) how much a wisely chosen data structure (and the algorithms that work on it) can influence performance, and (2) that fascinating things can be discovered even in old, well-known things. (That weak-heapsort is the best variant of all the heapsorts was only proven eight years later.)

This is a slow one :)

I learned lots about both C and computers in general by understanding Duff's Device and XOR swaps.

EDIT:

@Jason Z, that's my XOR swap :) Cool, isn't it?
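
For reference, this is essentially Tom Duff's original C (lightly reformatted; it compiles as C++ too): a copy loop unrolled eight times, where the switch jumps into the middle of the do-while to dispose of the leftover count % 8 iterations:

// Copy `count` shorts from `from` to the fixed output location `to`
// (originally a memory-mapped I/O register, hence `to` is not incremented).
void send(short* to, short* from, int count) {
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}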

For some reason Bubble Sort has always stood out to me. Not because it's elegant or good, just because it had/has a goofy name, I suppose.

I don't have a favourite -- there are so many beautiful ones to pick from -- but one I've always found intriguing is the Bailey–Borwein–Plouffe (BBP) formula, which enables you to calculate an arbitrary digit of pi without knowledge of the preceding digits.
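
For reference, the formula itself (the digits it extracts are hexadecimal):

pi = sum for k = 0 to infinity of (1/16^k) * ( 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) )

The 1/16^k factor is what lets you jump straight to the n-th hex digit without computing the earlier ones.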

RSA introduced me to the world of modular arithmetic, which can be used to solve a surprising number of interesting problems!
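
The workhorse behind RSA is modular exponentiation. A C++ sketch with a toy key (real RSA needs big-integer arithmetic; these numbers are made up for illustration):

#include <cstdint>
#include <iostream>

// (base^exp) mod m by repeated squaring: O(log exp) multiplications.
// Only safe for moduli small enough that m * m fits in a uint64_t.
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    // Toy RSA: n = 33 = 3 * 11, public exponent e = 3, private exponent d = 7.
    uint64_t c = powmod(4, 3, 33);                     // encrypt 4 -> 31
    std::cout << c << " " << powmod(c, 7, 33) << "\n"; // prints "31 4"
}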

Hasn't taught me much, but the Johnson–Trotter Algorithm never fails to blow my mind.

Binary decision diagrams, though formally not an algorithm but a data structure, lead to elegant and minimal solutions for various sorts of (boolean) logic problems. They were invented and developed to minimise the gate count in chip design, and can be viewed as one of the foundations of the silicon revolution. The resulting algorithms are amazingly simple.

What they taught me:

  • a compact representation of any problem is important; small is beautiful
  • a small set of constraints/reductions applied recursively can be used to accomplish this
  • for problems with symmetries, transformation to a canonical form should be the first step to consider
  • not every piece of literature gets read: Knuth found out about BDDs several years after their invention/introduction (and spent almost a year investigating them)

For me, the simple swap in Kelly & Pohl's A Book on C to demonstrate call-by-reference flipped me out when I first saw it. I looked at that, and pointers snapped into place. Verbatim...

void swap(int *p, int *q)
{
   int temp;

   temp = *p;
   *p = *q;
   *q = temp;
}

The Towers of Hanoi algorithm is one of the most beautiful algorithms. It shows how you can use recursion to solve a problem in a much more elegant fashion than the iterative method.
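
The whole recursive solution, as a C++ sketch (move the n-1 smaller discs out of the way, move the largest disc, move them back on top):

#include <iostream>

// Move n discs from peg `from` to peg `to`, using `via` as scratch space.
void hanoi(int n, char from, char to, char via) {
    if (n == 0) return;
    hanoi(n - 1, from, via, to); // clear the smaller discs out of the way
    std::cout << "disc " << n << ": " << from << " -> " << to << "\n";
    hanoi(n - 1, via, to, from); // stack them back on top of the moved disc
}

int main() { hanoi(3, 'A', 'C', 'B'); } // 2^3 - 1 = 7 moves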

Alternatively, the recursive algorithms for the Fibonacci series and for calculating powers of a number demonstrate the reverse situation: recursion used for its own sake rather than because it provides good value.


You know that Fibonacci has a closed-form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does.

phi is the golden ratio, of course; (1 + sqrt(5)) / 2.
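
A quick check of that formula in C++ (floating point, so it only stays exact up to around fib(70); the rounding hides the tiny numerical error):

#include <cmath>
#include <iostream>

int main() {
    const double phi = (1 + std::sqrt(5.0)) / 2;
    for (int n = 1; n <= 10; ++n) {
        // Binet's formula: (phi^n - (1 - phi)^n) / sqrt(5)
        double f = (std::pow(phi, n) - std::pow(1 - phi, n)) / std::sqrt(5.0);
        std::cout << std::llround(f) << " ";
    }
    std::cout << "\n"; // 1 1 2 3 5 8 13 21 34 55
}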

An algorithm that generates a list of primes by testing each number against the current list of primes, adding it if no prime divides it, and returning the list of primes at the end. Mind-bending in several ways, not the least of which is the idea of using the partially-completed output as the primary search criteria.
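
A sketch of that algorithm in C++ (the list being built is itself the test data):

#include <iostream>
#include <vector>

int main() {
    std::vector<long> primes;
    for (long n = 2; n <= 50; ++n) {
        bool isPrime = true;
        // Divide only by the primes found so far: the partially-completed
        // output drives the search.
        for (long p : primes) {
            if (p * p > n) break;                       // no factor <= sqrt(n): prime
            if (n % p == 0) { isPrime = false; break; }
        }
        if (isPrime) primes.push_back(n);
    }
    for (long p : primes) std::cout << p << " ";        // 2 3 5 7 11 ... 47
    std::cout << "\n";
}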

Storing two pointers in a single word for a doubly linked list taught me the lesson that you can indeed do very bad things in C (and that a conservative GC will have lots of trouble with them).
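
Presumably this is the XOR linked list trick: store prev XOR next in a single pointer-sized field. A C++ sketch (deliberately in the "very bad things" category; since no real pointer is ever stored, a conservative GC cannot see the links):

#include <cstdint>
#include <iostream>

struct Node {
    int value;
    uintptr_t link; // (uintptr_t)prev XOR (uintptr_t)next, packed in one word
};

uintptr_t addr(Node* n) { return reinterpret_cast<uintptr_t>(n); }

int main() {
    Node a{1, 0}, b{2, 0}, c{3, 0};
    // Build a <-> b <-> c; the list ends use 0 for the missing neighbour.
    a.link = 0 ^ addr(&b);
    b.link = addr(&a) ^ addr(&c);
    c.link = addr(&b) ^ 0;

    // Traverse forward: next = link XOR prev.
    Node* prev = nullptr;
    Node* cur = &a;
    while (cur) {
        std::cout << cur->value << " ";
        Node* next = reinterpret_cast<Node*>(cur->link ^ addr(prev));
        prev = cur;
        cur = next;
    }
    std::cout << "\n"; // 1 2 3
}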

The most proud I've been of a solution was writing something very similar to the DisplayTag package. It taught me a lot about code design, maintainability, and reuse. I wrote it well before DisplayTag, and it was locked up under an NDA, so I couldn't open-source it, but I can still speak gushingly about it in job interviews.

Map/Reduce. Two simple concepts that fit together to make a load of data-processing tasks easier to parallelize.

Oh... and it's only the basis of massively-parallel indexing:

http://labs.google.com/papers/mapreduce.html
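
Both concepts in miniature (single-process C++, no distribution; the classic word-count example, where map emits (word, 1) pairs and reduce sums them per key):

#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::string doc = "the quick fox the lazy fox the end";

    // Map phase: each word becomes a (key, 1) pair.
    std::vector<std::pair<std::string, int>> pairs;
    std::istringstream in(doc);
    for (std::string word; in >> word; ) pairs.emplace_back(word, 1);

    // Reduce phase: sum the values per key. Each key reduces independently,
    // which is exactly why both phases parallelize so well.
    std::map<std::string, int> counts;
    for (const auto& [word, one] : pairs) counts[word] += one;

    for (const auto& [word, n] : counts) std::cout << word << ": " << n << "\n";
}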

Not my favorite, but the Miller-Rabin algorithm for testing primality showed me that being right almost all the time is good enough almost all the time. (i.e. Don't mistrust a probabilistic algorithm just because it has a probability of being wrong.)
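
A compact sketch in C++ (fixed witness bases rather than random ones, which happens to make this version deterministic for n below 3,215,031,751; a real probabilistic implementation picks random bases, each failed witness at most quartering the chance of a wrong "prime" verdict):

#include <cstdint>
#include <iostream>

// (b^e) mod m by repeated squaring; safe here because m < 2^32.
uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

// One Miller-Rabin round: false means `a` proves n composite.
bool passesRound(uint64_t a, uint64_t n) {
    uint64_t d = n - 1;
    int s = 0;
    while (d % 2 == 0) { d /= 2; ++s; }          // n - 1 = d * 2^s with d odd
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return true;
    for (int i = 1; i < s; ++i) {
        x = x * x % n;
        if (x == n - 1) return true;
    }
    return false;                                // a is a witness for compositeness
}

bool probablyPrime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t a : {2, 3, 5, 7}) {
        if (n == a) return true;
        if (n % a == 0) return false;
        if (!passesRound(a, n)) return false;
    }
    return true;
}

int main() {
    std::cout << probablyPrime(1000003) << " "   // 1 (prime)
              << probablyPrime(1000001) << "\n"; // 0 (101 * 9901)
}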

@Krishna Kumar

The bitwise solution is even more fun than the recursive solution.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow