Question

  1. What are some of the ways a compiler eliminates recomputation of repeated subexpressions? How do you keep track of the subexpressions, and how do you identify the repeated ones?
  2. Besides the use of bitwise operators, what are some of the strength reduction techniques common compilers use?

Solution

  1. I believe many compilers use SSAPRE (Static Single Assignment Partial Redundancy Elimination) to eliminate repeated expressions. This requires the code to be in SSA form, which also enables many other optimizations.

  2. I'm not really sure about this one, but take a look at this list of LLVM passes. LLVM is a compiler infrastructure built around an optimizing intermediate representation, and the code it generates is often competitive with, and sometimes faster than, GCC's. There is a short explanation of each pass, and if you need more information, look at the LLVM source for these passes. It is written in C++ but is quite clean and understandable.

Edit: By the way, if you're developing a compiler, I highly recommend LLVM; it is very easy to use and generates highly optimized code.

OTHER TIPS

For 1, the name of the optimization you're looking for is common subexpression elimination (CSE). Depending on your representation, this can be fairly easy. Usually a compiler will have some intermediate representation of the program in which operations are broken down as much as possible and linearized. For example, the expression c = a * b + a * b might be broken down as:

v1 = a * b
v2 = a * b
c = v1 + v2

So you could do CSE at a very low level by looking for operations with the same operator and operands. When you encounter a duplicate (v2 in this case), you replace all uses of it with the original. The code above then simplifies to:

v1 = a * b
c = v1 + v1

This generally assumes that you only assign each variable once (static single assignment form), but you can implement something like this without that restriction. It gets more complicated when you try to perform this optimization across branches; as Zifre mentions, look into partial redundancy elimination.
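
Here is a minimal sketch of that idea in Python, doing local value numbering over three-address tuples. The instruction format and names here are made up for illustration, not taken from any real compiler:

def local_cse(instructions):
    seen = {}      # (op, lhs, rhs) -> variable already holding that value
    replaced = {}  # duplicate variable -> the original it aliases
    out = []
    for dest, op, x, y in instructions:
        x = replaced.get(x, x)          # rewrite operands first
        y = replaced.get(y, y)
        key = (op, x, y)
        if key in seen:
            replaced[dest] = seen[key]  # duplicate: drop the computation
        else:
            seen[key] = dest
            out.append((dest, op, x, y))
    return out

code = [("v1", "*", "a", "b"),
        ("v2", "*", "a", "b"),
        ("c",  "+", "v1", "v2")]
print(local_cse(code))  # [('v1', '*', 'a', 'b'), ('c', '+', 'v1', 'v1')]

Note that this only works within a single basic block; once control flow joins, you need the dataflow machinery mentioned above.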

Either way, you get some basic improvement, and all you need to keep track of are the basic expressions. You may want to take this a step further and look for arithmetic identities: for instance, a * b is the same as b * a, and x * (y + z) = x * y + x * z. This makes the optimization more complicated, and it's not clear how much performance it would actually buy you. Anecdotally, most of the benefit from CSE comes from address computations such as array accesses, and for those you won't need complicated identities like the ones above.
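
If you do want the commutative case, a cheap way (extending the hypothetical sketch above) is to canonicalize operand order before building the lookup key, so a * b and b * a hash identically:

def canonical_key(op, x, y):
    if op in ("+", "*"):                # commutative operators
        x, y = sorted((x, y), key=str)  # fix an arbitrary but stable order
    return (op, x, y)

Distributivity is much harder to exploit this way, which is another reason most compilers don't bother with it in CSE.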

For 2, which strength reductions are useful really depends on the architecture you compile for. Usually this involves transforming multiplications and divisions into shifts, additions, and subtractions; the classic loop-based example is replacing a multiplication by an induction variable with an addition on each iteration.
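
As a sketch of what such a pass might look like (using the same hypothetical instruction format as above; real compilers do this on their own IR), here is a rewrite of multiplication and division by a power of two into shifts:

def strength_reduce(inst):
    dest, op, x, y = inst
    if op == "*":
        for a, b in ((x, y), (y, x)):   # constant may be on either side
            if isinstance(b, int) and b > 0 and b & (b - 1) == 0:
                return (dest, "<<", a, b.bit_length() - 1)  # x * 8 -> x << 3
    if op == "/" and isinstance(y, int) and y > 0 and y & (y - 1) == 0:
        # correct for unsigned operands; signed division needs a fix-up step
        return (dest, ">>", x, y.bit_length() - 1)
    return inst

print(strength_reduce(("t1", "*", "x", 8)))   # ('t1', '<<', 'x', 3)
print(strength_reduce(("t2", "/", "x", 4)))   # ('t2', '>>', 'x', 2)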

I would highly recommend two printed references on these subjects:

  1. Advanced Compiler Design & Implementation by Steven S. Muchnick
  2. Building an Optimizing Compiler by Robert Morgan

The Muchnick book is on the formal side but is very readable and has good descriptions of all of the important optimization techniques. The Morgan book has a much more hands-on feel and would be a great basis for a compiler project focused on optimization techniques. Neither book has much to say about lexical analysis or parsing; knowledge of those subjects is assumed.

To add one more book to the list of recommendations, check out "Hacker's Delight" by Henry S. Warren. It's a great compendium of techniques for optimizing common operations, such as transforming integer division by a constant into multiplication.
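
To give a flavor of that last trick: for unsigned division by a constant d, you can precompute a "magic" multiplier m and shift s such that x // d == (x * m) >> s for every x in range. A sketch of the round-up method in Python (see the book for the full treatment, including signed division and word-size issues):

def unsigned_magic(d, bits=32):
    assert d > 0
    l = (d - 1).bit_length()        # ceil(log2(d))
    s = bits + l
    m = ((1 << s) + d - 1) // d     # m = ceil(2**s / d)
    return m, s                     # note: m may need bits + 1 bits

m, s = unsigned_magic(7)
assert all((x * m) >> s == x // 7 for x in range(1 << 16))

Since m may not fit in a machine word, real compilers apply an extra adjustment when emitting the multiply; Hacker's Delight covers those details.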

You're looking for partial-redundancy elimination (PRE). Both CSE (from the other answers) and loop-invariant code motion are subsumed by PRE. (A variant of PRE is lazy code motion, which I believe is optimal.)
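
To see what "partial" means, here is a tiny illustration in Python (the variable names are made up): a + b below is redundant only on the path where cond is true, so plain CSE can't remove it, but PRE can insert a copy on the other path and then eliminate the recomputation:

def before(a, b, cond):
    if cond:
        x = a + b        # a + b computed here...
    else:
        x = 0
    y = a + b            # ...and recomputed, but only on one path
    return x + y

def after_pre(a, b, cond):
    if cond:
        t = a + b        # computed once
        x = t
    else:
        x = 0
        t = a + b        # inserted by PRE on the path that lacked it
    y = t                # now fully redundant, so just reuse t
    return x + y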

Check out Keith Cooper's lecture notes, which seem to describe the techniques very well.

Do NOT use SSAPRE. AFAIK, this requires a particular form of SSA known as HSSA, which has a few downsides:

  • It's pretty complicated.
  • It requires global value numbering (and so SSAPRE doesn't provide value numbering itself, as it's expected to exist already).
  • It doesn't provide anything if your language doesn't support pointers to stack variables (and if it does, stop writing your own analysis and use LLVM or gcc).
  • gcc used HSSA for a while, but they have moved away from it.
  • LLVM experimented with it, but AFAIK they don't use it anymore.

EDIT:

Muchnick's book has a detailed description; it's linked in another answer.

Licensed under: CC-BY-SA with attribution