Question

In a previous question, what I thought was a good answer was voted down for suggesting the use of macros

#define radian2degree(a) (a * 57.295779513082)
#define degree2radian(a) (a * 0.017453292519)

instead of inline functions. Please excuse the newbie question, but what is so evil about macros in this case?


Solution

There are a couple of strictly evil things about macros.

They're text processing, and aren't scoped. If you #define foo 1, then any subsequent use of foo as an identifier will fail. This can lead to odd compilation errors and hard-to-find runtime bugs.
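For instance, a minimal sketch of the scoping problem:

#define foo 1

int main(void)
{
    int foo = 0;   /* the preprocessor rewrites this to "int 1 = 0;" -- a compile error */
    return foo;
}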

They don't take arguments in the normal sense. You can write a function that will take two int values and return the maximum, because the arguments will be evaluated once and the values used thereafter. You can't write a macro to do that, because it will evaluate at least one argument twice, and fail with something like max(x++, --y).
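For example, the classic max macro shows the double-evaluation problem (a minimal sketch):

#define max(a, b) ((a) > (b) ? (a) : (b))

int x = 3, y = 5;
int m = max(x++, --y);
/* expands to ((x++) > (--y) ? (x++) : (--y)); whichever branch is
   taken, one argument is evaluated twice -- here y is decremented
   twice, and m ends up as 3 */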

There are also common pitfalls. It's hard to get multiple statements right in them, and they require a lot of possibly superfluous parentheses.
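The usual workaround for the multiple-statement problem is the do { ... } while (0) idiom; a minimal sketch (for swapping ints):

#define SWAP(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

Without the do/while wrapper, a multi-statement body would break apart inside an unbraced if/else, because only the first statement would be governed by the if.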

In your case, you need parentheses:

#define radian2degree(a) (a * 57.295779513082)

needs to be

#define radian2degree(a) ((a) * 57.295779513082)

and you're still stepping on anybody who writes a function radian2degree in some inner scope, confident that that definition will work in its own scope.

OTHER TIPS

Most of the other answers discuss why macros are evil including how your example has a common macro use flaw. Here's Stroustrup's take: http://www.research.att.com/~bs/bs_faq2.html#macro

But your question was asking what macros are still good for. There are some things macros can do that simply can't be done with inline functions, such as:

  • token pasting
  • dealing with line numbers or such (as for creating error messages in assert())
  • dealing with things that aren't expressions (for example, how many implementations of offsetof() use a type name to create a cast operation)
  • the macro to get a count of array elements (can't do it with a function, as the array name decays to a pointer too easily; see the sketch after this list)
  • creating 'type polymorphic' function-like things in C where templates aren't available
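A minimal sketch of two of these, token pasting and the element-count macro (the macro names here are mine):

#include <stdio.h>

/* token pasting: ## glues tokens into a new identifier at expansion time */
#define DECLARE_COUNTER(name) int name##_count = 0

/* element count: a function can't do this, because an array argument
   decays to a pointer and sizeof would measure the pointer instead */
#define ARRAY_COUNT(arr) (sizeof(arr) / sizeof((arr)[0]))

int main(void)
{
    DECLARE_COUNTER(error);   /* expands to: int error_count = 0; */
    int values[16];

    printf("%zu elements\n", ARRAY_COUNT(values));   /* prints: 16 elements */
    return error_count;
}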

But with a language that has inline functions, the more common uses of macros shouldn't be necessary. I'm even reluctant to use macros when I'm dealing with a C compiler that doesn't support inline functions. And I try not to use them to create type-agnostic functions if at all possible (creating several functions with a type indicator as a part of the name instead).

I've also moved to using enums for named numeric constants instead of #define.
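For example (a minimal sketch; the constant name is mine):

enum { MAX_RETRIES = 5 };   /* scoped, typed as int, and visible to the debugger */
/* instead of: #define MAX_RETRIES 5 */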

For this specific macro, if I use it as follows:

int x = 1;
x = radian2degree(x);   /* 57.295... truncated to an int: x == 57 */

float y = 1;
y = radian2degree(y);   /* y keeps the fraction: y ≈ 57.2958 */

there would be no type checking, and x and y would end up holding different values even though both started as 1.

Furthermore, the following code

float x=1, y=2;
float z = radian2degree(x+y);

will not do what you think, since it will translate to

float z = x+y*57.295779513082;

instead of

float z = (x+y)*57.295779513082;

which is the expected result.

These are just a few examples of the misbehavior and misuse that macros invite.

Edit

You can see additional discussions about this here.

Macros are often abused, and it's easy to make mistakes when using them, as your example shows. Take the expression radian2degree(1 + 1):

  • with the macro it will expand to 1 + 1 * 57.29... = 58.29...
  • with a function it will be what you want it to be, namely (1 + 1) * 57.29... = ...

More generally, macros are evil because they look like functions, so they trick you into using them just like functions, but they have subtle rules of their own. In this case, the correct way to write it would be (notice the parentheses around a):

#define radian2degree(a) ((a) * 57.295779513082)

But you should stick to inline functions. The C++ FAQ Lite has more examples of evil macros and their subtleties.

If possible, always use inline functions. They are type-safe and cannot be easily redefined.

Defines can be redefined or undefined, and there is no type checking.

The compiler's preprocessor is a finicky thing, and therefore a terrible candidate for clever tricks. As others have pointed out, it's easy for the compiler to misunderstand your intention with the macro, and it's easy for you to misunderstand what the macro will actually do. Most importantly, you can't step into macros in the debugger!

Macros are evil because you may end up passing more than a variable or a scalar to them, and this can result in unwanted behavior (define a max macro to find the maximum of a and b, then pass a++ and b++ to it and see what happens).

If your function is going to be inlined anyway, there is no performance difference between a function and a macro. However, there are several usability differences between a function and a macro, all of which favor using a function.
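For instance, a sketch of the inline-function replacement for the macro in question:

static inline double radian2degree(double a)
{
    /* the argument is evaluated exactly once, and it is type-checked */
    return a * 57.295779513082;
}

With this version, radian2degree(x + y) and radian2degree(x++) both behave the way a caller would expect, and you can step into the function in a debugger.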

If you build the macro correctly, there is no problem. But if you use a function, the compiler will do it correctly for you every time. So using a function makes it harder to write bad code.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow