Question

I'm working on some code for a microprocessor.
It has a few large, critical constants.

#define F_CPU 16000000UL

In this case, it's the CPU frequency, in hertz.

As it is, it's rather hard to tell if that's 1,600,000, 160,000,000 or 16,000,000 without manually tabbing a cursor across the digits.

If I put commas in the number #define F_CPU 16,000,000UL, it truncates the constant.

I've worked with a few esoteric languages that have a dedicated digit-separator character intended to make large numbers more readable (e.g. 16_000_000), mostly languages aimed at MCUs. Large "magic numbers" are rather common in embedded work, since they are needed to describe how an MCU talks to the real world.

Is there anything like this in C?


Solution

One possibility is to write it like this:

#define F_CPU (16UL * 1000 * 1000)  // UL keeps the arithmetic 32-bit on 16-bit-int targets

or, alternatively:

#define MHz (1000UL * 1000)
#define F_CPU (16 * MHz)

Edit: the MHz(x) macro that others suggested might be nicer.

OTHER TIPS

Yes, of a sort: the C preprocessor's token-pasting operator, ##, can serve as a digit separator.

So you can write

#define F_CPU 16##000##000UL

which means exactly the same as 16000000UL. (Unlike constructs such as 16*1000*1000, where you need to be careful not to use them in places where the multiplication can cause problems.)
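
For example (a hypothetical sketch with made-up names, not from the answer), an unparenthesized multiplication goes wrong as soon as the macro is used as a divisor, while the pasted literal is a single token and is safe anywhere:

#define F_CPU_MUL    16*1000*1000UL      // no parentheses -- deliberately fragile
#define F_CPU_PASTED 16##000##000UL      // one token after pasting

// Expands to 1000000000UL / 16 * 1000 * 1000, i.e. 62500000UL * 1000 * 1000 -- wrong
unsigned long period_ns_bad  = 1000000000UL / F_CPU_MUL;

// Expands to 1000000000UL / 16000000UL == 62UL -- as intended
unsigned long period_ns_good = 1000000000UL / F_CPU_PASTED;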

Maybe something like this?

#define MHz(x) (1000000 * (x))
...
#define F_CPU MHz(16)

Also, I don't like #defines. Usually it's better to have enums or constants:

static const long MHz = 1000L * 1000;  // 1000L forces long arithmetic; plain 1000*1000 can overflow a 16-bit int
static const long F_CPU = 16 * MHz;
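
One caveat to hedge here (my addition, not the answer's): in C, a static const object is not an integer constant expression, so it cannot be used where one is required, such as a case label or a file-scope array bound; an enum constant can, provided the value fits in the target's int (which 16000000 will not on a 16-bit-int MCU):

static const long F_CPU_CONST = 16000000L;   // hypothetical names
enum { MHZ_COUNT = 16 };                     // small values always fit in int

// char log_tag[F_CPU_CONST / 1000000];      // error at file scope: not a constant expression
char log_tag[MHZ_COUNT];                     // fine: enum constants are constant expressions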

You could write the constant as the result of a calculation (16*1000*1000 in your example). Even better, you could define another macro, MHZ(x), and define your constant as MHZ(16), which makes the code a little more self-documenting, at the cost of some risk of name-space collisions.

// constants.h
#define Hz   1u              // 16 bits
#define kHz  (1000u  *  Hz)  // 16 bits
#define MHz  (1000ul * kHz)  // 32 bits

// somecode.h
#define F_CPU (16ul * MHz)   // 32 bits

Notes:

  • int is 16 bits on an 8-bit MCU.
  • 16-bit literals will get optimized down to 8-bit ones (with 8-bit instructions) whenever possible.
  • Signed integer literals are dangerous, particularly when mixed with bitwise operators, as is common in embedded systems. Make everything unsigned by default.
  • Consider using a naming convention or comments to indicate that a constant is 32 bits, since 32-bit arithmetic is very slow on most 8-bitters.
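
As a usage sketch (hypothetical names, loosely modeled on AVR-style UART setup, not part of the answer), the unsigned-long constants keep the whole computation in 32 bits even where int is 16 bits:

// somecode.c -- hypothetical use of the constants above
#define BAUD          (9600ul * Hz)
#define UART_DIVISOR  ((F_CPU / (16ul * BAUD)) - 1ul)   // 103 for 16 MHz, 9600 baud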

Another approach would be to use the ## preprocessor operator in a more generic macro:

#define NUM_GROUPED_4ARGS(a,b,c,d) (a##b##c##d)
#define NUM_GROUPED_3ARGS(a,b,c)   (a##b##c)

#define F_CPU NUM_GROUPED_3ARGS(16,000,000UL)

int num = NUM_GROUPED_4ARGS(-2,123,456,789); //int num = (-2123456789);
int fcpu = F_CPU; //int fcpu = (16000000UL);

This is somewhat WYSIWYG but not immune to misuse. E.g., you might want the compiler to complain about

int num = NUM_GROUPED_4ARGS(-2,/123,456,789);  //int num = (-2/123456789); 

but whether it does is compiler-specific: pasting 2 and / does not form a valid preprocessing token, which is undefined behavior, so some compilers reject it while others quietly produce (-2/123456789).

You can use scientific notation:

#define F_CPU 1.6e+007

Or:

#define K 1000

#define F_CPU (16.0*K*K)
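
A caveat with both of these (my note, not the answer's): 1.6e+007 and (16.0*K*K) are floating-point constants of type double, which may drag in software floating point on FPU-less MCUs and cannot be used in #if arithmetic. A minimal sketch of forcing an integer type, assuming the value is exactly representable:

#define F_CPU ((unsigned long)1.6e7)   // 16000000ul; a floating constant that is the
                                       // immediate operand of a cast still qualifies as
                                       // an integer constant expression in C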

It might help readability to define the constant as:

#define F_CPU_HZ 16000000UL

That way you know what kind of data is in it. In our software we have a few peripherals that require assorted prescalers to be set, so we have #defines like this:

#define SYS_CLK_MHZ    (48ul)                  // ul keeps the arithmetic 32-bit where int is 16 bits
#define SYS_CLK_KHZ    (SYS_CLK_MHZ * 1000ul)
#define SYS_CLK_HZ     (SYS_CLK_KHZ * 1000ul)
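
For example (hypothetical peripheral values, not from the answer), derived settings then read in the same units the datasheet uses:

#define TICK_RATE_HZ   (1000ul)                            // 1 ms system tick
#define TIMER_RELOAD   (SYS_CLK_HZ / TICK_RATE_HZ - 1ul)   // 47999 for a 48 MHz clock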