Question

I was studying uC/OS and read this article:

Because different microprocessors have different word lengths, the port of μC/OS-II includes a series of type definitions that ensures portability. Specifically, μC/OS-II’s code never makes use of C’s short, int, and long data types because they are inherently non-portable. Instead, I defined integer data types that are both portable and intuitive, as shown in listing 1.1. Also, for convenience, I have included floating-point data types even though μC/OS-II doesn’t make use of floating-point. The following is listing 1.1:

typedef unsigned char BOOLEAN;
typedef unsigned char INT8U;
typedef signed char INT8S;
typedef unsigned int INT16U;
typedef signed int INT16S;
typedef unsigned long INT32U;
typedef signed long INT32S;
typedef float FP32;
typedef double FP64;
#define BYTE INT8S
#define UBYTE INT8U
#define WORD INT16S
#define UWORD INT16U
#define LONG INT32S
#define ULONG INT32U

My questions are:

1- What does the writer mean by word length (the first bold words in my question body)?

2- Why are the short, int, and long data types inherently non-portable?

3- Is typedef a microprocessor directive, and if it is, what is its function?

4- Can I write typedef unsigned char (anything) instead of typedef unsigned char INT8U;

5- Why did the author write typedef unsigned char INT8U; and then #define UBYTE INT8U? Can't I use typedef unsigned char UBYTE; directly?

6- There is a double use of typedef unsigned char: one of them is typedef unsigned char INT8U; and the other is typedef unsigned char BOOLEAN;. Why did he do that?


Solution

1- What does the writer mean by word length

A word is a fundamental unit of memory, like a page (there is a separate article on words too, which I won't regurgitate here). The significance to C is, as your author says, that the word size is not always the same; it is determined by the hardware. This may be one reason the C standard doesn't dictate the literal size of the basic types; the most obvious example to contemplate is the size of a pointer, which will be 4 bytes on 32-bit systems and 8 bytes on 64-bit systems, reflecting the size of the address space.
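For instance, a minimal sketch you could compile on a hosted system to see this for yourself (the exact numbers printed depend entirely on your platform and compiler):

#include <stdio.h>

int main(void)
{
    /* Typical output: 4/4/4 on a 32-bit system, 4/8/8 on a 64-bit Linux system. */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}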

2- Why are the short, int, and long data types inherently non-portable?

More accurately: they're as portable as C itself, but their sizes are not fixed by the standard (only minimum ranges are guaranteed), which can make them unsuitable for applications where a specific, fixed size is required.
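For example, the typedefs in listing 1.1 only give genuinely 16-bit and 32-bit types if int is 16 bits and long is 32 bits on the target compiler. A sketch of how such an assumption could be made explicit, assuming a C11 compiler (on a typical desktop, where int is 32 bits, the first assertion would deliberately fail at compile time):

#include <limits.h>

/* The standard only guarantees minimum widths:
   short and int are at least 16 bits, long is at least 32 bits. */
_Static_assert(sizeof(unsigned int)  * CHAR_BIT == 16,
               "these typedefs assume a 16-bit int");
_Static_assert(sizeof(unsigned long) * CHAR_BIT == 32,
               "these typedefs assume a 32-bit long");

typedef unsigned int  INT16U;   /* only correct if the asserts above hold */
typedef unsigned long INT32U;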

3- Is typedef a microprocessor directive, and if it is, what is its function?

No, it's not a processor directive, and it's not a preprocessor directive either; it's a C keyword handled by the compiler. It's a nice piece of syntactic sugar which lets you define an alias (a new name) for an existing type.
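A small sketch of what typedef does (the alias names here are made up for illustration):

/* INT8U is now another name for unsigned char; no new type is created. */
typedef unsigned char INT8U;

/* typedef works for any type, not just the basic ones. */
typedef INT8U *BytePtr;              /* pointer to unsigned char   */
typedef void (*TaskFn)(void *arg);   /* pointer to a task function */

INT8U  counter = 0;
TaskFn handler = 0;   /* same as: void (*handler)(void *) = 0; */

The compiler resolves the alias at compile time; there is no run-time cost.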

4- Can I write typedef unsigned char (anything) instead of typedef unsigned char INT8U;

Yep, that's the idea; the name after the base type is just an identifier you pick. Beware that the C standard doesn't even dictate the size of a char: it only guarantees that a char is at least 8 bits (CHAR_BIT >= 8). I've never heard of a mainstream implementation where it is anything but 8 bits, but such implementations do exist [as someone pointed out in the comments] -- some DSPs, for example, use 16-bit chars.
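So, for example, any valid identifier works as the alias; a sketch with an invented name OS_BYTE, plus a preprocessor guard on the char-size assumption:

#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes an 8-bit char"
#endif

typedef unsigned char OS_BYTE;   /* the alias name is entirely your choice */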

5- Why did the author write typedef unsigned char INT8U; and then #define UBYTE INT8U? Can't I use typedef unsigned char UBYTE; directly?

You could, yes. Possibly the author wanted to restrict the number of places where the underlying C type is spelled out: the typedefs map the compiler's basic types onto fixed-size names once, and the #defines then layer the friendlier BYTE/WORD/LONG names on top of those. Since #define is a pre-processor directive, the substitution happens before compilation even starts; in practice neither form changes the generated executable.
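A sketch of why the two forms end up interchangeable in use (building on the listing above):

typedef unsigned char INT8U;   /* the compiler records INT8U as an alias    */
#define UBYTE INT8U            /* the preprocessor textually replaces UBYTE */

/* After preprocessing, both declarations below read "INT8U a; INT8U b;"
   and the two variables have exactly the same type. */
INT8U a;
UBYTE b;

One practical difference: a typedef is seen by the compiler and obeys C's scoping rules, while a #define is a blind textual substitution everywhere after its definition.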

6- There is a double use of typedef unsigned char: one of them is typedef unsigned char INT8U; and the other is typedef unsigned char BOOLEAN;. Why did he do that?

Again, the use of typedefs is largely about "sugar"; they can make your code cleaner, easier to read, and (presuming they are done properly) more robust. "Boolean" is a math-derived CS term for a type that has only two meaningful values: zero (false) or non-zero (true). In theory it could be implemented with just one bit, but that is neither easy nor, in the end, efficient (there are no processors with 1-bit registers, so they would have to slice, dice, and fake it anyway).

Defining a "bool" or "boolean" type is common in C and indicates that the significance of the value is either true or false. It works well with, e.g., if (var) (true) and if (!var) (false), since C already evaluates that way (0 and NULL are the only values that will pass if (!var)). Using something like INT8U, on the other hand, indicates you are dealing with a value that ranges from 0 to 255 decimal, since it is unsigned. I think putting the U up front (UINT8) is the more common practice, but if you are used to the concepts it is reasonably clear, and of course the typedef/#define is not hard to check.
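A short illustration of the intent carried by the two names (the flag, counter, and OS_TRUE/OS_FALSE names here are invented for the example):

typedef unsigned char BOOLEAN;
typedef unsigned char INT8U;

#define OS_TRUE   1u
#define OS_FALSE  0u

BOOLEAN task_ready = OS_FALSE;   /* only ever true or false */
INT8U   retry_cnt  = 0;          /* a small counter, 0..255 */

void poll(void)
{
    if (!task_ready) {           /* reads naturally as a flag test */
        retry_cnt++;             /* reads naturally as arithmetic  */
        if (retry_cnt > 200u) {
            task_ready = OS_TRUE;
        }
    }
}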


About stdint.h

Integer types are the ones with the greatest range of variation, and in fact the ISO C standard does require that an implementation include definitions for various integer types with certain minimum sizes in stdint.h. These have names like int_least8_t. Of course, types with a real fixed size (not just a minimum) are needed for many things, and most common implementations do provide them. The C99 standard dictates that if they are available, they should be accessible via names following the pattern intN_t (signed) and uintN_t (unsigned), where N is the number of bits. The signed types are also specified as two's complement, so one can work with such values in all kinds of highly portable ways.
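For example, on a C99 (or later) hosted implementation:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t small = 255u;          /* exactly 8 bits, unsigned          */
    int32_t wide  = -123456789;    /* exactly 32 bits, two's complement */

    /* PRIu8 / PRId32 are the matching printf format macros from <inttypes.h>. */
    printf("small = %" PRIu8 ", wide = %" PRId32 "\n", small, wide);
    return 0;
}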

As a final note, while I'm not familiar with MicroC, I would not take that documentation as representative of C generally -- it is intended for a somewhat restrictive and specialized environment (a 16-bit int, implied by the typedefs, is unusual outside such environments, so if you ran that code elsewhere, INT16U could end up being 32 bits, etc.). I'd guess MicroC only conforms to ANSI C, which is the oldest and most minimal standard; evidently it has no stdint.h.

Licensed under: CC-BY-SA with attribution