Question

I had a discussion with my team lead, who told me that using the uintX_t types is very problematic and causes performance problems, and I can't understand why. Using uint8_t and uint16_t is the same as using unsigned char and unsigned short, so I don't see how those types would cause performance problems. Likewise, uint64_t is like unsigned long. Maybe performance problems could occur with a 128-bit type such as uint128_t, but is that correct, or am I missing something?

Update: I know that unsigned char and unsigned short are not guaranteed to be 8 and 16 bits on all platforms; I just used the classic values.

No correct solution

OTHER TIPS

Guaranteeing the size is the main purpose.

These types aren't about performance. They exist to guarantee that integer sizes are the same across different systems.

For example, when you use int32_t you can be sure it is 32 bits wide wherever the code compiles, but you cannot be sure about the size of int.

The problem is that using these exact-width types may affect performance. The int_fastX_t types can reduce this cost: they only guarantee a minimum width, leaving the compiler free to pick a wider type that is faster on the target machine.

For example, the compiler can use a 32-bit int for an int_fast16_t on a 32-bit machine.

It's a "how long is a piece of string"-type question.

Whenever anyone makes claims of this nature that you care about, ask to see the code they've used and the results of their benchmarks. Then you can judge whether the benchmarks apply to your case, and perhaps run them yourself.

In other words, the claim is not worth very much without benchmarks that reflect your environment and your actual use of those types. It could be that your team lead has done thorough profiling of the code base in question; it could also be that he simply "thinks" that uintX_t would be slower. We have no way of knowing which it is.

You may have a performance issue if you are using uint8_t on a 32-bit processor and you don't actually need the wraparound (modulo-256) behaviour when the value exceeds 255.

Why: the compiler may need to apply a mask to the value before processing or using it.

For example, suppose the 8-bit value is stored in a 32-bit register and you need to compare it. If your processor has no instruction that compares only the low 8 bits of a register, the compiler must apply the mask 0x000000FF before doing the comparison.

That's why you also have types such as int_least8_t and uint_fast8_t. Take a look at the stdint.h page to see all the available types.

They are just types that make code cross-platform by guaranteeing their size. There is nothing about them in particular that would cause a performance issue. The problem may be how they are used (which would be no different for unsigned int, unsigned short, and unsigned char).

In short, your team lead is wrong, and the claim is likely a direct consequence of the Peter Principle.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow