This is not a problem of representation, language, or standard library, but of algorithm. If you have a code generator, then why not change the generated code to use the best (= shortest with the required precision) representation? That's what you do when you write code by hand.
In the hypothetical put_constant(double value) routine you can check which value you have to write:

- Is it an integer? Don't bloat the code with std::fixed and std::setprecision, just cast to an integer and append a dot.
- Try to convert it to a string with default settings, then convert it back to double; if nothing changed, the default (short) representation is good enough.
- Convert it to a string with your actual implementation and check its length: if it's more than N characters (see later), use another representation, otherwise just write it.
A possible (short) representation for floating point numbers with many digits is their raw memory representation. With this the overhead is fixed and the length never changes, so you should apply it only to very long numbers. A naive example of how it may work:
#include <cstdint>
#include <cstring>

// Reinterpret a 64-bit pattern as a double. memcpy is used instead of
// a reference cast because casting through (double&) breaks strict aliasing.
static double L2D(std::uint64_t bits)
{
    double d;
    std::memcpy(&d, &bits, sizeof d);
    return d;
}

int main()
{
    // 2.2 = in memory it is 0x400199999999999A
    double f1 = L2D(0x400199999999999A);
    double f2 = 123456.1234567891234567;
    (void)f1;
    (void)f2;
    return 0;
}