Question

I am writing a scripting language interpreter that is very similar to C, with the difference that there are fewer numeric data types: only int (representing an integer) and real (obviously a real number...).

The interpreter is a sort of "virtual machine" (yes, the scripting language is compiled into a byte-code stream), and now I face the decision of which C data types to use for the scripting language's numeric types in the virtual machine. Right now I am planning to use int64_t for the ints and long double for the reals; however, I would like to hear your opinion on whether these two, being pretty "big", will cause any performance issues, and whether there will be problems if I need to run the interpreter on embedded hardware that has only a 32-bit architecture.
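
For concreteness, here is a minimal sketch of the kind of value cell this choice implies; the names Value, ValueTag, and the field names are hypothetical, not from my actual code:

```c
#include <stdint.h>

/* Hypothetical VM value cell: a tag plus a union of the two numeric
 * types under consideration. On a 32-bit target, sizeof(Value) is
 * dominated by the long double member (often 12 or 16 bytes with
 * alignment), so every stack slot and every copy pays for the widest
 * type even when the script only uses small ints. */
typedef enum { VAL_INT, VAL_REAL } ValueTag;

typedef struct {
    ValueTag tag;
    union {
        int64_t     i;  /* script "int"  */
        long double r;  /* script "real" */
    } as;
} Value;
```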


Solution

Well, yes, of course using very large types will have a (massive) impact on the execution cost of your language.

Many embedded platforms don't have floating-point arithmetic in hardware, and those that do often support only float, not double. The same goes for integers: many platforms are still 32-bit only.

You would have to fall back on software emulation of these operations, which makes execution costly both in terms of speed and in the amount of code needed, since the compiler replaces each 64-bit or long double operation with a call to a runtime helper.
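
One common way to keep the interpreter core portable is to make the numeric widths a compile-time choice. A minimal sketch, assuming a hypothetical VM_EMBEDDED build flag:

```c
#include <stdint.h>

/* Build-time selection of the VM's numeric types, so the same core
 * can use narrower types on 32-bit or FPU-less targets. The macro
 * name VM_EMBEDDED is an assumption for illustration. */
#ifdef VM_EMBEDDED
typedef int32_t vm_int;   /* native word size on a 32-bit CPU            */
typedef float   vm_real;  /* single precision: often the only FPU format */
#else
typedef int64_t     vm_int;
typedef long double vm_real;
#endif
```

Built this way, desktop builds keep the full range and precision while embedded builds stay on types the hardware handles natively; the trade-off is that scripts may observe different overflow and rounding behaviour across targets.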
