Question

In Eric Lippert's article What's Up With Hungarian Notation?, he states that the purpose of Hungarian Notation (the good kind) is to

extend the concept of "type" to encompass semantic information in addition to storage representation information.

A simple example would be prefixing a variable that represents an X-coordinate with "x" and a variable that represents a Y-coordinate with "y", regardless of whether those variables are integers or floats or whatever, so that when you accidentally write xFoo + yBar, the code clearly looks wrong.

But I've also been reading about Haskell's type system, and it seems that in Haskell one can accomplish the same thing (i.e. "extend the concept of type to encompass semantic information") using actual types that the compiler will check for you. So in the example above, xFoo + yBar in Haskell would actually fail to compile if you designed your program correctly, since the two would be declared as incompatible types. In other words, it seems that Haskell's type system effectively supports compile-time checking equivalent to Hungarian Notation.
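For instance, here is a minimal sketch of what I mean, using newtypes to give each coordinate axis its own type (the names XCoord, YCoord, addX, and so on are my own invention for illustration):

    -- Distinct newtypes for each axis: the compiler rejects mixing them.
    newtype XCoord = XCoord Double deriving (Show)
    newtype YCoord = YCoord Double deriving (Show)

    addX :: XCoord -> XCoord -> XCoord
    addX (XCoord a) (XCoord b) = XCoord (a + b)

    xFoo :: XCoord
    xFoo = XCoord 3.0

    yBar :: YCoord
    yBar = YCoord 4.0

    -- addX xFoo xFoo   -- compiles
    -- addX xFoo yBar   -- type error: couldn't match YCoord with XCoord

The mistake that Hungarian Notation only makes visually suspicious is here an outright compile error.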

So, is Hungarian Notation just a band-aid for programming languages whose type systems cannot encode semantic information? Or does Hungarian Notation offer something beyond what a static type system such as Haskell's can offer?

(Of course, I'm using Haskell as an example. I'm sure there are other languages with similarly expressive (rich? strong?) type systems, though I haven't come across any myself.)


To be clear, I'm not talking about annotating variable names with the data type, but rather with information about the meaning of the variable in the context of the program. For example, a variable may be an integer or float or double or long or whatever, but maybe the variable's meaning is that it's a relative x-coordinate measured in inches. This is the kind of information I'm talking about encoding via Hungarian Notation (and via Haskell types).
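To make that concrete, here is one sketch of how such a meaning might be encoded in a Haskell type, using a phantom type parameter for the unit of measure (the names Inches, Metres, and RelX are hypothetical):

    -- Sketch only: the unused ("phantom") type parameter tags the unit.
    data Inches
    data Metres

    newtype RelX unit = RelX Double deriving (Show)

    addRelX :: RelX u -> RelX u -> RelX u
    addRelX (RelX a) (RelX b) = RelX (a + b)

    dxInches :: RelX Inches
    dxInches = RelX 2.5

    dxMetres :: RelX Metres
    dxMetres = RelX 0.1

    -- addRelX dxInches dxInches   -- compiles
    -- addRelX dxInches dxMetres   -- type error: units don't match

Both "relative x-coordinate" and "measured in inches" become part of the type itself, rather than part of the variable's name.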
