Question

In "What naming guidelines do you follow?", the author says:

Also I prefer to code using hungarian notation from Charles Simonyi.

I've run into several programmers who still prefer to use Hungarian, mostly of the Petzold/Systems Hungarian flavor. Think dwLength = strlen(lpszName).

I've read Making Wrong Code Look Wrong, and I understand the rationale for Apps Hungarian, where domain-type information is included in the variable names. But I don't understand the value in attaching the compiler type to the name.

Why do programmers still persist in using this style of notation? Is it just inertia? Are there any benefits that outweigh the decreased readability? Do people just learn to ignore the decorators when reading the code, and if so, how do they continue to add value?

EDIT: A lot of answers are explaining the history, or why it is no longer relevant, both of which are covered in the article I cited.

I'd really like to hear from anyone out there who still uses it. Why do you use it? Is it in your standard? Would you use it if it wasn't required? Would you use it on a new project? What do you see as the advantages?


Solution

At the moment I still use Hungarian for exactly three reasons, judiciously avoiding it for everything else:

  1. To be consistent with an existing code base when doing maintenance.
  2. For controls, e.g. "txtFirstName". We often need to distinguish between (say) "firstName" the value and "firstName" the control. Hungarian provides a convenient way to do this. Of course, I could type "firstNameTextBox", but "txtFirstName" is just as easy to understand and is fewer characters. Moreover, using Hungarian means that controls of the same type are easy to find, and are often grouped by name in the IDE.
  3. When two variables hold the same value but differ by type. For example, "strValue" for the value actually typed by the user and "intValue" for the same value once it has been parsed as an integer.
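Reasons 2 and 3 above might look like this in practice. This is a minimal, hypothetical sketch (the names txtAge, strValue, and intValue are illustrative, not from any real API):

```javascript
// Reason 2: "txtAge" names the control; the plain name would name the value.
const txtAge = { value: "42" }; // stands in for a TextBox-like control

// Reason 3: the same value held twice, differing only by type.
const strValue = txtAge.value;           // raw string typed by the user
const intValue = parseInt(strValue, 10); // same value parsed as an integer

console.log(typeof strValue, typeof intValue); // -> string number
```

The prefixes make it obvious at the call site which of the two representations a given line is working with.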

I certainly wouldn't want to set up my ideas as best practice, but I follow these rules because experience tells me that occasional use of Hungarian benefits code maintainability at little cost. That said, I constantly review my own practice, so may well do something different as my ideas develop.


Update:

I've just read an insightful article by Eric Lippert, explaining how Hungarian can help make wrong code look wrong. Well worth reading.

OTHER TIPS

I'm not a huge fan of Hungarian notation, but think of it this way:

  • It's faster to find a reference to a TextBox in your code: just type "txt" in your search box.

Now imagine the opposite, where every element has its own unrelated name. It would be slower to find what you're looking for, right?

The same goes for "ddl" when we want to refer to a DropDownList. Easier, isn't it? :)

People won't spend as much time hunting for the element they're after.

The prefixes mean nothing to the compilers of modern languages like C#, but they remain useful (readable) to human beings.

Apps Hungarian (tags to denote semantic properties of objects that can't be expressed through the type system) was a reasonable way to deal with some common errors when using the weakly-typed languages of the early 1980s. It serves little purpose in today's strongly-typed languages.
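The classic illustration of Apps Hungarian is tagging strings by trust level rather than by compiler type. A hypothetical sketch (the "us" = unsafe, "s" = safe-encoded convention and the encode helper are illustrative, following the scheme described in Making Wrong Code Look Wrong):

```javascript
// "us" = unsafe raw user input; "s" = safe, HTML-encoded string.
// The tag records a property the type system can't: both are just strings.
function encode(usInput) {
  // Convert an unsafe string into a safe one by escaping HTML metacharacters.
  return usInput
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

const usName = '<b>Bobby</b>'; // raw user input, must not be emitted as-is
const sName = encode(usName);  // safe to write into the page
// writePage(sName)  -> reads correctly at a glance
// writePage(usName) -> looks wrong: a "us" value reaching output is suspect
```

The point is purely visual: any assignment of the shape `sX = usY` without an intervening `encode` looks wrong even before you trace the data flow.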

Systems Hungarian (tags to redundantly denote an object's declared type) has never served any purpose except to impose a superficially uniform appearance on a code base. It was created and propagated by non-technical managers and inexperienced programmers who misunderstood the intent of Apps Hungarian, and who believed that code quality could be enhanced by complex coding guidelines.

Both styles originated within Microsoft. These days, Microsoft's naming conventions categorically say "Do not use Hungarian notation."

If you come up with the right system of prefixes, you could spread the wear and tear of your keys, which would reduce spending on replacement keyboards.


I suppose I could expand on this. I have used SH at my work place for the last, oh, ten years or so (because it's in our Standard). It has never helped solve a problem.

On the other hand, I have used unadorned but well-named variables in my 'home code' for almost equally as long. I have never missed SH.

In both places, I have written protocol code that requires fixed-size primitive types. This is the most beneficial use case I can think of for SH. As far as I can tell, it hasn't helped when written with SH, and it hasn't hindered me when written without SH.
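For concreteness, protocol code of the kind described might look like this. A minimal sketch with an assumed message layout (a 2-byte big-endian length followed by that many payload bytes; the wLength/abPayload names follow the Systems Hungarian convention for a 16-bit word and an array of bytes, and the same code works just as well with plain names):

```javascript
// Parse one message from an ArrayBuffer: [ u16 length | payload bytes... ].
function parseMessage(buf) {
  const dv = new DataView(buf);
  const wLength = dv.getUint16(0, false);            // 16-bit "word", big-endian
  const abPayload = new Uint8Array(buf, 2, wLength); // "array of bytes"
  return { wLength, abPayload };
}

const buf = new Uint8Array([0x00, 0x03, 1, 2, 3]).buffer;
const msg = parseMessage(buf); // wLength = 3, payload = [1, 2, 3]
```

Whether the prefixes carry their weight here is exactly the question at issue: the fixed sizes are already enforced by DataView and Uint8Array, not by the names.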

So, in conclusion, the only difference I can see is the wear and tear on your keyboard.

I actually started using SH in new code I wrote this month.

My assignment involved rewriting some Perl code in JS so it could be moved to the client side of our web application. In Perl, SH is generally not required because of sigils ($string, @array, %hash).

In JavaScript, I found SH to be invaluable to track the types of data structures. For example,

var oRowData = aoTableData[iRow];

This retrieves an object from an array of objects using an integer index. Adhering to this convention saved me quite some time looking up data types. Plus, you can overload succinct variable names (oRow vs. iRow).
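A slightly fuller sketch of the same convention (the data and field names are illustrative: "ao" = array of objects, "i" = integer, "o" = object, "s" = string):

```javascript
// Table data as an array of row objects.
const aoTableData = [
  { sName: "Ada",  iAge: 36 },
  { sName: "Alan", iAge: 41 },
];

const iRow = 1;                     // integer index into the array
const oRowData = aoTableData[iRow]; // object pulled out by that index
const sName = oRowData.sName;       // string field of that object
```

With no static types and no IDE inference to lean on, every prefix answers "what shape is this?" at the point of use, and oRow vs. iRow keeps the two roles distinct even with terse names.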

tl;dr: SH can be great when you have complex code in a weakly typed language. But if your IDE can track types, prefer that.

I am also curious to see the rationale. We know why it was used in the past: lack of IDE support for type information. But now? Simply put, I think it is a tradition. C++ code always looked like this, so why change things? Besides, when you build on top of previous code that used Hungarian notation, it would look quite strange if you suddenly stopped using it...

The Systems Hungarian notation was in fact a bit of a cock-up, a misunderstanding of the term 'type'. The systems developers took it literally as the compiler type (word, byte, string, ...) as opposed to the apps domain type (row index, column index, ...).

But I guess that every developer goes through several phases of style that seem like a great idea at the time (and prefixing the type does seem like a good idea to a novice) before falling into the pitfalls (changing a variable's type, inventing new, meaningful prefixes, etc.). So I guess there's inertia: from developers who never realise why it's a poor choice, from developers stuck with coding standards that mandate the practice, and from people using <windows.h>. It would be too costly for Microsoft to get rid of the prefix notation (which is incorrect in many places anyway: WPARAM?).

There is one thing people are missing with Hungarian. Hungarian notation actually works GREAT with autocomplete.

Say you have a variable, and the name is intHeightOfMonster.

Say you forget the name of the variable.

It could be heightOfMonster or MonsterHeight or MeasurementMonsterHeight

You want to be able to type a letter and have autocomplete suggest some variable names.

Knowing that the height is an int, you just type "i" and voilà.

That saves time.

Licensed under: CC-BY-SA with attribution