Question

Whenever I want to learn a high-level language, part of me says, "I should probably learn the lower-level language it's built upon to really master it." For example,

Ruby    => C
Clojure => Java
Elixir  => Erlang

My experience with Ruby and C makes me think that I've got it backwards. I learned Ruby first, and I think it was a good introduction to a lot of concepts. It provided some general context that made learning C a lot easier than it would have been otherwise. Granted, Ruby was also the first language I learned in depth, so some of that might be chalked up to familiarizing myself with computing concepts in general, rather than any language-specific experience.

I think it's good to learn what's going on under the covers, even if that's a layer you're not regularly working in. But is it generally better to take a top-down or a bottom-up approach when learning new languages?


Solution

This may seem like a cop-out, but honestly... do both, if you can.

Higher-level languages are very good for teaching high-level concepts; you can accomplish a lot very quickly, and learn good practices and design patterns along the way. Learning a high-level language can teach you to look at problems in a big-picture way and break them down into composable parts. I believe that with many high-level languages (declarative languages, functional languages, logic programming), it's easier to focus on the end goal, along with the patterns and decomposition required to reach that goal.

Lower-level languages are great for understanding what's going on at a hardware level, and I think they are indispensable. The reason is this: high-level languages are generally built upon lower-level ones. Learning a low-level language lets you see past the abstractions and solve problems caused by leaks in those abstractions; such languages are excellent for teaching fundamentals, and they also teach a programmer to be careful (and which gotchas to look for that a higher-level language might mask, such as memory management).
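
For instance, here is a minimal C sketch (duplicate is an illustrative name, not anything from the question) of the bookkeeping a garbage-collected language like Ruby does for you: every allocation must be paired with exactly one free.

/* Manual memory management: the caller owns what duplicate() returns. */
#include <stdlib.h>
#include <string.h>

char *duplicate(const char *src) {
    char *copy = malloc(strlen(src) + 1); /* may fail: must check */
    if (copy != NULL)
        strcpy(copy, src);
    return copy;
}

int main(void) {
    char *s = duplicate("hello");
    if (s == NULL)
        return 1;
    /* ... use s ... */
    free(s); /* forget this and you leak; call it twice and you corrupt the heap */
    return 0;
}

In Ruby, String#dup involves none of this ceremony; the garbage collector reclaims the copy whenever it becomes unreachable.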

I would recommend getting your feet wet with a scripting language - something dynamically typed and interpreted - as well as a lower-level, statically typed, compiled language. (Many languages these days can straddle all of those categories.) Both experiences will make you a better programmer, and together they are worth more than the sum of their parts; you'll be able to decompose large problems into a collection of very small ones, and solve those small problems easily.

OTHER TIPS

Generally!?

It probably depends. The biggest motivator I see for preferring one end of the spectrum over the other, for self-directed study, is interest.

Trudging through C and writing console apps will be of little interest to someone who ultimately wants to write web application front ends. Or even web app back ends.

On the other hand, if making low-level hardware "do stuff" is what floats your boat, how motivating will it really be to work through your standard PHP web stack tutorials?

Do what interests you first. Learn the lower and higher levels over time to better understand the level you're actually interested in.

I take the opposite view and would suggest a lower-level language like C as a first programming language. Yes, there is a slightly steeper learning curve, but there is a reason for that, and it will help you with any other language you take up from then on.

In a lower-level language, you (not some opaque class implementation of X) are responsible for managing each byte of memory, protecting against reading/writing beyond the end of your allocated space (automatic or dynamic storage), validating all your input/output, locating each needed position within arrays, managing line endings, and a thousand other fundamental aspects of programming that higher-level languages work hard to hide from you.
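
As a small sketch of that bookkeeping, here is how C makes you handle two items from the list above by hand: staying within a fixed-size buffer, and managing line endings yourself.

#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[64];                         /* fixed-size storage: its bounds are our problem */
    if (fgets(buf, sizeof buf, stdin)) {  /* fgets will not write past sizeof buf bytes */
        buf[strcspn(buf, "\r\n")] = '\0'; /* strip the trailing line ending ourselves */
        printf("read: %s\n", buf);
    }
    return 0;
}

A line like gets.chomp in Ruby performs the same task, with the bounds checking and newline handling hidden inside the runtime.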

By learning to code at a fundamental level and developing good habits for input/output validation, file handling, memory management, and so on, your transition to any language is made easier and shorter, because you will understand how the higher-level paradigms are implemented and how each of those fundamental aspects of programming works behind the scenes.

In the continuum of languages from machine code, to assembly, to C, to object-oriented languages such as C++ and Java, C occupies a unique position: it provides many of the features of a higher-level language while still allowing complete hardware-level control.
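
To illustrate that dual nature (an illustrative sketch, not anything from the answer), one short C program can use structured abstractions like structs and functions while also reinterpreting an integer's raw bytes to probe the machine's byte order:

#include <stdio.h>

struct point { int x, y; };   /* a modest higher-level abstraction */

int is_little_endian(void) {
    unsigned int probe = 1;
    /* Hardware-level view: reinterpret the integer's storage as raw bytes. */
    return *(unsigned char *)&probe == 1;
}

int main(void) {
    struct point p = { 3, 4 };
    printf("point (%d, %d); little-endian: %s\n",
           p.x, p.y, is_little_endian() ? "yes" : "no");
    return 0;
}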

While others may suggest you can get more done, more quickly, by starting with a higher-level language, you will never have a better opportunity to learn programming from a fundamental level than by starting at the lower end of the scale. Moving up from lower- to higher-level languages will give you a much better understanding of what the higher-level languages provide, clear insight into their limitations, and, most importantly, why those limitations exist.

There is absolutely no point in learning C first to understand how Ruby works.

Learning how the interpreter works at a machine level may be interesting for mastery-level knowledge, but that doesn't strictly require knowledge of C, and it is definitely not useful for a beginner in Ruby.

In the case of Clojure and Elixir, learning about the underlying environments (the JVM and BEAM) is useful because they strongly influence the way the languages work, but again, it's not really necessary to learn Java and Erlang for that.

That said, from a general programming-skill viewpoint, learning a low-level language like C is useful for gaining a better understanding of how computers work, or as an escape hatch when your high-level language is inadequate for a task. But this is an additional skill to master, not a prerequisite to learning the high-level language.
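
That escape hatch is quite concrete in Ruby's case: the interpreter exposes a C API for native extensions. The sketch below assumes Ruby's standard extension toolchain (ruby.h plus an extconf.rb built with mkmf); FastMath and sum_to are hypothetical names used only for illustration.

#include <ruby.h>

/* A hypothetical hot loop rewritten in C for speed. */
static VALUE sum_to(VALUE self, VALUE limit) {
    long n = NUM2LONG(limit);
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    return LONG2NUM(total);
}

/* Ruby looks for Init_<extension name> when it requires the library. */
void Init_fastmath(void) {
    VALUE mod = rb_define_module("FastMath");
    rb_define_module_function(mod, "sum_to", sum_to, 1);
}

Once compiled, Ruby code could call FastMath.sum_to(1_000_000) with the loop running at C speed, while the rest of the program stays in Ruby.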

Licensed under: CC-BY-SA with attribution