In functional programming, it is considered bad practice (at least from my observations) to use state changes. Since computers operate in an imperative-language-like manner (performing one operation at a time, changing the state of RAM), isn't functional programming contradictory to the way a computer operates?

Note: I'm not trying to say that a functional language cannot exist, because obviously there are several out there that do exist and work very well.

Further note: Setting aside that difference in approach, wouldn't a functional language be inherently extremely slow, given the number of operations involved?


Solution

All programming languages and programs are generalized abstractions. We simulate those abstractions on a machine that does things its own way. Were we confined to the way a computer works, we would all be using languages like C or Forth, which are "close to the metal" languages. Computers are general-purpose machines; we are free to create whatever paradigm over the processor's instruction set that we see fit. It is this quality that makes computers so powerful: they can be adapted to any problem domain.

In other words, functional abstractions properly belong to the programming language being used, not the underlying machine that executes it.

In the 1980s, Xerox and MIT built machines that could execute Lisp, a functional language, directly. We don't do that anymore because we have processors fast enough to translate from one language (the one you write programs in) to another (the processor's instruction set), and it's better to have a general-purpose machine that can execute any programming language.

Other tips

Programming languages have two purposes. The first is easy to see, "programming languages have to tell a computer what to do." The second is more subtle: "programming languages have to allow a programmer to convey to the computer what they want it to do, efficiently." In theory, one could always program in assembly (or even machine code), and tell a computer exactly what to do. In practice, we choose other languages because they are better tools for expressing what we want the computer to do.

Even the purest functional programming languages, such as LISP and Haskell, have a modicum of imperative behavior in that something tells the computer to evaluate one of the pure functions and then do something to the state of the machine, such as print the result to the screen. They push this all the way to the outer edge of the language, such that 99.9999999% of programming is done in a pure functional world, but that edge is always there, simply because the program has to run on a computer, and computers currently use imperative logic.
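As a minimal sketch of that split (written in C++ rather than Haskell to keep one example language throughout; the function name is made up), the computation lives in a pure function and only the outer edge touches the machine's state by printing:

#include <iostream>

// A "pure" function: its result depends only on its arguments
// and it changes no outside state.
int add(int a, int b) {
    return a + b;
}

int main() {
    int result = add(1, 2);       // the pure core does the computation
    std::cout << result << '\n';  // the imperative "edge": I/O changes the machine's state
    return 0;
}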

This is where the translation steps of compiling or interpretation come in. We have to turn a program written in some language into something the computer can do, and the only hook we get from the computer is: "the computer guarantees that, when your program is run, it will start executing instructions at address XXXXX." Thus, whether the language is C++, LISP, or even assembly, the code needs to be mapped into something such that, when the computer starts executing at address XXXXX, the desired result occurs.

This translation step is where the speed differences between languages come into play, and where optimization plays a big part. For example, everyone can look at the C++ statement int x = 1 + 2; and see that we want to run opcodes along the lines of (stack-based pseudo-bytecode):

load-constant 1
load-constant 2
add
store-as x

And we might even optimize this, because we see that the constants can't change:

load-constant 3
store-as x

Now consider the equivalent Lisp phrase (let x (+ 1 2)). If I don't optimize, it looks exactly as inefficient as you think it should:

load-constant '+'  # yes, the + operator is handled as a constant function in LISP
load-constant 1
load-constant 2
call-function 2 # call a 2 argument function, +, on the two arguments, 1 and 2
store-as x

We have to explicitly call a function, which often costs an order of magnitude more than merely doing an addition. It looks like functional languages have to be slow! However, an optimizer can realize that + is a constant function that doesn't change within the scope of the program, so it can "inline" its behavior to get

load-constant 1
load-constant 2
add
store-as x

This looks familiar; we can even continue the optimization to get

load-constant 3
store-as x

So, for our extremely contrived example, functional programming literally runs as fast as imperative coding. So what is the difference between them? Why do we bother with functional programming?

As it turns out, the reason for functional programming is the second purpose of programming languages: to allow a programmer to convey meaning. There are cases where functional programming is actually a more effective communication mechanism between programmer and computer. This is especially true in asynchronous situations, such as those that show up in GUIs. In such cases, one is often forced to rely on tricks and workarounds to map such behaviors into imperative phrasings, while the equivalent functional phrasing flows naturally. This isn't always the case (especially in the case of hardcore mathematics), but in an increasing number of cases, it's a more effective tool. This is why many imperative languages are starting to add little fragments of functional programming to help them better convey meaning (see C++'s lambdas).
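For instance (a small sketch of my own, not part of the original answer), a C++ lambda lets you hand a comparison policy to std::sort as a value, rather than spelling out the sorting steps yourself:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values{3, 1, 4, 1, 5};

    // The lambda is a small functional fragment inside an imperative language:
    // the comparison policy is passed as a value instead of written as a loop.
    std::sort(values.begin(), values.end(),
              [](int a, int b) { return a > b; });  // sort in descending order

    for (int v : values) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
    return 0;
}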

Now there's a subtlety here. I mentioned that using the "wrong" language means you have to find tricks or workarounds. This is true both for functional and imperative languages. The more workarounds you have to use, the harder it turns out to be to optimize the code. Functional languages often have very high standards for "pure" functions, which capture their side effects explicitly. This can often allow the compiler to make sweeping optimizations which would have been markedly harder to prove if you had written them in imperative form. In theory the optimization can be applied in both cases, but it can be harder for a compiler to see the optimization if it is ill-fit for the language at hand. Likewise, attempting to model highly imperative code in functional languages often calls for strange tools like monads, which can strain the optimizer as it tries to find the best way to evaluate everything.

As a case study, I worked with a language designed to be used by analysts, not coders. Accordingly, it was unreasonable to expect them to learn a great deal about how to optimize their code (they needed to spend their brainpower analyzing, not coding!). The compiler really needed to do it for them. We actually found that a functional programming language was a better match for them. They would often "call a function" ten thousand times or more, when the result never changed. They could have cached the result once, but the mentality of caching did not fit well with the flow of their process, so they simply did not do so. Remember, a language needs to work for your real customer, not your ideal customer.

Accordingly, we gave them a functional language with strict purity rules on the functions. This gave us, as the developers making the language, great power to identify values which could be cached, and to invalidate those cached values as they changed. The result was a language which, despite being "functional" and thus further from the metal, actually ran 1000x faster, because our optimizer was able to see more opportunities for optimization by leveraging the strictness of pure functions.
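To illustrate the kind of transformation that purity makes safe (this is a hand-written sketch with made-up names, not the actual optimizer), a pure function's results can be cached by its inputs, because the same inputs are guaranteed to produce the same output:

#include <map>
#include <utility>

// Hypothetical expensive pure function: for the same inputs it always
// returns the same output and changes no outside state.
double analyze(int a, int b) {
    // ... imagine a costly computation here ...
    return a * 0.5 + b;
}

// Because analyze is pure, its results may be cached safely.
// A compiler or runtime can perform this rewrite automatically.
double analyzeCached(int a, int b) {
    static std::map<std::pair<int, int>, double> cache;
    auto key = std::make_pair(a, b);
    auto it = cache.find(key);
    if (it != cache.end()) {
        return it->second;  // reuse the previously computed result
    }
    double result = analyze(a, b);
    cache.emplace(key, result);
    return result;
}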

Licensed under: CC-BY-SA with attribution