Question

I've been thinking of some concepts underlying a new language. It was kind of a toy at first, but now I'm wondering whether it could really mean something. I'm posting this question to Stack Overflow to see if it's been done before, and if I can get any feedback, ideas, or other information.

I began thinking about this mainly after reading Jonathan Edwards's presentation on declarative programming. I then mixed it with some of my older ideas and with what I've seen in modern languages.

The main idea behind declarative programming is "what" vs. "how." But I've heard that phrase so many times that it's become like the word "interesting": it doesn't actually tell you anything, which is frustrating.

In Jonathan Edwards's version, though, he begins by emphasizing lazy evaluation. This has some interesting consequences, notably functional reactive programming (FRP). Here is an example of FRP with animation (using a syntax I made up):

x as time * 2 // time is some value representing the current time
y as x + (2 * 500)

new Point(x, y)
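As a rough sketch of the same idea in Haskell (the names Time, Behavior, and point are mine, following the classic FRP notion of a behavior as a function of time):

```haskell
type Time = Double

-- A behavior is a time-varying value; sampling it at a new time
-- recomputes every value that depends on it.
type Behavior a = Time -> a

x :: Behavior Double
x t = t * 2

y :: Behavior Double
y t = x t + 2 * 500

point :: Behavior (Double, Double)
point t = (x t, y t)
```

Sampling point at two different times yields two different points, with no explicit update code anywhere.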

So in this example, the values automatically change when their inputs change. In one of my favorite languages, D, there is a distinction between "pure" and "impure" functions. A pure function has no connection to the outside world and uses only other pure functions; otherwise it's impure. The point is that you can always trust a pure function to return the same value for the same arguments.
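Haskell makes the same distinction in its types; here is a minimal sketch (double and now are my own illustrative names):

```haskell
import System.CPUTime (getCPUTime)

-- Pure: no connection to the outside world, so the same argument
-- always produces the same result.
double :: Int -> Int
double x = x * 2

-- Impure: the IO in the type records the dependence on the outside
-- world (here, the CPU time consumed so far).
now :: IO Integer
now = getCPUTime

main :: IO ()
main = do
  t <- now
  print (double (fromIntegral t))
```

The compiler can freely reorder, cache, or inline calls to double, but not to now.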

I suppose a similar transitive principle applies here. Our impurity is time. Everything touched by time (x, and therefore y, and therefore new Point(x, y)) is impure. However, notice that (2 * 500) is pure. So this tells the compiler where its limits are. I think of it like simplifying a math expression with variables:

(x ^ 2) + 3x + 5
(4 ^ 2) + 3x + 5 = 16 + 3x + 5 = 21 + 3x = 3(7 + x)
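That simplification can be sketched as a toy partial evaluator: fold every pure subtree down to a constant, and leave anything touched by the unknown (the impurity) alone. This is my own illustration, not anything from the presentation:

```haskell
-- Expressions over integers with one unknown, Var (the "impurity").
data Expr = Lit Int | Var | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Show)

-- Fold pure subtrees to constants; subtrees containing Var survive.
simplify :: Expr -> Expr
simplify (Add a b) = combine Add (+) (simplify a) (simplify b)
simplify (Mul a b) = combine Mul (*) (simplify a) (simplify b)
simplify e         = e

combine :: (Expr -> Expr -> Expr) -> (Int -> Int -> Int)
        -> Expr -> Expr -> Expr
combine _ op (Lit m) (Lit n) = Lit (m `op` n)
combine ctor _ a b           = ctor a b
```

For x + (2 * 500), the pure subterm collapses to 1000 while the impure x remains, which is exactly where the compiler's limits lie.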

By telling the compiler what's pure and what's not, we can simplify our programs a lot.

Another point is eager, mutable data. Jonathan Edwards recognized the input as mutable and eager, but the output as functional and lazy. Basically, given new input, the program defines an atomic state change, and the output is then simply a function of the current state. If you want to see why this can be important, see the presentation. Input is impure; lazy evaluation helps define the atomic state change. Let's take a look at how a program would be written procedurally:

import std.stdio;

void main ()
{
    string input = "";

    writeln("Hello, world!");
    writeln("What's your name? ");

    input = readln();

    writefln("Hello, %s!", input);
    writeln("What's your friend's name? ");

    input = readln();

    writefln("Hello to you too, %s!", input);
}

In the syntax below, the bind keyword says that the following code is executed whenever the bound value changes (begin fires once, at startup). The mutable keyword marks a value as eager rather than lazy. Now let's look at how an "atomic state change" might represent the same program:

program:
    mutable step := 0

    bind begin:
        writeln("Hello, world!")
        writeln("What's your name? ")
        ++step

    bind readln() as input when step = 1:
        writeln("Hello, %s!", input)
        writeln("What's your friend's name? ")
        ++step

    bind readln() as input when step = 2:
        writeln("Hello to you too, %s!", input)

Now here we see something that could be made easier and more readable for the programmer. The first problem is the ugly step variable, which we must increment and test by hand each time. Here's what a new and improved version might look like:

program:
    bind begin:
        writeln("Hello, world!")
        writeln("What's your name? ")

    bind readln() as input:
        writeln("Hello, %s!", input)
        writeln("What's your friend's name? ")
        yield // On the next input, execution resumes here instead of at the top
        writeln("Hello to you too, %s!", input)
        halt

That's better. Not perfect, though. But if I knew the perfect answer, I wouldn't be here, right?
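For comparison, the "atomic state change" idea can be sketched as a pure Haskell transition function (Step and its constructors are hypothetical names of mine): given the current state and one line of input, it returns the next state and the output, and nothing else.

```haskell
data Step = AskName | AskFriend | Done
  deriving (Eq, Show)

-- One atomic state change: the output is purely a function of the
-- current state and the new input.
step :: Step -> String -> (Step, String)
step AskName   name = (AskFriend, "Hello, " ++ name ++ "!")
step AskFriend name = (Done, "Hello to you too, " ++ name ++ "!")
step Done      _    = (Done, "")
```

All the impurity (actually reading lines) lives outside this function, in a small driver loop.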

Here's a better example, using a game engine:

class VideoManager:
    bind begin: // Basically a static constructor, will only be called once and at the beginning
        // Some video set up stuff

    bind end: // Basically a static destructor
        // Some video shut down stuff

class Input:
    quitEvent     as handle // A handle is an empty value, but can be updated so code that's bound to it changes.
    keyboardEvent as handle(KeyboardEvent) // This handle does return a value though
    mouseEvent    as handle(MouseEvent)

    // Some other code manages actually updating the handles.

class Sprite:
    mutable x := 0
    mutable y := 0

    bind this.videoManager.updateFrame:
        // Draw this sprite

class FieldState:
    input  as new Input
    player as new Sprite

    bind input.quitEvent:
        halt

    bind input.keyboardEvent as e:
        if e.type = LEFT:
            this.player.x -= 2
        else if e.type = RIGHT:
            this.player.x += 2
        else if e.type = UP:
            this.player.y -= 2
        else if e.type = DOWN:
            this.player.y += 2
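The keyboard binding above has a nice purely functional reading: the sprite's position is just a fold over the stream of key events. A sketch (Key, move, and positions are my own names):

```haskell
data Key = LeftKey | RightKey | UpKey | DownKey

-- One event moves the sprite by two pixels in the given direction.
move :: (Int, Int) -> Key -> (Int, Int)
move (x, y) LeftKey  = (x - 2, y)
move (x, y) RightKey = (x + 2, y)
move (x, y) UpKey    = (x, y - 2)
move (x, y) DownKey  = (x, y + 2)

-- The position over time is a running fold over the event stream.
positions :: [Key] -> [(Int, Int)]
positions = scanl move (0, 0)
```

No callbacks or explicit loops appear; the event stream itself drives the state.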

I like that this doesn't require callbacks, events, or even loops, and threads are obvious. It's easier to tell what's going on, and not just because of the Python-like syntax. It's like when language developers realized that there were only a few things people were using labels and gotos for: conditional branches and loops. So they built if-then-else, while, and for into languages; labels and gotos became deprecated; and both the compilers and the people could tell what was going on. Most of what we use today comes from that process.

Going back to threads, the nice thing about this is that threading becomes far more flexible. The compiler is free to do what it wants because we've come closer to saying what we want, not how we want it done. So the compiler can take advantage of multi-core and distributed processors, yet still compensate for platforms without good threading support.

There's one last thing I'd like to mention. And that is my view on templates. This was kind of a conceptual egg that began developing as I started programming (about 2 years ago, actually), then later began to crack open. Basically it was the principle of abstraction, but it stretched farther than classes and objects.

It had to do with how I perceived a function. For example:

int add (int a, int b)
{
    return a + b;
}

Okay, add returned an int, but what was it? It kind of felt like an int waiting to happen. Like a puzzle without a few pieces. There were limited possibilities, and only certain pieces fit, but when you were done you had a finished product that you could then use somewhere else. This is, like I said, the principle of abstraction. Here are some examples of what I think are abstraction + missing pieces -> concrete relationships:

  • function + arguments -> value
  • abstract class + methods -> class
  • class + instance values -> object
  • template + arguments -> function or class
  • program + input + state -> output
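The first relationship in that list, function + arguments -> value, is literal in Haskell: supplying only some of the missing pieces yields a smaller abstraction rather than an error. For example:

```haskell
add :: Int -> Int -> Int
add a b = a + b

-- "function + one argument -> a smaller function":
addFive :: Int -> Int
addFive = add 5
```

addFive is the puzzle with one piece filled in; it can be passed around and completed later.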

They're all closely related. It seems this could be taken advantage of. But how? Again, that's why this is a question. But lazy evaluation is interesting here, since you can pass something with its pieces still missing to something else. To the compiler it's mostly a matter of dereferencing names down to the impurities. Like my example from above:

(x ^ 2) + 3x + 5
(4 ^ 2) + 3x + 5 = 16 + 3x + 5 = 21 + 3x = 3(7 + x)

The more pieces you give the compiler, the more it can fill in, reducing the program to its essential core. The add function above would be resolved at compile time, automatically, since it relies on no outside resources. Even lots of classes and objects could be resolved, and huge portions of programs, depending on how smart the compiler is.

That's all for now. If you've seen examples of these things already done, I'd like to see. And if you have any ideas, innovation, resources, or feedback, I'd appreciate that too.


Solution

You'd definitely want to take a look at the Haskell programming language.

Haskell is extremely declarative, lazy evaluation is built in, and functional reactive programming libraries even exist. But most notably, Haskell is purely functional, i.e. everything, really everything, is pure.

So the question is: how does Haskell deal with the necessary impurities that arise through any IO?

The answer fits quite well with the thoughts you presented. Haskell uses a mathematical construct called a monad, which basically represents a computation producing some value, along with a function bind (>>= as an infix operator) that sequences such computations.

So let's take an IO example: read a line and output the name. Even IO is pure, so you cannot simply run something. Instead, you build up a larger IO computation:

do
    putStr "Enter your name: "
    name <- getLine
    putStrLn ("Hello " ++ name)

Looks quite imperative, but under the hood it's just syntax for:

(putStr "Enter your name: ") >>
(getLine >>= \name ->
 putStrLn ("Hello " ++ name))

Now you can define this bind/>>= for arbitrary kinds of computations in any way you like. So in fact everything you talked about can be implemented this way, even FRP.
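For instance, Maybe's >>= sequences computations that may fail, short-circuiting on the first Nothing; a small sketch (safeDiv and halveQuotient are illustrative names):

```haskell
-- A computation that can fail: division by zero yields Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- >>= threads the result onward; any Nothing aborts the chain.
halveQuotient :: Int -> Int -> Maybe Int
halveQuotient a b = safeDiv a b >>= \q -> safeDiv q 2
```

The same do-notation shown above works here too, with failure handling woven in by bind rather than written by hand.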

Just try searching for monads or Haskell here on Stack Overflow; there have been many questions on this topic. And after all, everything is still type-checked, so correctness can be enforced by the compiler.

Licensed under: CC-BY-SA with attribution