Question

I recently ran across an idea put forth by Jaron Lanier called "phenotropic programming."

The idea is for computer programs to use 'surface' interfaces instead of single-point interfaces, relying on statistics to winnow out the minor errors that would typically cause a "classical" program to crash catastrophically.

The two-line description is here:

According to Jaron, the 'real difference between the current idea of software, which is protocol adherence, and the idea [he is] discussing, pattern recognition, has to do with the kinds of errors we're creating' and if 'we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become.'

The slightly longer explanation is here. And the even longer explanation is here.

So, the question: looking past the obvious robot-overlord connotations that people tend to pick out, how would one actually design and write a "phenotropic" program?


Solution

Lanier has invented a 50-cent word in an attempt to cast a net around a specific set of ideas describing a computational model for creating computer programs with certain identifiable characteristics.

The word means:

A mechanism for component interaction that uses pattern recognition or artificial cognition in place of function invocation or message passing.

The idea comes largely from biology. Your eye interfaces with the world, not via a function like See(byte[] coneData), but through a surface called the retina. It's not a trivial distinction; a computer must scan all of the bytes in coneData one by one, whereas your brain processes all of those inputs simultaneously.


Lanier claims that the latter interface is more fault tolerant, which it is: a single flipped bit in coneData can break the function-call version entirely, while a retina-style surface degrades gracefully. He claims that it enables pattern matching and a host of other capabilities that are normally difficult for computers, which it does.
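As a rough illustration of that difference (the function names and the toy "cone data" below are invented for this example, not anything from Lanier), a point interface demands exact protocol adherence, while a surface interface scores the whole input and tolerates a few bad values:

    # point_vs_surface.py -- hypothetical sketch, not Lanier's actual proposal
    def see_protocol(cone_data: bytes) -> str:
        """'Point' interface: one bad byte breaks the whole call."""
        if len(cone_data) != 4 or any(b > 127 for b in cone_data):
            raise ValueError("protocol violation")  # catastrophic failure
        return "bright" if sum(cone_data) > 200 else "dark"

    def see_surface(cone_data: bytes) -> str:
        """'Surface' interface: score the whole input, ignore a few bad cells."""
        valid = [b for b in cone_data if b <= 127]  # winnow out the noise
        if not valid:
            return "unknown"                        # degrade, don't crash
        return "bright" if sum(valid) / len(valid) > 50 else "dark"

    noisy = bytes([60, 70, 255, 65])                # one corrupted "cone"
    print(see_surface(noisy))                       # -> "bright"
    # see_protocol(noisy) would raise ValueError instead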

The quintessential "phenotropic" mechanism in a computer system would be the Artificial Neural Network (ANN). It takes a "surface" as input, rather than a defined Interface. There are other techniques for achieving some measure of pattern recognition, but the neural network is the one most closely aligned with biology. Making an ANN is easy; getting it to perform the task that you want it to perform reliably is difficult, for a number of reasons:

  1. What do the input and output "surfaces" look like? Are they stable, or do they vary in size over time?
  2. How do you get the network structure right?
  3. How do you train the network?
  4. How do you get adequate performance characteristics?
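To make the "surface as input" point concrete, here is a minimal sketch of a tiny feed-forward network in Python; the surface size, the toy task, and the training loop are all invented for illustration and are not meant as a serious implementation:

    # tiny_ann.py -- illustrative sketch of a network that reads a whole "surface"
    import math
    import random

    random.seed(0)
    SURFACE = 8                                     # size of the input surface

    # one hidden layer of 4 units and one output unit, randomly initialized
    w_hidden = [[random.uniform(-1, 1) for _ in range(SURFACE)] for _ in range(4)]
    w_out = [random.uniform(-1, 1) for _ in range(4)]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(surface):
        hidden = [sigmoid(sum(w * x for w, x in zip(ws, surface))) for ws in w_hidden]
        output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
        return output, hidden

    # toy task: is the left half of the surface brighter than the right half?
    def label(surface):
        return 1.0 if sum(surface[:4]) > sum(surface[4:]) else 0.0

    for _ in range(5000):                           # crude backpropagation loop
        s = [random.random() for _ in range(SURFACE)]
        y, hidden = forward(s)
        grad_out = (label(s) - y) * y * (1 - y)
        grad_hidden = [grad_out * w_out[j] * h * (1 - h) for j, h in enumerate(hidden)]
        for j, h in enumerate(hidden):
            w_out[j] += 0.5 * grad_out * h
        for j in range(4):
            for i in range(SURFACE):
                w_hidden[j][i] += 0.5 * grad_hidden[j] * s[i]

    # the network answers from the whole surface at once, not one value at a time
    test = [0.9, 0.8, 0.9, 0.7, 0.1, 0.2, 0.1, 0.2]
    print(round(forward(test)[0], 2))               # should drift toward 1.0

The point is not the arithmetic; it is that the input is the entire array at once, and a noisy or slightly wrong cell nudges the output rather than crashing the call.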

If you are willing to part with biology, you can dispense with the biological model (which attempts to simulate the operation of actual biological neurons) and build a network that is more closely allied with the actual "neurons" of a digital computer system (logic gates). These networks are called Adaptive Logic Networks (ALN). The way they work is by creating a series of linear functions that approximate a curve. The process looks something like this:

[Figure: a curve approximated by a series of straight-line segments]

... where the X axis represents some input to the ALN, and the Y axis represents some output. Now imagine the number of linear functions expanding as needed to improve the accuracy, and imagine that process occurring across n arbitrary dimensions, implemented entirely with AND and OR logic gates, and you have some sense of what an ALN looks like.
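As a rough sketch of that process (the curve and the three lines below are chosen by hand for illustration; a real ALN learns its pieces from data), the "OR" nodes behave like a maximum over linear pieces and the "AND" nodes like a minimum:

    # aln_sketch.py -- hand-built toy, not a trained Adaptive Logic Network
    def aln_or(x, lines):
        """'OR' node: take the maximum of its child linear pieces."""
        return max(f(x) for f in lines)

    # tangent lines to y = x*x at x = 0.5, 1.5, 2.5, picked by hand for the sketch
    lines = [lambda x: 1.0 * x - 0.25,
             lambda x: 3.0 * x - 2.25,
             lambda x: 5.0 * x - 6.25]

    for x in (0.5, 1.0, 2.0, 3.0):
        print(x, round(aln_or(x, lines), 2), x * x)  # approximation vs true curve

    # a concave stretch of the curve would use an 'AND' (minimum) node instead,
    # and a real ALN nests AND/OR nodes and adjusts the lines during training

Adding more lines where the approximation error is largest is the "expanding as needed" step described above.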

ALNs have certain, very interesting characteristics:

  1. They are fairly easily trainable.
  2. They are very predictable, i.e. slight changes in input do not produce wild swings in output.
  3. They are lightning fast, because they are built in the shape of a logic tree and operate much like a binary search.
  4. Their internal architecture evolves naturally as a result of the training set.

So a phenotropic program would look something like this: it would have a "surface" for input, a predictable architecture and behavior, and a tolerance for noisy inputs.

Further Reading
An Introduction to Adaptive Logic Networks With an Application to Audit Risk Assessment
"Object Oriented" vs "Message Oriented," by Alan Kay

OTHER TIPS

I think we are at the very beginning of one of the steps it will take to get there, and that is gathering lots of data in formats that can be analyzed. The Internet, Google searches, Fitbit ("Every step you take, every move you make, I'll be watching you."), FourSquare, smartphone geolocation, Facebook posts, and SO question data are all being gathered. We're nowhere near the amount of sensory data the average human compiles over a lifetime, but we're getting close.

Start categorizing millions of pictures of birds, and with feedback from people telling you "that's not a bird," you can start to create an algorithm. From there a fuzzier impression can be created (I would call it a model, but that's too exact for what we're trying to code):

class Birdish
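Fleshed out a little, as a hedged sketch (the features, weights, and feedback rule below are invented purely for illustration), that fuzzier impression might look like:

    # birdish.py -- a "fuzzy impression" sketch, not a real classifier
    class Birdish:
        def __init__(self):
            # feature weights, nudged by feedback rather than designed up front
            self.weights = {"has_feathers": 0.6, "has_beak": 0.3, "flies": 0.1}

        def score(self, features):
            """Return a rough bird-ishness score; higher means more bird-like."""
            return sum(w for name, w in self.weights.items() if features.get(name))

        def feedback(self, features, is_bird, rate=0.05):
            """'That's not a bird' (or 'yes it is') nudges the weights slightly."""
            direction = 1 if is_bird else -1
            for name in features:
                if name in self.weights:
                    self.weights[name] = max(0.0, self.weights[name] + direction * rate)

    b = Birdish()
    print(b.score({"has_feathers": True, "has_beak": True}))  # ~0.9, quite bird-ish
    b.feedback({"flies": True}, is_bird=False)                # bats fly too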

How does a pet dog know so much about its owner? Because it watches her a lot. The dog has listened to cars pulling into the driveway and correlated that with the owner opening the front door so often that it appears the dog can recognize her car by its sound. We could do this too, but we see no reason to attend to it. And that's what's wrong with current software: it doesn't pay attention to what the user is doing. It just waits for the user to do what IT expects the user to do.

Something as simple as setting an alarm clock could be done with a little observation and analysis of my current habits. We gave up on setting VCR timers before the technology was replaced by digital; would that have happened as fast if we could have interfaced the TV Guide with the VCR? I've watched the same TV show four weeks in a row at the same time, but in the fifth week I didn't even turn the TV on. Obviously I want it recorded. Can't you tell I'm staying late at work writing this post and that, with my typical commute, I won't make it home in time? You've got the data; do the math.
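The "do the math" part can be surprisingly small. Here is a hedged sketch (the viewing log below is made up) of spotting a weekly habit from timestamps:

    # habit_sketch.py -- invented data, just to show "you've got the data, do the math"
    import calendar
    from collections import Counter
    from datetime import datetime

    # four weeks of viewing history: the same show, Thursdays at 20:00 (made up)
    viewing_log = [
        datetime(2024, 5, 2, 20, 0),
        datetime(2024, 5, 9, 20, 0),
        datetime(2024, 5, 16, 20, 0),
        datetime(2024, 5, 23, 20, 0),
    ]

    slots = Counter((t.weekday(), t.hour) for t in viewing_log)
    (weekday, hour), count = slots.most_common(1)[0]
    if count >= 3:                    # seen in this slot most weeks: call it a habit
        print(f"Record every {calendar.day_name[weekday]} at {hour}:00")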

Gather more and more data, and then you can come up with better ways to analyze, recognize, and convert it. With our phone cameras, and soon eyeglass cameras, we're going beyond what can be input from a keyboard alone. It's just the beginning.

Here is a slide set for defining a Probabilistic Programming Language in Scala.

It is the first decent implementation example of some of the core components of the system that Jaron Lanier proposes.

A thought I had recently:

Suppose you used high-level ideas like Haskell's Maybe Monad to wrap remote-procedure calls to other systems. You send a request to the server, but nothing comes back (the server is broken), or a Promise comes back (the server is busy), and your program continues working with those None or Promised values. That's kind of like the fault tolerance Lanier is looking for.

Maybe there are ways of encapsulating other eventualities. For example, remote calls that come back with an approximation which is increasingly refined over time by some kind of background negotiation; i.e. what comes back is something like a Promise, but not just "keep holding on and working with this, and a proper value will turn up shortly" so much as "keep holding on and working with this, and a better approximation will turn up shortly" (and again, and again). This would be able to hide a lot of faults from the programmer, just as networking protocols hide a lot of low-level networking failure from the programmer.
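A hedged sketch of both ideas in Python terms (the remote call, the fallback value, and the refinement steps are all invented for the example):

    # tolerant_calls.py -- illustrative only; the server and the values are made up
    from typing import Iterator, Optional

    def fetch(url: str) -> float:
        raise ConnectionError("server is down")  # stand-in for a real network request

    def call_remote(url: str) -> Optional[float]:
        """Maybe-style wrapper: a failure becomes None instead of an exception."""
        try:
            return fetch(url)
        except Exception:
            return None                          # server broken: carry on with "nothing"

    def refining_estimate() -> Iterator[float]:
        """Promise-like value that keeps improving: each yield is a better guess."""
        estimate = 0.0
        for correction in (3.0, 0.1, 0.04, 0.001):   # made-up refinement steps
            estimate += correction
            yield estimate                       # caller can work with any intermediate value

    price = call_remote("https://example.invalid/quote")
    if price is None:
        price = 3.0                              # fall back to a rough default, don't crash

    for approx in refining_estimate():
        print(f"working with current best guess: {approx}")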

Licensed under: CC-BY-SA with attribution