Question

Today, I asked a psychologist how you can design an IQ test to assess someone more intelligent than the designer. He answered: the same way you can design a chess program that its designer cannot beat!

As a beginner, I'm not sure whether this question can be answered here, but what interests me is whether we can write a program that evolves and learns by itself, such that a human (even the programmer) cannot predict it. I hope the answer is no; otherwise there may be viruses or worms in the future with unpredictable behaviour, controlling human society!


Solution

Artificial intelligence agents behave within some programmed space (a chess-playing agent is inside the chess-playing space).

Agents cannot leave the programmed space. A chess-playing agent is unlikely to take over the world any time soon. It is predictable in this sense.

The behaviour within this space is somewhat predictable: it is, after all, based on well-defined mathematical equations. Those equations are usually quite complex, so prediction isn't easy, but it is possible. However, there is usually some randomness involved, which is obviously not predictable.
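To make that concrete, here is a minimal, hypothetical sketch in Python (not a real chess engine, and not from the original answer): the agent's behaviour is a well-defined function of the state it observes, restricted to a fixed action space, with an explicit bit of randomness mixed in. Everything except the `random` calls is predictable in principle.

```python
import random

# Hypothetical sketch: a fixed action space plus a deterministic evaluation,
# with an explicitly random exploration step mixed in.

ACTIONS = ["e2e4", "d2d4", "g1f3", "c2c4"]  # the fixed "programmed space" of actions


def score(state: str, action: str) -> float:
    # Toy deterministic evaluation: the same inputs always give the same score,
    # so this part is predictable in principle (if complex in a real engine).
    return float(sum(ord(c) for c in state + action) % 100)


def choose(state: str, epsilon: float = 0.1) -> str:
    # Mostly predictable policy, with an explicit random exploration step.
    if random.random() < epsilon:
        return random.choice(ACTIONS)                        # the unpredictable part
    return max(ACTIONS, key=lambda a: score(state, a))       # the predictable part


print(choose("start-position"))
```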

Note that "intelligence" is not the same as predictability. Researchers have been trying to make AI truly intelligent for a long time, with (arguably) slow progress.

EDIT:

Note that some agents can have the entire world as their programmed space, which doesn't enforce many boundaries.

By 'programmed space' I don't mean what was programmed into the agent as much as what the agent is programmed to observe or do. If an agent can only see a chess board and only make chess moves, how will it ever become more than a chess-playing agent?
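As a rough illustration of this notion of "programmed space" (my own sketch, with hypothetical names): the agent's interface only lets it observe a chess position and emit a chess move, so no amount of internal cleverness lets it act outside that space.

```python
# Hypothetical sketch: the agent's only input channel is a chess position and
# its only output channel is a chess move, so its "programmed space" is the
# chess-playing space no matter what happens inside it.

class ChessOnlyAgent:
    def __init__(self) -> None:
        self._last_position = None

    def observe(self, board_fen: str) -> None:
        # The only thing the agent can perceive: a board position (FEN string).
        self._last_position = board_fen

    def act(self) -> str:
        # The only thing the agent can do: return a move in coordinate notation.
        return "e2e4"  # placeholder policy


agent = ChessOnlyAgent()
agent.observe("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
print(agent.act())
```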

True evolution may allow agents to extend their programmed space, but I'll have to think about whether this is actually possible.

OTHER TIPS

It is possible. Chess programs actually do beat their designers by wide margins. They beat the world chess champions by smaller margins, but that is only a matter of time.

There is an example of a system that "learns how to learn even faster": evolution on Earth has optimized itself. Genes and behaviour are optimized to facilitate a high rate of marginal improvement. Reproduction almost never fails, and genes have just the right amount of mutation due to natural causes (radiation, chemical processes, ...). The "tunables" have been set nicely.
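As a hedged sketch of that "learns how to learn faster" idea (my own toy example, with made-up numbers): a small evolutionary loop in which each individual carries its own mutation rate, so the "tunable" is itself subject to selection, as in self-adaptive evolution strategies.

```python
import random

# Toy self-adaptive evolutionary loop: each individual is (value, mutation_rate),
# and the mutation rate mutates too, so good rates spread along with good values.

TARGET = 42.0


def fitness(x: float) -> float:
    return -abs(x - TARGET)  # closer to the target is better


def evolve(generations: int = 200, pop_size: int = 30):
    pop = [(random.uniform(-100, 100), random.uniform(0.01, 5.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half by fitness of the value.
        pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for value, rate in parents:
            # Mutate the mutation rate itself, then use it to mutate the value.
            new_rate = max(1e-3, rate * random.lognormvariate(0, 0.2))
            new_value = value + random.gauss(0, new_rate)
            children.append((new_value, new_rate))
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind[0]))


best_value, best_rate = evolve()
print(f"best value approx {best_value:.2f}, evolved mutation rate approx {best_rate:.3f}")
```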

I think the text of your question describes two situations:

  1. The first paragraph covers an IQ test and a chess game. Both have a limited number of options. Even though there are a lot of possibilities, the number is finite, and many can be ruled out from the start because they are too weak to even consider. That is why programs like these exist. Note, though, that there are still A LOT of possibilities, which is why such programs aren't perfect yet.

  2. The second paragraph covers a self-learning program or robot. In the real world there is an infinite number of possibilities, of things that can happen. You might try to code a program, but there is no way (in the near future) you can take account of all the things life has to offer.

I do have to comment on Dukeling's comment below. If you manage to code a program that can learn to react to pretty much everything life has to offer (including the negative parts), an AI like that will probably evaluate its own 'space' and will be able to look, and even step, outside of it.

Long story short: it will happen. What the result will be is unknown: either robots will be programmed perfectly, or they will be shut down, or the human race will go extinct. Every scenario is possible because we will advance in technology at a pace you can't even imagine right now. Have your doubts? Go tell someone from 50 years ago that a machine beats the best chess player in the world.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow