Question

I've been learning Lisp to expand my horizons because I have heard that it is used in AI programming. After doing some exploring, I have yet to find AI examples or anything in the language that would make it more inclined towards it.

Was Lisp used in the past because it was available, or is there something that I'm just missing?


Solution

Lisp WAS used in AI until the end of the 1980s. In the 80s, though, Common Lisp was oversold to the business world as the "AI language"; the backlash forced most AI programmers over to C++ for a few years. These days, prototypes are usually written in a younger dynamic language (Perl, Python, Ruby, etc.), and implementations of successful research are usually in C or C++ (sometimes Java).

If you're curious about the 70's...well, I wasn't there. But I think Lisp was successful in AI research for three reasons (in order of importance):

  1. Lisp is an excellent prototyping tool. It was the best for a very long time. Lisp is still great at tackling a problem you don't know how to solve yet. That description characterises AI perfectly.
  2. Lisp supports symbolic programming well, and old AI was largely symbolic; Lisp was also unique in this regard for a long time (see the sketch after this list).
  3. Lisp is very powerful. The code/data distinction is weaker, so it feels more extensible than other languages because your own functions and macros look just like the built-in ones.
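
As a small, hedged illustration of point 2: symbolic computation in Lisp means operating directly on expressions written as nested lists. The function name deriv and the expression format below are my own and not taken from any particular system.

    ;; Toy symbolic differentiation: expressions are ordinary Lisp lists,
    ;; e.g. (+ (* 3 x) (* x x)).  Names are illustrative only.
    (defun deriv (expr var)
      "Differentiate EXPR with respect to the symbol VAR."
      (cond ((numberp expr) 0)                 ; d/dx c = 0
            ((eq expr var) 1)                  ; d/dx x = 1
            ((symbolp expr) 0)                 ; other symbols are constants
            ((eq (first expr) '+)              ; sum rule
             (list '+ (deriv (second expr) var)
                      (deriv (third expr) var)))
            ((eq (first expr) '*)              ; product rule
             (list '+
                   (list '* (second expr) (deriv (third expr) var))
                   (list '* (deriv (second expr) var) (third expr))))
            (t (error "Unknown expression: ~S" expr))))

    ;; (deriv '(+ (* 3 x) (* x x)) 'x)
    ;; => (+ (+ (* 3 1) (* 0 X)) (+ (* X 1) (* 1 X)))

The input and the result are both ordinary list data, which is exactly what point 3 then lets you exploit.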

I do not have a copy of Peter Norvig's old AI book, Paradigms of Artificial Intelligence Programming, but it is supposed to be a good way to learn to program AI algorithms in Lisp.

Disclaimer: I am a grad student in computational linguistics. I know the subfield of natural language processing a lot better than the other fields. Maybe Lisp is used more in other subfields.

OTHER TIPS

Lisp is used for AI because it supports the implementation of software that computes with symbols very well. Symbols, symbolic expressions, and computing with them are at the core of Lisp.

Typical AI areas for computing with symbols were/are: computer algebra, theorem proving, planning systems, diagnosis, rewrite systems, knowledge representation and reasoning, logic languages, machine translation, expert systems, and more.

It is then no surprise that many famous AI applications in these domains were written in Lisp:

  • Macsyma as the first large computer algebra system.
  • ACL2 as a widely used theorem prover, for example used by AMD.
  • DART, the logistics planner used during the first Gulf War by the US military. This single Lisp application is said to have paid back all US investment in AI research up to that time.
  • SPIKE, the planning and scheduling application for the Hubble Space Telescope. Also used by several other large telescopes.
  • CYC, one of the largest software systems written. Representation and reasoning in the domain of human common sense knowledge.
  • METAL, one of the first commercially used natural language translation systems.
  • American Express' Authorizer's Assistant, which checks credit card transactions.

There are thousands of applications in these areas written in Lisp. What they have in common is the need for special capabilities in symbolic processing. Typically one implements, on top of Lisp, special languages for these domains with their own interpreters or compilers. Lisp allows one to create representations for symbolic data and programs and to implement all kinds of machinery to manipulate these expressions (math formulas, logic formulas, plans, ...).
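
To make that last sentence concrete, here is a minimal sketch of such machinery, assuming formulas are represented as plain lists; the functions simplify and rebuild are invented for illustration and not taken from any of the systems above.

    ;; A tiny rewrite step over formulas represented as plain lists.
    ;; Real systems use far richer rule engines; this is only a sketch.
    (defun rebuild (op args identity)
      "Reassemble OP applied to ARGS, collapsing trivial cases."
      (cond ((null args) identity)
            ((null (rest args)) (first args))
            (t (cons op args))))

    (defun simplify (expr)
      "Recursively apply a few algebraic rewrite rules to EXPR."
      (if (atom expr)
          expr
          (let ((op   (first expr))
                (args (mapcar #'simplify (rest expr))))   ; rewrite bottom-up
            (cond ((and (eq op '*) (member 0 args)) 0)    ; x * 0 => 0
                  ((and (eq op '+) (member 0 args))       ; drop zero terms
                   (rebuild '+ (remove 0 args) 0))
                  ((and (eq op '*) (member 1 args))       ; drop unit factors
                   (rebuild '* (remove 1 args) 1))
                  (t (cons op args))))))

    ;; (simplify '(+ (* 1 x) (* y 0)))   ; => X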

(Note that lots of other general purpose programming languages are used in AI, too. I have tried to answer why especially Lisp is used in AI.)

One reason is that it allows you to extend the language with constructs specific for your domain, making it, effectively, a domain specific language. This technique is incredibly powerful as it allows you to reason about the problem you are solving, rather than about shuffling bits.
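
A hedged sketch of what that extension can look like: a macro introduces a hypothetical defrule construct (the name, the *rules* variable, and the rule shape are all invented here), so that a rule reads like a statement about the domain rather than like list shuffling.

    ;; Hypothetical rule-definition construct for a toy rule-based system.
    (defvar *rules* '()
      "All rules defined so far, as (name test-fn action-fn) triples.")

    (defmacro defrule (name (&rest vars) condition => action)
      "Define a rule NAME over VARS: when CONDITION holds, perform ACTION."
      (declare (ignore =>))                    ; the => token is just syntax
      `(push (list ',name
                   (lambda ,vars ,condition)
                   (lambda ,vars ,action))
             *rules*))

    ;; A domain expert now writes something close to the domain language:
    (defrule flag-large-transaction (amount country)
      (and (> amount 10000) (string/= country "home"))
      =>
      (list :flag :manual-review amount country))

To the person writing rules, defrule is indistinguishable from a built-in special form, which is what makes the language feel domain-specific.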

My guess has always been that it comes down to the fact that Lisp code is itself Lisp data: the language doesn't differentiate between code and data. Everything, including function definitions and function calls, can be treated as lists and modified like any other piece of data.

So self-inspecting, self-modifying code could be written easily.
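
A sketch of what that looks like at the REPL (the names *defn*, greet, and greet-loudly are invented for illustration): a function definition is just a list that can be inspected, rewritten, and then evaluated again.

    ;; A function definition is an ordinary list until we choose to evaluate it.
    (defparameter *defn*
      '(defun greet (name) (format t "Hello, ~A!~%" name)))

    (second *defn*)               ; => GREET, just the second element of a list
    (third *defn*)                ; => (NAME), the argument list

    ;; Rewrite the code like any other data: rename the function.
    (defparameter *new-defn* (list* 'defun 'greet-loudly (cddr *defn*)))

    ;; Turn the data back into running code.
    (eval *defn*)
    (eval *new-defn*)
    ;; (greet "world")            ; prints "Hello, world!"
    ;; (greet-loudly "world")     ; same behaviour under the new name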

One possible answer is that AI is a collection of very hard problems, and Lisp is a good language for solving hard problems, not just AI.

As to why that is: macros, generic functions, and rich introspection allow for concise code and easy introduction of domain abstractions — it's a language that you can make more powerful. For a lot of problems that's unnecessary, and it comes with its own costs, but for other problems that power is needed to make any headway.
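
For example (a sketch with invented class and function names, not any real planning API), CLOS generic functions let you state a domain operation once and add behaviour per representation later:

    ;; A generic function dispatching on the representation of a "plan step".
    (defclass move-step ()
      ((from :initarg :from :reader step-from)
       (to   :initarg :to   :reader step-to)))

    (defclass wait-step ()
      ((seconds :initarg :seconds :reader step-seconds)))

    (defgeneric step-cost (step)
      (:documentation "Estimated cost of executing one plan step."))

    (defmethod step-cost ((step move-step))
      (+ 5 (abs (- (step-to step) (step-from step)))))   ; toy cost model

    (defmethod step-cost ((step wait-step))
      (step-seconds step))

    ;; New step types can be added later without touching existing code:
    ;; (reduce #'+ plan :key #'step-cost) works for any mix of steps.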

I think it's wrong to think about this in terms of AI only. Things like the AI winter and the commercial effects on Common Lisp are distracting if you're asking why it was used for AI, not why it's not often used now...

Anyway, I think it's because most of the AI code was essentially research code. Lisp is a great language for exploratory programming, for implementing difficult algorithms, for self-modifying and often modified code. In other words, for research code.

I use Lisp today for some of my research code (mathematics, signal processing) because it's more flexible and powerful than most languages, while still generating more efficient code than most. I can typically get performance within a factor of two of, say, C++ speed, but I can implement things much faster and deal with complexity that would take far more time than I have if I used C++, Java, or C#.
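
The performance side of that claim usually rests on optional type and optimization declarations; a minimal sketch, with an invented numeric kernel:

    ;; The same dynamic language can be nudged toward C-like speed with
    ;; declarations; this dot-product kernel is only an illustration.
    (defun dot (a b)
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3) (safety 1)))
      (let ((sum 0d0))
        (declare (type double-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))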

That's wandering off topic, though. I think AI code was primarily written in Common Lisp for a while because it is a powerful approach to research code. It still is; but as "AI" algorithms became better understood and explored, parts of them became much easier to teach and use, so they showed up in flavor-of-the-year languages in undergrad courses. From there, it becomes an issue of what people already know, what libraries are available, and what works well for large groups.

I'd guess that a big reason was the flexibility of lists as a basic data structure.

At the time, being able to turn lists into all kinds of composite objects, and into newer things such as message passing and polymorphism, made it the language of choice; not specifically for AI, but for big, complex tasks, especially when people were experimenting with concepts.
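
A sketch of that style, with invented names: plain lists serve as composite objects, and a small dispatching function stands in for message passing.

    ;; Composite "objects" built from plain lists (property lists here),
    ;; with a dispatching function playing the role of message passing.
    (defun make-robot (name x y)
      (list :name name :x x :y y))              ; the object is just a list

    (defun send-message (robot message &rest args)
      "Crude message dispatch over list-based objects."
      (case message
        (:describe (format nil "~A at (~D, ~D)"
                           (getf robot :name) (getf robot :x) (getf robot :y)))
        (:move     (make-robot (getf robot :name) (first args) (second args)))
        (t         (error "Unknown message: ~S" message))))

    ;; (send-message (make-robot "R2" 0 0) :describe)   ; => "R2 at (0, 0)"
    ;; (send-message (make-robot "R2" 0 0) :move 3 4)   ; => (:NAME "R2" :X 3 :Y 4)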

I think you're right: Lisp was a handy tool for hacking things up. This is because it didn't distinguish much between program and data. This allowed hackers to manipulate functions very easily, just like data.

But Lisp is quite difficult for humans to read, with its parentheses and its non-distinction between data and program. Today, I wouldn't use Lisp for any production AI code (or perhaps even prototyping) but would much prefer Python for scripting.

Another thing to consider is the existing libraries and tools in or related to the language. I am not in a position to compare Lisp libraries with Python libraries, but I guess libraries and open source matter a lot more now than they did before.

This answer was inspired by the following comparison between lisp and python: http://amitp.blogspot.com/2007/04/lisp-vs-python-syntax.html

I remember hearing that, being a functional language, Lisp was a very good choice for implementing recursive algorithms. Being able to walk down a tree and work your way back up is essential for the decision-making process (traversal) and the end result (the leaf node).

This was told to me during an AI course at university where we studied Lisp.
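
A sketch of the pattern presumably meant, using a toy decision tree invented for illustration: the recursion walks down to a leaf and the decision propagates back up.

    ;; A decision tree as nested lists: (question yes-subtree no-subtree),
    ;; with leaves as plain symbols.  Purely illustrative.
    (defparameter *tree*
      '("Is it raining?"
        ("Do you have an umbrella?" walk take-the-bus)
        walk))

    (defun decide (tree answer-fn)
      "Walk TREE recursively, asking ANSWER-FN each question, until a leaf."
      (if (atom tree)
          tree                                     ; leaf: the decision
          (destructuring-bind (question yes no) tree
            (decide (if (funcall answer-fn question) yes no)
                    answer-fn))))

    ;; (decide *tree* (lambda (q) (y-or-n-p "~A " q)))   ; interactive use
    ;; (decide *tree* (constantly t))                    ; => WALK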

A more cynical answer might be "because it lost a political AI war between Japan and the USA in the 1980s". There is a fun blog post that speculates about the impact of the demise of the Fifth Generation Computer Systems project on Prolog.
