Question

I want to build a bot that asks someone a few simple questions and branches based on the answer. I realize parsing meaning from the human responses will be challenging, but how do you set up the program to deal with the "state" of the conversation?

It will be a one-to-one conversation between a human and the bot.

Solution

You probably want to look into Markov chains as the basis for the bot's AI. I wrote something a long time ago (I'm not at all proud of the code, and it needs some mods to run on Python > 1.5) that may be a useful starting place for you: http://sourceforge.net/projects/benzo/

EDIT: Here's a minimal example in Python of a Markov Chain that accepts input from stdin and outputs text based on the probabilities of words succeeding one another in the input. It's optimized for IRC-style chat logs, but running any decent-sized text through it should demonstrate the concepts:

import random, sys

NONWORD = "\n"            # sentinel marking the start/end of a chain
STARTKEY = NONWORD, NONWORD
MAXGEN = 1000             # cap on generated words in case the chain never hits NONWORD

class MarkovChainer(object):
    def __init__(self):
        self.state = dict()

    def input(self, input):
        # For every pair of consecutive words, record which words follow it.
        word1, word2 = STARTKEY
        for word3 in input.split():
            self.state.setdefault((word1, word2), list()).append(word3)
            word1, word2 = word2, word3
        self.state.setdefault((word1, word2), list()).append(NONWORD)

    def output(self):
        # Walk the chain from the start key, picking successors at random.
        output = list()
        word1, word2 = STARTKEY
        for i in range(MAXGEN):
            word3 = random.choice(self.state[(word1, word2)])
            if word3 == NONWORD: break
            output.append(word3)
            word1, word2 = word2, word3
        return " ".join(output)

if __name__ == "__main__":
    c = MarkovChainer()
    c.input(sys.stdin.read())
    print(c.output())

It's pretty easy from here to plug in persistence and an IRC library and have the basis of the type of bot you're talking about.
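
For instance, here's a minimal sketch of the persistence part, assuming the MarkovChainer class above: pickle the learned state dictionary between runs so the bot keeps what it has heard (the file name is just an illustration).

import os, pickle

STATE_FILE = "markov_state.pkl"   # hypothetical location for the saved chain

def load_chainer():
    # Rebuild a MarkovChainer from a previous session, if one was saved.
    c = MarkovChainer()
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            c.state = pickle.load(f)
    return c

def save_chainer(c):
    # Persist the learned transition table for the next run.
    with open(STATE_FILE, "wb") as f:
        pickle.dump(c.state, f)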

OTHER TIPS

Folks have mentioned already that statefulness isn't a big component of typical chatbots:

  • a pure Markov implementation may express a very loose sort of state if it is growing its lexicon and table in real time—earlier utterances by the human interlocutor may get regurgitated by chance later in the conversation—but the Markov model doesn't have any inherent mechanism for selecting or producing such responses.

  • a parsing-based bot (e.g. ELIZA) generally attempts to respond to (some of the) semantic content of the most recent input from the user without significant regard for prior exchanges.

That said, you certainly can add some amount of state to a chatbot, regardless of the input-parsing and statement-synthesis model you're using. How to do that depends a lot on what you want to accomplish with your statefulness, and that's not really clear from your question. A couple general ideas, however:

  • Create a keyword stack. As your human offers input, parse out keywords from their statements/questions and throw those keywords onto a stack of some sort. When your chatbot fails to come up with something compelling to respond to in the most recent input—or, perhaps, just at random, to mix things up—go back to your stack, grab a previous keyword, and use that to seed your next synthesis. For bonus points, have the bot explicitly acknowledge that it's going back to a previous subject, e.g. "Wait, HUMAN, earlier you mentioned foo. [Sentence seeded by foo]".

  • Build RPG-like dialogue logic into the bot. As you're parsing human input, toggle flags for specific conversational prompts or content from the user and conditionally alter what the chatbot can talk about, or how it communicates. For example, a chatbot bristling (or scolding, or laughing) at foul language is fairly common; a chatbot that will get het up, and conditionally remain so until apologized to, would be an interesting stateful variation on this. Switch output to ALL CAPS, throw in confrontational rhetoric or demands or sobbing, etc. (A rough sketch of both of these ideas follows this list.)
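
A minimal sketch of both ideas together (the class, word lists, and canned replies here are invented for illustration): keep a stack of keywords from earlier input, pop one when the bot has nothing better to say, and flip an "angry" flag on foul language that stays set until an apology shows up.

import random

STOPWORDS = {"the", "a", "an", "and", "you", "i", "is", "it", "to", "of"}
FOUL = {"darn", "heck"}                   # placeholder word lists
APOLOGIES = {"sorry", "apologies", "apologize"}

class StatefulBot(object):
    def __init__(self):
        self.keyword_stack = []           # topics the human has mentioned
        self.angry = False                # RPG-style conversational flag

    def observe(self, text):
        words = [w.strip(".,!?").lower() for w in text.split()]
        if FOUL & set(words):
            self.angry = True
        if APOLOGIES & set(words):
            self.angry = False
        # Remember content words so we can come back to them later.
        self.keyword_stack.extend(w for w in words if w not in STOPWORDS)

    def respond(self, text):
        self.observe(text)
        if self.keyword_stack and random.random() < 0.3:
            topic = self.keyword_stack.pop()
            reply = "Wait, earlier you mentioned %s. Tell me more about %s." % (topic, topic)
        else:
            reply = "I see. Go on."       # stand-in for real synthesis (e.g. the Markov output above)
        return (reply.upper() + "!") if self.angry else reply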

Can you clarify a little what you want the state to help you accomplish?

Imagine a neural network with parsing capabilities in each node or neuron. Depending on rules and parsing results, neurons fire. If certain neurons fire, you get a good idea about the topic and semantics of the question, and therefore can give a good answer.

Memory is handled by keeping track of the topics talked about in a session, adding them to the firing for the next question, and thereby guiding the selection of possible answers at the end.

Keep your rules and patterns in a knowledge base, but compile them into memory at start time, with a neuron per rule. You can engineer synapses using something like listeners or event functions.
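
A very rough sketch of that architecture using nothing beyond the standard library (the RuleNeuron class and its rules are invented for illustration):

import re

class RuleNeuron(object):
    # One neuron per rule: a pattern, a topic label, and listeners acting as synapses.
    def __init__(self, topic, pattern):
        self.topic = topic
        self.pattern = re.compile(pattern, re.IGNORECASE)
        self.listeners = []               # callbacks fired when this neuron fires

    def feed(self, text, boost=0.0):
        # Fire on a pattern match; the boost carries over from topics seen earlier.
        score = (1.0 if self.pattern.search(text) else 0.0) + boost
        if score >= 1.0:
            for listener in self.listeners:
                listener(self.topic, score)
        return score

# The "knowledge base", compiled into neurons at start time.
neurons = [RuleNeuron("greeting", r"\b(hi|hello|hey)\b"),
           RuleNeuron("weather", r"\b(rain|sunny|weather)\b")]

session_topics = {}                       # topic -> carry-over boost for the next question

def on_fire(topic, score):
    session_topics[topic] = 0.5           # remembered topics nudge later firing

for n in neurons:
    n.listeners.append(on_fire)

def classify(question):
    scores = {n.topic: n.feed(question, session_topics.get(n.topic, 0.0)) for n in neurons}
    return max(scores, key=scores.get)    # best-guess topic for selecting an answer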

I think you can look at the code for Kooky, and IIRC it also uses Markov Chains.

Also check out the Kooky quotes; they were featured on Coding Horror not long ago, and some are hilarious.

I think that to start this project, it would be good to have a database of questions, organized as a tree with one or more questions in every node. These questions should be answerable with "yes" or "no".

When the bot starts asking questions, it can begin with any question from your database that is marked as a start question. The answer determines the path to the next node in the tree.
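
A minimal sketch of such a tree in Python, hard-coded here rather than stored in a real database, with made-up questions:

class QuestionNode(object):
    def __init__(self, question, yes=None, no=None):
        self.question = question
        self.yes = yes       # next node if the user answers "yes"
        self.no = no         # next node if the user answers "no"

# The root node plays the role of a question marked as a start question.
tree = QuestionNode("Do you need help with an order?",
                    yes=QuestionNode("Has the order already shipped?"),
                    no=QuestionNode("Would you like to hear about new products?"))

def run(node):
    while node is not None:
        answer = input(node.question + " (yes/no) ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    print("Okay, handing you over to a human.")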

Edit: Here is a simple one written in Ruby that you can start with: rubyBOT

A naive chatbot program. No parsing, no cleverness, just a training file and output.

It first trains itself on a text and then uses the data from that training to generate responses to the interlocutor's input. The training process creates a dictionary where each key is a word and the value is a list of all the words that directly follow that word anywhere in the training text. If a word features more than once in this list, it is proportionally more likely to be chosen by the bot; there's no need for explicit probability bookkeeping, the repeated entries in the list do the job.

The bot chooses a random word from your input and generates a response by choosing another random word that has been seen to be a successor to its held word. It then repeats the process by finding a successor to that word in turn and carrying on iteratively until it thinks it’s said enough. It reaches that conclusion by stopping at a word that was prior to a punctuation mark in the training text. It then returns to input mode again to let you respond, and so on.

It isn't very realistic, but I hereby challenge anyone to do better in 71 lines of code! This is a great challenge for any budding Pythonist, and I just wish I could open the challenge to a wider audience than the small number of visitors I get to this blog. To code a bot that is always guaranteed to be grammatical must surely be closer to several hundred lines; I simplified hugely by just trying to think of the simplest rule that gives the computer a mere stab at having something to say.

Its responses are rather impressionistic, to say the least! Also, you have to put what you say in single quotes.

I used War and Peace for my "corpus", which took a couple of hours for the training run; use a shorter file if you are impatient…

Here is the trainer:

#lukebot-trainer.py
import pickle

# Read the training text into a flat list of words.
b = open('war&peace.txt')
text = []
for line in b:
    for word in line.split():
        text.append(word)
b.close()

# For every distinct word, collect the list of words that follow it,
# skipping occurrences that end with punctuation (those end a response).
textset = list(set(text))
follow = {}
for l in range(len(textset)):
    working = []
    check = textset[l]
    for w in range(len(text) - 1):
        if check == text[w] and text[w][-1] not in '(),.?!':
            working.append(str(text[w + 1]))
    follow[check] = working

# Save the successor table for the bot to load later.
a = open('lexicon-luke', 'wb')
pickle.dump(follow, a, 2)
a.close()

Here is the bot:

#lukebot.py
import pickle, random

# Load the successor table built by the trainer.
a = open('lexicon-luke', 'rb')
successorlist = pickle.load(a)
a.close()

def nextword(a):
    # Pick a random successor of the given word; fall back to 'the' for unseen words.
    if a in successorlist:
        return random.choice(successorlist[a])
    else:
        return 'the'

speech = ''
while speech != 'quit':
    speech = input('>')
    # Seed the response with a random word from the user's input.
    s = random.choice(speech.split())
    response = ''
    while True:
        neword = nextword(s)
        response += ' ' + neword
        s = neword
        # Stop once we hit a word that ended with punctuation in the training text.
        if neword[-1] in ',?!.':
            break
    print(response)

You tend to get an uncanny feeling when it says something that seems partially to make sense.

I would suggest looking at Bayesian probabilities. Then just monitor the chat room for a period of time to create your probability tree.
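
One way to read that suggestion (the bookkeeping below is my own illustration, not something the answer specifies): tally how often each reply topic follows each keyword you observe in the room, then answer with the conditionally most probable topic.

from collections import defaultdict

# counts[keyword][reply_topic] = times that topic followed that keyword in the logs
counts = defaultdict(lambda: defaultdict(int))

def observe(keyword, reply_topic):
    counts[keyword][reply_topic] += 1

def most_probable_topic(keyword):
    topics = counts.get(keyword)
    if not topics:
        return None
    total = sum(topics.values())
    # P(topic | keyword) = count / total; take the argmax as the bot's best guess.
    return max(topics, key=lambda t: topics[t] / total)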

I'm not sure this is what you're looking for, but there's an old program called ELIZA which could hold a conversation by taking what you said and spitting it back at you after performing some simple textual transformations.

If I remember correctly, many people were convinced that they were "talking" to a real person and had long elaborate conversations with it.
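
The flavor of those transformations is easy to sketch; the two patterns below are made-up examples in the ELIZA spirit, not ELIZA's actual script:

import re

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi want (.+)", re.IGNORECASE), "What would it mean to you to get {0}?"),
]

def eliza_like(text):
    # Apply the first matching transformation; otherwise fall back to a stock prompt.
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

# eliza_like("I am stuck on this bot") -> "Why do you say you are stuck on this bot?"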

If you're just dabbling, I believe Pidgin allows you to script chat-style behavior. Part of the framework probably tracks the state of who sent the message when, and you'd want to keep a log of your bot's internal state for each of the last N messages. Future state decisions could be hardcoded based on inspection of previous states and the content of the most recent few messages. Or you could do something like the Markov chains discussed above and use them both for parsing and generating.
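
A sketch of the bookkeeping half of that idea, independent of any particular chat framework (the state names and rules are placeholders):

from collections import deque

N = 5
history = deque(maxlen=N)      # (sender, message) pairs for the last N messages
bot_states = deque(maxlen=N)   # the bot's internal state after each message

def decide_state(history, previous_state):
    # Hardcoded decisions based on recent messages; the rules are purely illustrative.
    senders = [sender for sender, _ in history]
    if senders.count("bot") >= 3:
        return "listening"     # stop dominating the conversation
    if any("?" in msg for _, msg in history):
        return "answering"
    return previous_state or "idle"

def on_message(sender, message):
    history.append((sender, message))
    previous = bot_states[-1] if bot_states else None
    state = decide_state(history, previous)
    bot_states.append(state)
    return state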

If you do not require a learning bot, using AIML (http://www.aiml.net/) will most likely produce the result you want, at least with respect to the bot parsing input and answering based on it.

You would reuse or create "brains" made of XML (in the AIML format) and parse/run them in a program (a parser). There are parsers written in several different languages to choose from, and as far as I can tell the code is open source in most cases.
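
If you go the Python route, the PyAIML package offers an interpreter for such brains; the snippet below is a rough sketch from memory, so treat the package name, calls, and file name as assumptions:

import aiml  # PyAIML; exact package name and API are assumptions here

kernel = aiml.Kernel()
kernel.learn("my-brain.aiml")   # hypothetical brain file containing categories like:
# <category>
#   <pattern>WHAT IS YOUR NAME</pattern>
#   <template>My name is Bot.</template>
# </category>
print(kernel.respond("What is your name"))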

You can use "ChatterBot", and host it locally using - 'flask-chatterbot-master"

Links:

  1. ChatterBot installation: https://chatterbot.readthedocs.io/en/stable/setup.html
  2. Host locally using flask-chatterbot-master: https://github.com/chamkank/flask-chatterbot
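
A rough usage sketch based on the linked ChatterBot documentation (the bot name is arbitrary, and the exact trainer class may differ between ChatterBot versions):

from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

# Train on the bundled English corpus, then ask for a response.
bot = ChatBot("QuestionBot")
trainer = ChatterBotCorpusTrainer(bot)
trainer.train("chatterbot.corpus.english")

print(bot.get_response("Hello, how are you?"))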

Cheers,

Ratnakar

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow