AI3 Goals

by Javantea
Jan 2, 2013

AltSci will soon have a proper server with a lot of bandwidth. I just bought a $1000 server off Newegg and it's going to be fast. Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz (3.7GHz Turbo) 8 threads, 16 GB ECC RAM, 240GB SSD. I'm going to send it to a datacenter with a lot of bandwidth to do some fun projects.

As I spend less and less time working on my new website AI3, I have to choose the most important features that I want to work on. Completeness is important (the page on Emma Goldman is missing everything after the header), social features are important (users aren't implemented yet), UI is important, and adding more data sources is important to my users (Creative Commons blogs will be added as time goes on), but more important to me is functionality.

AI3 was not designed with a small set of features in mind. AI3 was designed to improve by adding artificially intelligent algorithms one by one into the site, so that a natural language parser need only request the data it wants and formulate an intelligent idea from it. In its current form it can answer 4 of the 5 key types of questions that are answerable given the data set. That's good progress. How do we learn to answer statements, commands, and more difficult questions? Answering anything is a matter of dissecting the query and response for a pattern. We just so happen to have a few systems of pattern matching that we can use to work on this problem.

The most rigid system of pattern matching is hardly worth mentioning. We use it for commands issued at a command prompt sometimes:
sentence.strip().lower() == "hello."
Slightly more useful is splitting and testing.

words, punctuation = stripPunctuation(sentence.strip().lower())
matches = []
for word in words:
    if word in knownWords:
        matches.append(word)
    #end if
#next word
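The stripPunctuation helper above isn't defined in the snippet, so here is a minimal sketch of what it might look like (an assumed implementation, not AI3's actual code): it splits a sentence on whitespace and separates the punctuation characters from the words.

```python
import string

def stripPunctuation(sentence):
    """Split a sentence into (words, punctuation).
    Hypothetical helper; the real AI3 implementation may differ."""
    words = []
    punctuation = []
    for token in sentence.split():
        stripped = token.strip(string.punctuation)
        if stripped:
            words.append(stripped)
        # record any punctuation characters attached to this token
        punctuation.extend(ch for ch in token if ch in string.punctuation)
    return words, punctuation

words, punct = stripPunctuation("i am a fireman.")
# words -> ['i', 'am', 'a', 'fireman'], punct -> ['.']
```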

The obvious weakness here is that the matching ends up either too loose or too strict. How do we pattern match both:
I am a fireman.
I work as a fireman.
without writing two patterns? Let's look at this closer. If I wrote a two-level pattern system, I could turn "I am a fireman." into (isa I fireman). Then I could turn "I work as a fireman." into (worksAs I fireman). Then my code needs to translate (worksAs I fireman) into (isa I fireman). That means I need 3 patterns to do what I could do with 2. I'm going backwards, aren't I? But wait, I've got a good idea. What if I could use another level of abstraction? A part of speech tagger is capable of turning a sentence into a list of tags, so our first pattern system can work on all verbs at once. Let's look at an example:

I    am  a  fireman . 
PPSS BEM AT NN      . 
I    work as a  fireman . 
PPSS VB   CS AT NN      . 

Using the Brown tagging system found in NLTK we are able to get pronoun, verb, "to be", article, conjunction, and noun. The correct tag for "as" in this use is preposition, but it's only a 90% accurate tagger. So we are able to say that PPSS BEM AT NN translates into ((I, we, they, you), (am, are), (a, the), noun), and therefore it can be translated into (isa word[0] word[3]). The first sentence is now bagged, and so is "we are a group" and every other sentence that ends up with the tags PPSS BEM AT NN.

The second sentence is even easier. We say that PPSS VB CS AT NN translates into ((I, we, they, you), verb, (that, as, after, whether, before, while, like, because, if, since, for, than, until, so, unless, though, providing, once, lest, till, whereas, whereupon, supposing, albeit, then), (a, the), noun). If you look at the long list of conjunctions, you will see that most don't work: "I verb since a noun." But that doesn't matter, because we can just translate it into ((word[1] word[2]) word[0] word[4]). Done.

Guess how many combinations we have with this little trick? Assuming 10 valid verbs (might be possible) and 10 valid nouns, we have 100 valid sentences that we can answer. How do we detect invalid sentences? Anything that doesn't immediately resolve is something we don't understand, so we can simply take the input, check it against our system, and if we get an output, we understand it. If it's invalid, we give the same response as if we weren't intelligent enough to understand. That may not be ideal, but it's a good choice.
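The tag-pattern idea above can be sketched in a few lines. The tag sequences below are hand-supplied for clarity (in practice they would come from NLTK's Brown-corpus tagger), and the pattern table is illustrative, not AI3's actual code:

```python
# Map a Brown tag sequence to a rule that builds a relation triple.
# Word indices refer to positions in the tagged sentence.
PATTERNS = {
    # "I am a fireman." -> (isa I fireman)
    ("PPSS", "BEM", "AT", "NN"): lambda w: ("isa", w[0], w[3]),
    # "I work as a fireman." -> (workAs I fireman); normalized below.
    ("PPSS", "VB", "CS", "AT", "NN"): lambda w: (w[1] + w[2].capitalize(), w[0], w[4]),
}

# Relation-level rewrites: the article's (worksAs X Y) -> (isa X Y) step.
REWRITES = {"workAs": "isa"}

def parse(tagged):
    """tagged: list of (word, tag) pairs, e.g. from a Brown-tagset tagger.
    Returns a normalized triple, or None if the sentence isn't understood."""
    words = tuple(w for w, t in tagged)
    tags = tuple(t for w, t in tagged)
    rule = PATTERNS.get(tags)
    if rule is None:
        return None  # unrecognized: respond as if we don't understand
    rel, subj, obj = rule(words)
    return (REWRITES.get(rel, rel), subj, obj)

print(parse([("I", "PPSS"), ("am", "BEM"), ("a", "AT"), ("fireman", "NN")]))
# -> ('isa', 'I', 'fireman')
print(parse([("I", "PPSS"), ("work", "VB"), ("as", "CS"), ("a", "AT"), ("fireman", "NN")]))
# -> ('isa', 'I', 'fireman')
```

Both sentences collapse to the same triple, and any tag sequence missing from the table resolves to None, which is exactly the "we don't understand" fallback described above.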

So how do we turn (isa I fireman) into a response? First, think about context. If we are at the dialtone with someone (we don't know them, they don't know us), saying "I am a fireman" should not elicit an answer. If we are talking and there's a moment of silence and you say "I am a fireman," I may just accept that as an awful icebreaker. "Oh cool" is a valid response from a person of my age. When I told my friends "I am a hacker," they said "That's cool, we like hackers," which is a wonderful thing to hear in response. So in order to respond, we have to have a model for communication already set up to which we can supply data. The first sentence which matches our pattern can be found using the similar function, aka Words Used Together.

"I am a Jew" is a surprisingly difficult statement to answer. If the AI is also a Jew (unlikely, I know), it can respond "I am too." Since none of my AIs are particularly religious (which could change after they read a bit more), I think the proper response in dialtone is the good old "Okay." This type of greeting should warm the hearts of Seattleites and cold fish alike. Statements that are more topical may elicit a more lively response.

So how many patterns do I have to write before this system becomes a bit more intelligent than describing Abraham Lincoln in a sentence? Using partial matches (which no doubt will involve false positives and so forth), I need to write dozens or even hundreds of tag matches to deal with the various intricacies of speech. Good news though: once I write a handful of patterns I can get examples by searching the database. How cool is that? My next feature will be to run my current tag AI on the entire database. Sounds fun.
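A minimal sketch of the response step, assuming parsed statements arrive as (predicate, subject, object) triples and context is reduced to a single conversation-state flag (both are simplifications for illustration, not AI3's actual model):

```python
def respond(triple, state="dialtone"):
    """Pick a canned response for a parsed statement given conversational state.
    'dialtone' means the two parties don't know each other yet."""
    rel, subj, obj = triple
    if rel == "isa" and subj == "I":
        if state == "dialtone":
            return "Okay."  # the safe default when talking to a stranger
        return "Oh cool."   # warmer once a conversation already exists
    return "I don't understand."  # unrecognized relation

print(respond(("isa", "I", "fireman")))             # -> Okay.
print(respond(("isa", "I", "fireman"), "talking"))  # -> Oh cool.
```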

Javantea out.

