Colin McEnroe Show: Is Watson The Harbinger Of An A.I. Armageddon?

Wolfram|Alpha explains the latest A.I. craze.

Image: Watson (Flickr Creative Commons, p_a_h)

You are surrounded by Artificial Intelligence. It's in your smartphone, in your DVR, probably even in your refrigerator.  

There's something called the AI Effect, which refers to the way we quickly incorporate the latest advance into our understanding of what the world is. A talking navigation system, for example, went from wild futuristic imagining to humdrum standard feature in just a few years.
 
The joke among researchers is that AI means anything that hasn't been done yet.  We just keep moving the goal posts and saying "Well, maybe it can solve the tricky clues on Jeopardy, but that's not real intelligence." 
 
But what is real intelligence? Machines tend to lag behind in odd areas, like spatial orientation, small talk and object recognition. Watson may win the Jeopardy prize, but you wouldn't be able to go out for a beer with it afterwards.
 
Or would it?
 
Leave your comments below, e-mail colin@wnpr.org or Tweet us @wnprcolin.

  

Comments

IBM's Watson: personal observations.

I did not get a chance to call in while this show was on (cell phone use while driving is frowned on). I was working at IBM during Watson's original testing phase and got to run some tests against it (i.e., play Jeopardy!). I enjoyed the show and wanted to share my comments, better late than never.

The most important thing I told the Watson crew was to study the wrong answers carefully. One thing I noticed (which both you and the Wolfram|Alpha speaker pointed out on your show) is that, all too frequently, Watson's wrong responses weren't just wrong, they were way wrong. There isn't room here for the full set of observations I sent to the Watson team after playing it; you hit on many of them, but a few were not covered on your show.

The key point is this: when Watson got a question right, the developers learned nothing; when it got one wrong, they stood to learn a lot about how Watson's strategy fails in the real world. Your guest hit on one aspect of this regarding the "A or B" questions: when Watson had to answer "A or B," it often gave an answer with the A part correct but the B part incorrect, because of the "confidence problem"... that is, Watson assigned a high confidence to the A part, and that confidence swamped the low confidence the B part would have provided.
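For what it's worth, here is a minimal sketch of that swamping effect. This is not Watson's actual scoring code; the combining rules, names, and numbers are made up purely to illustrate the idea that averaging part confidences lets one strong part carry a weak one over the buzz-in threshold, while a weakest-link rule does not.

    # Hypothetical illustration of the "confidence swamping" failure mode.
    # None of this reflects Watson's real implementation.

    BUZZ_THRESHOLD = 0.5  # made-up threshold for deciding to answer

    def combined_by_average(confidences):
        """Averaging rule: a very confident part can drag a weak part over the threshold."""
        confidences = list(confidences)
        return sum(confidences) / len(confidences)

    def combined_by_minimum(confidences):
        """Weakest-link rule: a two-part answer is only as good as its least-confident part."""
        return min(confidences)

    # Two-part answer: nearly certain about part A, close to guessing on part B.
    parts = {"A": 0.95, "B": 0.25}

    avg = combined_by_average(parts.values())
    low = combined_by_minimum(parts.values())

    print(f"averaging rule:    {avg:.2f} -> buzz in: {avg >= BUZZ_THRESHOLD}")  # 0.60 -> True (answers, B is wrong)
    print(f"weakest-link rule: {low:.2f} -> buzz in: {low >= BUZZ_THRESHOLD}")  # 0.25 -> False (stays silent)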

On a side note, my colleagues who tested Watson often lost to it or tied; even when they won, the scores were usually close. In my case I thrashed it (53-1 and 48-0). I was laid off shortly thereafter... ironic, isn't it?

E-mail from Megan

As I was watching Jeopardy last night, I was reminded of the Heinlein book "The Moon Is a Harsh Mistress" and wondered how long it will be before we can train computers to understand jokes, and what kind of programming would go into something like that.