Created by Alan Turing, the test is designed to tell whether or not a program is actually thinking. The test has a long history of proponents, opponents, and variations, which makes it simplistic (to this author) to say a program “passed” it. The Loebner Prize competition consists of a 25-minute text conversation that a judge holds with the program and a human participant simultaneously. In “Suzette’s” case, the judge decided incorrectly which was the human and which was the machine.
While I admit that conversations with different individuals can have different outcomes and flow, my own experience with “Suzette” left me a bit unimpressed. Observe below:
With the capability to learn and parse more and more data, I can see this conversation going differently in the future, perhaps so differently that I won’t be able to tell Suzette’s responses from a human’s.
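For readers curious why a judge can usually still tell the difference, here is a minimal sketch of the rule-based pattern matching that chatbots of Suzette’s generation rely on. Suzette itself runs on Bruce Wilcox’s ChatScript engine, which is far more sophisticated; the rules below are hypothetical examples of the general technique, not Suzette’s actual rule set.

```python
import re

# Hypothetical pattern/response rules, checked in order.
# Real engines like ChatScript have thousands of far richer rules.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I),  "Tell me more about your {0}."),
    (re.compile(r"\byou\b", re.I),       "We were talking about you, not me."),
]
FALLBACK = "Interesting. Go on."

def respond(utterance: str) -> str:
    """Return the reply for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK  # canned reply when no rule matches

if __name__ == "__main__":
    print(respond("I feel like you never listen"))    # Why do you feel like you never listen?
    print(respond("What's the weather like today?"))  # Interesting. Go on.
```

The more rules and learned data an engine like this can draw on, the less often it falls back on canned replies, which is why feeding it more data could eventually make its side of the conversation much harder to distinguish from a human’s.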
I think the most important question at this point is: would that be a good thing or not?
I could probably go either way on that question. On the one hand, I’m a huge fan of emerging technology, and I’d love to see the positive changes AI could bring to science, medicine, and computing. I don’t necessarily think Eric Schmidt was wrong when he said, “It’s a bug that cars were invented before computers,” and that is just a fraction of what AI is capable of.
As a guy who’s seen too many apocalyptic sci-fi flicks, though, I have to wonder: Terminator, The Matrix, could it all happen too? What safeguards can you build into an intelligence that is capable of increasing its own capability exponentially? Let me know what you think in the comments.