A few years ago I wrote on another site about something called Google Duplex, which was being described as the world's most lifelike chatbot. It could carry out natural conversations by mimicking a human voice. This "assistant" could autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex can complete most tasks autonomously, but it can also recognize situations it is unable to handle and signal a human operator to finish the task. Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as the filler words "hmm" and "uh" and common phrases such as "mhm" and "gotcha." It is also programmed to use a more human-like intonation and response latency.
Was this a wonderful advancement in AI and language processing? Perhaps, but it also met with criticism. Since last December, there has been a lot of press, first in academic communities and then in the mainstream, about AI chatbots that are open to anyone to use. It's hard to keep up with all of them, but ChatGPT is the one that gets the most attention. You can read more about that on my other edtech blog if it interests you, but on a simpler level, one thing it made me think of was the Turing Test.
You may know the amazing and ultimately tragic story of Alan Turing from the excellent film The Imitation Game, starring Benedict Cumberbatch as Alan. He is sometimes considered to be the Father of AI. He is remembered for developing the machine that broke the German Enigma code during World War II, when he was a cryptanalyst at the Government Code and Cypher School at Bletchley Park, Buckinghamshire, England.

The Turing Test, which Turing proposed in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.
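For the technically curious, here is a toy sketch in Python of how the imitation game could be staged. Everything in it is hypothetical and for illustration only: human_reply and machine_reply stand in for a real person at a keyboard and a real chatbot.

```python
import random

# A toy staging of Turing's imitation game: a judge exchanges text with two
# unseen respondents, one human and one machine, then guesses which is which.
# Both reply functions are hypothetical stand-ins for illustration only.

def human_reply(prompt: str) -> str:
    return input(f"(human respondent) {prompt}\n> ")

def machine_reply(prompt: str) -> str:
    return "Hmm, that's an interesting question..."  # stand-in for a real chatbot

def imitation_game(questions: list[str]) -> bool:
    # Assign the respondents to anonymous labels in random order so the
    # judge cannot identify the machine by its position.
    replies = [human_reply, machine_reply]
    random.shuffle(replies)
    labels = dict(zip("AB", replies))
    for question in questions:
        print(f"\nJudge asks: {question}")
        for label, reply in labels.items():
            print(f"  {label}: {reply(question)}")
    guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
    # The machine "passes" this round if the judge fails to pick it out.
    return labels.get(guess) is not machine_reply

if __name__ == "__main__":
    passed = imitation_game(["What did you have for breakfast today?"])
    print("The machine passed." if passed else "The judge spotted the machine.")
```

Of course, a single lucky guess proves nothing; the game has to be run over many rounds with many judges before "passing" means anything.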
Should a machine have to tell you that it's a machine? After the Duplex announcement, people began raising ethical and societal questions about this use of artificial intelligence.
Privacy, a real hot-button issue right now, is another concern. Your conversations with chatbots are recorded so that the virtual assistant can analyze, respond, and "learn." A few years ago, Microsoft purchased a company called Semantic Machines for its "conversational AI." That is their term for computers that sound and respond like humans.
You might have some of this technology in your home or in your hand. Digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa, or Samsung's Bixby are AI that will talk to you with varying degrees of success. I do ask Alexa about the weather and local traffic daily, and I sometimes ask Siri for directions when I'm hands-free in my car.
Would Turing want a test that could tell us when we are talking to a machine? In that case, a failed reverse Turing Test is what would tell us that we are dealing with a machine and not a human. That is something some educators want: a tool that tells them whether what a student turned in was written by a bot. There are people working on such tools now.
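As a rough sketch of one idea behind such detectors (not any particular product's method): text generated by a language model tends to look unusually predictable to a language model. This toy version scores text by perplexity using the open GPT-2 model from Hugging Face's transformers library; the threshold is an arbitrary assumption for illustration, not a calibrated value.

```python
# A sketch of one detection idea: machine-generated text tends to be more
# predictable (lower perplexity) under a language model than human writing.
# Real detectors are far more sophisticated; the threshold below is an
# arbitrary assumption for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

def looks_machine_written(text: str, threshold: float = 25.0) -> bool:
    # Unusually low perplexity suggests machine-generated text. This is a
    # crude heuristic with plenty of false positives and false negatives.
    return perplexity(text) < threshold
```

A heuristic this crude will flag plenty of human writing and miss plenty of bot writing, which is part of why reliable detection remains an open problem.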
Many educators at all grade levels, from elementary school to graduate school, have been concerned about AI writing papers for students. The results so far have been mixed. I recently tried ChatGPT on some of my own assignments and found that it gave some reasonable starting places but would certainly never get a good grade from me.
These AI tools are not meant to replace humans but to carry out very specific tasks. That doesn't mean they might not replace humans for some things at some point in the future. For example, they aren't meant to replace a doctor or therapist, but people are asking them those kinds of questions. Let's hope the answers are accurate. If a bot can book a table at a restaurant for you, or help you with a support issue about your computer, phone, bank, or other service without leaving you on hold for a half hour, I'm okay with that.
There is what is known as an "uncanny valley" for AI, especially for AI that uses a human voice or looks human, such as humanoid robots and even online animation. The valley is where something gets close to being human, but not quite close enough, and it unsettles us; we feel like we are in the "creepy treehouse in the uncanny valley."
I think Turing would be fascinated by this new AI, which he basically predicted. I don't know how he would feel about the ethics of its use, but based on his Turing Test I think he would certainly want ways of letting us know that we are interacting with AI rather than a human.
Will this technology be misused? Absolutely. That always happens, no matter how much testing is done. Should we move forward with the research? Well, no one is asking for my approval, but I say Yes.
A taste of the film and Turing’s story.