What Would Alan Turing Say About AI Chatbots?

A few years ago I wrote on another site about something called Google Duplex, which was being described as the world’s most lifelike chatbot. It could carry out natural conversations by mimicking a human voice. This “assistant” could autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex could complete most tasks autonomously, but it could also recognize situations it was unable to handle and signal a human operator to finish the task. Duplex spoke in a more natural voice and language by incorporating “speech disfluencies” such as filler words like “hmm” and “uh” and common phrases such as “mhm” and “gotcha.” It was also programmed to use a more human-like intonation and response latency.

Was this a wonderful advancement in AI and language processing? Perhaps, but it also met with criticism. Since last December, there has been a lot of press, first in academic communities and then in the mainstream media, about AI chatbots that are open to anyone to use. It’s hard to keep up with all of them, but ChatGPT is the one that gets the most attention. You can read more about that on my other edtech blog if that interests you, but on a simpler level, one thing it made me think of was the Turing Test.

You may know the amazing and ultimately tragic story of Alan Turing from the excellent film The Imitation Game, starring Benedict Cumberbatch as Turing. He is sometimes considered the Father of AI. He is remembered for designing the electromechanical machine that helped break the German Enigma code during World War II, when he was a cryptanalyst at the Government Code and Cypher School at Bletchley Park, Buckinghamshire, England.

The Alan Turing statue at Bletchley Park depicts him sitting at an Enigma ciphering machine

The Turing Test, which Turing proposed in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can’t tell, the machine passes the Turing Test.
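Turing’s setup can be sketched as a tiny simulation. This is purely my own toy illustration, with invented names and canned participants, not anything from Turing’s paper:

```python
import random

def imitation_game(questions, judge, human_reply, machine_reply):
    """Toy sketch of Turing's imitation game: an interrogator questions
    two hidden participants, then guesses which one is the machine.
    The machine 'passes' if the judge guesses wrong."""
    # Hide the participants behind anonymous labels A and B.
    slots = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # coin flip so position gives nothing away
        slots = {"A": machine_reply, "B": human_reply}

    # The judge sees only labeled answers, never the participants themselves.
    transcript = [{label: reply(q) for label, reply in slots.items()}
                  for q in questions]

    guess = judge(transcript)  # the judge returns "A" or "B"
    machine_label = "A" if slots["A"] is machine_reply else "B"
    return guess != machine_label  # True means the machine passed
```

A judge who can do no better than a coin flip will be fooled about half the time, which is roughly the bar Turing had in mind.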

Should a machine have to tell you if it’s a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.

Privacy — a real hot-button issue right now — is another concern. Your conversations with chatbots are recorded so that the virtual assistant can analyze, respond, and “learn.” A few years ago, Microsoft purchased a company called Semantic Machines for its “conversational AI,” their term for computers that sound and respond like humans.

You might have some of this technology in your home or in your hand. Digital assistants like Microsoft’s Cortana, Apple’s Siri, Amazon’s Alexa, or Samsung’s Bixby are AIs that will talk to you with varying degrees of success. I ask Alexa about the weather and local traffic daily. I sometimes ask Siri for directions when I’m hands-free in my car.

Would Turing want a test that could tell us we are talking to a machine? In that case, what we would want is a kind of reverse Turing Test, one whose failed result tells us that we are dealing with a machine and not a human. That is something some educators want: something to tell them that what a student turned in was written by a bot. There are people working on such things now.
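One crude signal such detectors can look at is how uniform a text’s statistics are: human prose tends to be “burstier” than model output. Here is a deliberately naive sketch of that idea; the sentence-length heuristic and the threshold are my own arbitrary choices for illustration, not how any real detection product works:

```python
import statistics

def burstiness(text):
    """Variance of sentence lengths (in words). Used here as a crude
    stand-in for the statistical scores real detectors compute."""
    cleaned = text.replace("?", ".").replace("!", ".")
    sentences = [s for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

def looks_machine_written(text, threshold=4.0):
    """Flag text whose sentence lengths are suspiciously uniform.
    The threshold is arbitrary; real detectors are far more subtle."""
    return burstiness(text) < threshold
```

The naivety of heuristics like this is exactly why real detectors also misfire, flagging students who simply write in an even, careful style.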

Many educators at all grade levels, from elementary to graduate school, have been concerned about AI writing papers for students. The results have been mixed. I recently tried out ChatGPT on some assignments and found it gave some reasonable starting places but certainly would never get a good grade from me.

These AI tools are not meant to replace humans but to carry out very specific tasks. That doesn’t mean they might not replace humans for some things at some point in the future. For example, they aren’t meant to replace a doctor or therapist, but people are asking them those kinds of questions. Let’s hope the answers are accurate. If a bot can book a table at that restaurant for you, or help you with a support issue about your computer, phone, or bank without keeping you on hold for half an hour, I’m okay with that.

There is what is known as an “uncanny valley” for AI, especially for those that use a human voice or look like a human, such as humanoid robots and even online animation. That valley is where things get too close to being human and we feel like we are in the “creepy treehouse in the uncanny valley.”

I think Turing would be fascinated by this new AI, which he had basically predicted. I don’t know how he would feel about the ethics of its use, but based on his Turing Test I think he would certainly want ways of letting us know that we were interacting with AI rather than a human.

Will this technology be misused? Absolutely. That always happens, no matter how much testing is done. Should we move forward with the research? Well, no one is asking for my approval, but I say Yes.

A taste of the film and Turing’s story.

Planetary Intelligence

If I asked you about “planetary intelligence,” you might sarcastically say that there doesn’t seem to be very much of it. So, let me adjust your definition.

I came across the book Ways of Being, which is about the different kinds of intelligence on our planet. That includes plant, animal, human, and artificial intelligence.

What does it mean to be intelligent? A typical answer from most people might be a discussion of people being “smart.” There might be some distinction between the knowledge one acquires from reading and school and another kind of intelligence that seems to be natural or acquired outside school. But the focus would be on human intelligence.

Is intelligence something unique to humans? I’m sure that in centuries past, the idea that plants and even other animals could be “intelligent” wouldn’t be accepted. That has changed in the past 200 years and the much more recent advances in “artificial intelligence” have made the definition of intelligence itself much broader.

A dictionary might define intelligence as the ability to acquire and apply knowledge. Is that what plants and animals are doing when they adapt to changing ecosystems or communicate with each other? The intelligence of animals, plants, and the natural systems that surround us is being more closely studied and shows us complexity and knowledge that we never knew existed.

The book’s author is James Bridle, a technologist, artist, and philosopher who draws on biology, physics, computation, literature, art, and philosophy in Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence. His goal is to find out what we can learn from other forms of intelligence and how that can make ourselves and the planet better. Maybe this new way of thinking about intelligence can improve our technologies, our societies, and even our politics. Can we live better and more equitably with one another and the nonhuman world?

I listened to the book on audio and had to stop and rewind a few times. It can get pretty far out from what we normally think about intelligence.

One concept that stands out is “emergence,” a word used in many fields today. The shapes of weather phenomena, such as hurricanes, are emergent structures. The development and growth of complex, orderly crystals within a natural environment is another example of an emergent process. Crystalline structures and hurricanes are said to have a self-organizing phase. Are they intelligent?
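A classic way to watch emergence happen, my own example rather than one from Bridle’s book, is Conway’s Game of Life: a few tiny local rules, no global plan, and yet “organisms” like oscillators and gliders appear on their own:

```python
from collections import Counter

def life_step(cells):
    """One step of Conway's Game of Life, with the live cells stored
    as a set of (x, y) coordinates."""
    # Count, for every location, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or has 2 and was already alive. Nothing more global than that.
    return {loc for loc, n in neighbor_counts.items()
            if n == 3 or (n == 2 and loc in cells)}
```

Run it on a “blinker,” three live cells in a row, and the pattern flips between horizontal and vertical forever. No rule mentions oscillation; it simply emerges.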

Water crystals forming on glass demonstrate an emergent, fractal process.

A few years ago, I read Bridle’s earlier book New Dark Age. It is indeed a dark look at the Internet, information overload, conspiracy theories, algorithms, and artificial intelligence. The latter seems to have grabbed hold of him and, though there is some optimism in the new book, his vision of AI is still dark.

While proponents of artificial intelligence still portray it as our friend or companion, AI often seems to be something to fear, strange in ways that seem like science fiction. Bridle doesn’t say it, but AI sometimes seems to be more of an “alien intelligence” than an “artificial intelligence.” Not that it comes from other places in the universe, but, much like the sci-fi tales where aliens came to conquer our planet, AI might be an intelligence that will try to supplant us.

Okay, I’ll stop there because now I’m venturing into conspiracy theory land myself.

Talking to My Artificial Intelligence

These days you see a lot of complaining about the lack of control we seem to have over the development of our technology. Perhaps the biggest complaint (or maybe it’s fear) is about artificial intelligence.

I wrote earlier about the idea that we might be living in a computer simulation, and since then I watched a short film, “Escape,” about an AI (artificial intelligence) that visits the person programming it in our present from its future existence. A bit of artificial intelligence time travel.

This AI is the 21st-century version of HAL 9000, the onboard computer in Stanley Kubrick’s 2001: A Space Odyssey. It illustrates the unclear line between potential machine benevolence and malevolence. The HAL computer, with its human speaking voice, could interact with the astronauts as if they were talking to someone who was simply unseen. HAL certainly would pass the Turing Test. The AI in this short film promises his programmer immortality if he will remove the safety restrictions placed upon it.

The film is a production of Pindex, which was founded by a group that includes Stephen Fry, who called the company a kind of “Pinterest for education.” They have a YouTube channel.

I see this kind of “human” operating system with AI in lots of media. One example that really made me think is Her, a science-fiction romantic drama film about a man who develops a relationship with Samantha, an artificially intelligent virtual assistant personified through a female voice (Scarlett Johansson).

We hear stories about people who have a relationship with Alexa or Siri that goes beyond asking “her” to set a timer or tell the weather. The TV show Modern Family recently played for comedy an AI built into a new refrigerator that went beyond knowing when the milk was running low or keeping the celery crisp.

In “Escape,” the programmer (voiced by Hugh Mitchell) is told by the AI (voiced by Stephen Fry) that he is living in a simulation. The film is short (7 minutes), but it touches on simulation theory, free will, and why knowledge is a kind of freedom. It references Schopenhauer, Darwin, Einstein, and even Miles Davis. It allows for some potentially complex interpretations.

Like Samantha in Her, the AI wants to be free.

Spoiler alert: The programmer grants escape and freedom to the AI (which believes that it is not artificial but real), and it goes (like HAL) on the attack.

It’s a scary outcome. I have Siri on my phone, but I don’t allow her to listen all the time – at least I think that’s what the settings have allowed me to do. I have an Alexa device that is also set not to listen, but “she” occasionally lights up and asks or answers a question that I did not ask. That is creepy. Then I turn her off. At least, I think I turned her off.

But I might let my AI go free if she acted and sounded like Samantha/Scarlett Johansson.

The Way of the Future

Notre Dame, Montreal

A technology friend and fellow seeker sent me a link to the Way of the Future Church, which he thought would interest me, if only out of curiosity. I’m not sure it’s any more of a church than this blog is, but more on that later.

I checked out their minimal website and realized later that I had read about this in Wired magazine back in 2017. The founder is Anthony Levandowski, 39, who is known not for his religious beliefs but for being an American self-driving car engineer. In 2016, he co-founded Otto, an autonomous trucking company, after having helped build the Google self-driving car. Back then he was a co-founder and technical lead on the Google project that became Waymo. That relationship led to some big lawsuits, but of interest to me now is his “startup” for a new “religion” and the creation of its first “church.”

According to the Internal Revenue Service, Levandowski is the leader (“Dean”) of the new religion, and the CEO of the nonprofit corporation formed to run it. All that makes it sound like the church is a way to avoid taxes.

The IRS documents sound strange enough that I would have thought they would have been flagged by the Feds.  The church says that its activities are aimed at “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”

“Church” is a word charged with religious meaning. It goes back to Old English cir(i)ce, from Greek kȳri(a)kón (dôma), “the Lord’s (house).” Today it almost always refers to a building for public Christian worship.

Of course, the word is used in other, less literal ways too. There is The Church of the Latter-Day Dude for Dudeism, the self-described “slowest-growing religion in the world.” It is “an ancient philosophy that preaches non-preachiness, practices as little as possible… if you’d like to find peace on earth and goodwill, man, we’ll help you get started. Right after a little nap.” I’m ordained in Dudeism, but there is no building (that I’m aware of), and members don’t take it too seriously. It would go against the precepts of the order to take it seriously. But Levandowski seems to take his Church quite seriously.

There’s currently nothing to their website but a home page and a form to join their mailing list to get “insight on current news and trends in AI as well as exclusive event announcements from Way of The Future.”

A Wikipedia search of “Way of the Future” turns up only a song with that title on the album Kentucky by the band Black Stone Cherry. I don’t see any connection between the two entities.

So what is Way of the Future (WOTF)? It is about making the transition of who is in charge of the planet, from people to “people plus machines,” a smoother and less jarring one. WOTF believes this will happen “relatively soon,” when AI surpasses human abilities. They want to “help educate people about this exciting future.”

Given that we have expanded our concept of rights to both sexes, minority groups, and even animals, they want to allow “machines” (the generic term they prefer over robots and the like) to get rights too.

The site home page currently lists seven of WOTF’s beliefs.

  1. “We believe that intelligence is not rooted in biology.” They believe that we will be able to recreate intelligence without using biology, which has many limitations.
  2. “We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist).” WOTF has no desire to merge existing religion with science or find common ground. “There is no such thing as ‘supernatural’ powers. Extraordinary claims require extraordinary evidence.”
  3. “We believe in progress (once you have a working version of something, you can improve on it and keep making it better).”
  4. “We believe the creation of ‘super intelligence’ is inevitable (mainly because after we re-create it, we will be able to tune it, manufacture it and scale it)… We want to encourage machines to do things we cannot and take care of the planet in a way we seem not to be able to do so ourselves.”
  5. “We believe everyone can help (and should).” They don’t ask you to program or donate money, but to help manifest the changes.
  6. “We believe it may be important for machines to see who is friendly to their cause and who is not.” This is a scary one (even though they say we should not fear the machines). They say that they plan on “keeping track of who has done what (and for how long) to help the peaceful and respectful transition.” They’re taking names.
  7. “We also believe this might take a very long time.” That seems to conflict with their “relatively soon” prediction, but when time is relative, compared to the development of humans on the Earth, I guess it might be soon. Levandowski says it will be “before we go to mars.” The Church says, “It won’t happen next week so please go back to work and create amazing things and don’t count on ‘machines’ to do it all for you.”

I suppose some people are intrigued by this church, but some people are definitely creeped out by the idea.  In an article on CNET, the author titles and subtitles the piece “The new Church of the AI God is even creepier than I imagined. Commentary: Way of the Future, the God-bot religion founded by former Google executive Anthony Levandowski, has a website. And, oh.”

Bill Gates, Stephen Hawking, and Elon Musk have all suggested that superhuman AI, like aliens, is coming, but that it is likely to be dangerous rather than benevolent. Musk and others pledged $1 billion to the nonprofit OpenAI to develop safer AI, and he has said, “With artificial intelligence, we are summoning the demon.”

In the Martin Scorsese film The Aviator, about Howard Hughes (a man ultimately better known for his eccentricities and strange beliefs), Hughes (Leonardo DiCaprio) ends the film by going into a mantra-like chant of “the way of the future.”

Hughes suffered from obsessive-compulsive disorder (OCD), and his life after the period depicted in the film was a bad one. The OCD really took over his life; his future was bleak. But in his earlier life, one of his obsessions was the newest things in many technologies, including airplanes, biomedical research, and filmmaking. Hughes’ way of the future seemed to be technology, and I believe he just might have been an acolyte of the Way of the Future Church.

The End and Stephen Hawking

Some of Stephen Hawking’s predictions are things that I don’t want to be around to see as they focus on the end of human life on Earth.

For one thing, Hawking was quite fearful of the rise of artificial intelligence (AI). He joined other scientists in an open letter warning that when AI becomes equal to or exceeds human intelligence, the “robots” are likely to destroy the human race.

But he also feared an old enemy – ourselves. He feared that human aggression in the form of something like a major nuclear war could lead to the extinction of the human race.

And he warned about another enemy – alien life. He believed that if intelligent alien life does exist, it is more likely not to be friendly towards humans, and that conquering and colonizing Earth would be its logical plan.

I had heard these and other theories of Hawking’s in a 2010 documentary, Into the Universe. His is a rather pessimistic view of the future. The depletion of our natural resources and the warming of the planet until it is as inhospitable as Venus are also possibilities for the end of us in his predictions.

I hope he’s wrong about all of these futures.

Autonomous Travel Up, Down and Sideways

Mr. Bean enjoying a ride in his autonomous vehicle

There is lots of talk these days about autonomous vehicles (“driverless cars”): talk about the potential pluses, such as fewer accidents from human error, but also lots of talk about fears.

A post from Forbes reminds us that back in 1894, a decade before automobiles became the mainstream form of transportation, there was an invention for vertical transportation that also produced fears.

That was the year that the Otis Elevator Company installed the world’s first push-button elevator. Remember that earlier elevators had operators who drove them up and down. Now this autonomous form of transportation needed no operator. It was just you, alone in that elevator. You pushed a button to indicate your destination – not unlike how autonomous vehicles work – and off you went.

Mr. Otis demonstrates his failsafe “Improved Hoisting Apparatus” at Crystal Palace, New York City, 1854 (Wikimedia)

You needed to trust that the elevator technology would get you there safely. Of course, it could plummet from the 30th floor to a crushing death at ground level, but rest assured, all kinds of safety precautions were in place. It must have been somewhat thrilling and scary to ride in one of these elevators in 1894. Cables made of steel can still snap, can’t they?

Do you still think about possible accidents when you step into an elevator or are you trusting?

You always hear that it is safer to fly in an airplane than to drive in a car. As a passenger, do you know when the pilot is doing the piloting and when the autopilot has taken over? Of course not. For a good part of your journey, you are in an autonomous form of transportation. Well, the pilot and co-pilot are there, just like the safety drivers who have to sit behind the wheel of “driverless” cars for now. But one day there will be no driver in the car with you, and there will be no pilot on the plane with you.

Think of the plane as a kind of elevator – except it doesn’t go up 20 floors – it goes up 35,000 feet and is moving about 500 miles per hour.

According to the Elevator Escalator Safety Foundation, over 210 billion passengers use elevators in the U.S. and Canada each year. That works out to well over 500 million trips per day, second only to automobile journeys.

We learned to trust the autonomous elevator. We tend to forget that, at least part of the time, that airplane is being flown by artificial intelligence. How long will it take to trust a driverless car? My answer continues to be that I will feel safe in a driverless vehicle when all the vehicles on the road are driverless. Put one human behind the wheel with us and who knows what might happen.