Computer Programs Are Now Officially More Human Than Humans

In an event that happens to fall during the 100th anniversary of the birth of mathematician and computer scientist Alan Turing, two individuals have won a prize by passing the test that bears his name. Neither of those individuals is celebrating with a glass of champagne, however, because neither of them is technically human; they are computer programs. But as a result of winning a recent competition sponsored by 2K Games, they are now officially human, and in fact more human than most of the authentic humans they competed against.

In 1950, Alan Turing proposed the foundational definition of what would constitute true Artificial Intelligence (AI): a computer program that could successfully emulate human intelligence. The Turing Test set out criteria for measuring AI, the most important of which was "Can it fool human beings into believing that they're interacting with another human being?" Many attempts have been made over the years to pass the Turing Test, but none had succeeded until now.

Meet UT^2 and MirrorBot, our new honorary humans

UT^2 and MirrorBot are technically called "bots," a term for computer programs designed to perform complicated tasks, often including conversing with humans in natural language (in this case, English) as convincingly as if a human were sitting at the computer typing, rather than a program running on it. Think "HAL" in the movie "2001: A Space Odyssey." If hearing his voice made you believe that you were interacting with a human, then for you HAL passed the Turing Test.

MirrorBot, designed by Romanian computer scientist Mihai Polceanu, won the actual competition, but shares the title (and the $7,000 first prize) with UT^2, designed by a team led by professor Risto Miikkulainen, which (who?) won a warm-up competition held last month. Both bots were created to play "Unreal Tournament 2004," a first-person shooter multiplayer game, competing against both other bots and real humans. Each player scores points by eliminating opponents, but the human players had, in addition to their arsenal of virtual weapons, a "judging gun" that they used to indicate whether they thought the opponent they were facing was a human or a bot.
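To make the scoring concrete: a player's "humanness rating" can be read as the share of judging-gun verdicts that tagged that player as human. The sketch below is purely illustrative (the function name and data shape are my own assumptions, not the official BotPrize scoring code), but it shows how 52 percent could arise from individual judgments.

```python
# Illustrative sketch, NOT the official BotPrize scoring code: read the
# "humanness rating" as the percentage of judging-gun verdicts that
# tagged a given player as human.

def humanness_rating(judgments: list[bool]) -> float:
    """Each entry is one judging-gun verdict: True = 'human', False = 'bot'."""
    if not judgments:
        return 0.0
    return 100.0 * sum(judgments) / len(judgments)

# A bot judged human in 13 of 25 encounters scores 52%:
print(humanness_rating([True] * 13 + [False] * 12))  # 52.0
```

On this reading, the bots' 52 percent simply means that, slightly more often than not, the humans shooting the judging gun got it wrong.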

Both UT^2 and MirrorBot achieved a “humanness rating” of 52%, whereas the human players achieved an average humanness rating of only 40%. Miikkulainen says of his creation’s – and now honorary human’s – achievement, “When this ‘Turing test for game bots’ competition was started, the goal was 50 percent humanness. It took us five years to get there, but that level was finally reached last week, and it’s not a fluke.” Somewhere in cyberspace, UT^2 and MirrorBot agree, probably giving each other virtual “high fives.”

What it takes for a computer program to be mistaken for a human

It’s more than just winning. That, say the computer scientists, would be easy. All you’d have to do is create a bot that plays optimally, always seeking and finding the best solution to whatever situation it finds itself in. But humans don’t necessarily do that. Humans are, after all, human; they make mistakes, and they occasionally do irrational things.
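One simple way to fake that fallibility, at least in a toy sketch, is to let a bot occasionally pick something other than its best-scoring action. The code below is a hypothetical illustration of that general idea; the names and the 15-percent "mistake rate" are my own assumptions, not taken from the actual UT^2 or MirrorBot implementations.

```python
import random

# Hypothetical sketch: a purely optimal bot always picks the highest-scoring
# action, which can read as machine-like. Mixing in occasional random,
# suboptimal picks is one crude way to simulate human error.

def choose_action(scored_actions: dict[str, float], mistake_rate: float = 0.15) -> str:
    """With probability mistake_rate, pick a random action instead of the best one."""
    if random.random() < mistake_rate:
        return random.choice(list(scored_actions))
    return max(scored_actions, key=scored_actions.get)

actions = {"take_cover": 0.9, "charge": 0.4, "retreat": 0.2}
print(choose_action(actions))  # usually "take_cover", but not always
```

Set `mistake_rate` to zero and you get the flawless, and therefore suspiciously robotic, player the researchers say would be easy to build.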

Within the complex virtual world of “Unreal Tournament 2004,” both bots and humans had to navigate in three-dimensional space, engage in fast-paced and often chaotic combat with multiple opponents, and rethink their strategies at almost every point in the game.

One of the things the computer programmers did to simulate humanness in this competition fascinates me particularly because of another article I wrote this week, on holding grudges. Jacob Schrum, one of the co-developers of UT^2 along with Miikkulainen, says, “People tend to tenaciously pursue specific opponents without regard for optimality. When humans have a grudge, they’ll chase after an enemy even when it’s not in their interests. We can mimic that behavior.” So one of the things that made human opponents believe that UT^2 was human was that it (he?) seemed to hold grudges against them and therefore acted irrationally from time to time.

Just like HAL in “2001.” One wonders what would happen if one pitted UT^2 directly against MirrorBot in an extended game. Would we eventually hear dialogue that sounded like this?

“Open the pod bay doors, MirrorBot.”

“I’m sorry, UT^2, but I can’t do that. Gotcha, dude.”


Juliette Siegfried, MPH

Juliette Siegfried, MPH, has been involved in health communications since 1991. Shortly after obtaining her Master of Public Health degree, she began her career at the National Institutes of Health in Bethesda, Maryland. Juliette now lives in Europe, where she launched ServingMed(.)com, a small medical writing and editing business for health professionals all over the world.

