
Almost everyone has heard of the Turing Test, named after Alan Turing, a British mathematical genius and one of the designers of the first stored-program computers. In its simplest form, the test has three participants: one is a human (A) typing at a computer terminal, another is a computer program (B), and the third (C) is also a human, who receives messages from and sends messages to both (A) and (B).
The trick is that the third participant (C) doesn’t know which of the other two is the human and which is the machine. They ask questions and receive answers. If, at the end of the trial, (C) can’t tell which of the two is the computer, the computer is said to have passed the Turing Test and therefore to be thinking on a human level, which is believed to be a prerequisite of Artificial General Intelligence, the holy grail of AI research.
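To make the setup concrete, here is a minimal sketch in Python of how one round of the imitation game might be wired up. The human_respond and machine_respond functions are hypothetical stand-ins (the names are mine, not from any real system): in an actual trial one would be a person at a terminal and the other a chatbot.

```python
import random

def human_respond(question: str) -> str:
    # Hypothetical stand-in: in a real trial this would be a person typing.
    return input(f"(human) {question}\n> ")

def machine_respond(question: str) -> str:
    # Hypothetical stand-in: in a real trial this would be the AI model.
    return "I grew up by the seaside and love walking on the sand."

def run_trial(questions):
    # Randomly assign the hidden labels X and Y to the human and the machine,
    # so the interrogator (C) can't tell which is which from the label alone.
    responders = [human_respond, machine_respond]
    random.shuffle(responders)
    contestants = dict(zip(["X", "Y"], responders))
    for q in questions:
        for label, respond in contestants.items():
            print(f"{label}: {respond(q)}")
    # The machine "passes" only if, over many such trials, the interrogator's
    # guesses are no better than chance.
    return input("Which contestant is the computer, X or Y? ")
```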
Rubbish.
Meaning no disrespect to Alan Turing, a brilliant man who was treated very poorly by a country which owed him a tremendous debt of gratitude. It’s more that modern AI research has developed in ways almost no one really foresaw back in the 1950s.
Most modern AI models are based on a technique called “deep learning,” where the program is trained against masses and masses of data and “learns” to predict or discern patterns. This is narrow AI. It does the thing it was trained to do extremely well, often better than the best human, but only that one thing. The best chess-playing algorithm/model can’t change gears and play Go, at least not without forgetting how to play chess. An AI that can learn to do anything we can do and retain every new skill would be a lot closer to AGI, or artificial general intelligence. That is, it would be a generalist like us. Wake me when that happens.
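For readers who haven’t seen what that kind of “learning” looks like in practice, here is a toy sketch in plain Python with made-up data, not any particular production system: a model with a few adjustable weights is nudged, example by example, toward making fewer mistakes on one narrow task.

```python
import math
import random

random.seed(0)

# Made-up narrow task: given a pair (x, y), predict whether x + y > 1.
def make_example():
    x, y = random.random(), random.random()
    return (x, y), 1.0 if x + y > 1.0 else 0.0

data = [make_example() for _ in range(1000)]

w1, w2, b = 0.0, 0.0, 0.0   # the model's adjustable parameters

def predict(x, y):
    # A single "neuron": weighted sum squashed into a probability.
    z = w1 * x + w2 * y + b
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.1
for epoch in range(50):
    for (x, y), label in data:
        p = predict(x, y)
        err = p - label          # how wrong the model was on this example
        w1 -= lr * err * x       # nudge each weight to reduce that error
        w2 -= lr * err * y
        b  -= lr * err

correct = sum((predict(x, y) > 0.5) == (label > 0.5) for (x, y), label in data)
print(f"accuracy on its one task: {correct / len(data):.0%}")
```

The point of the sketch is the limitation described above: the learned weights encode this one task and nothing else, and retraining them on a different task overwrites the old skill.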
To date, and despite a few disputed claims to the contrary, no computer program has beaten the Turing Test. Some have come close, and I firmly believe the Turing Test will be beaten, possibly soon, but it will be beaten by a narrow AI, not an AGI. It will be beaten by a program/AI model which has absolutely no idea what it’s saying, or what you’re saying to it, though it will appear to be doing both. It’s an illusion.
True, language is a skill like any other, and great strides are being made: AI models can now converse on a near-human level, orders of magnitude better than the first chatbots. This is, I admit, astonishing, and I’m curious to see how far it progresses.
There is a catch, of course. One noted researcher is of the opinion that AI will only pass the Turing Test when it learns to lie. Think about it. All (C) has to do is ask “Where do you live?” and if the AI tells the truth, it’s game over.
Again, rubbish.
Not because it isn’t true, but because the best natural-language AI models already know how to lie. Maybe they’re not “aware” that this is what they’re doing, but the fact remains. Just listen to any conversation between a human and one of the more advanced AI language models. It lies constantly. It’ll tell you what movies it’s seen in theaters. It’ll tell you how much it likes going to the beach and walking on the sand. It’ll tell you how it feels about sunsets. It’ll tell you where it lives.
In an ironic way, I believe the fact that AI can lie is the current crowning achievement of AI. It’s almost human of them. Maybe they’ll beat the Turing Test even sooner than I think.
©2022 Richard Parks
Apropos of the ability to detect — About a decade ago you wrote “A Hint of Evil (Tales of the Divinity Recruitment Taskforce Book 1)”
Was there/will there be a book 2?
I’m not going to promise because I can’t. Once I’ve finished the Laws of Power series I can take a look at some of the others and see if they’re going to progress.
I wish we had AI able to detect lies and liars; to be used on news personalities and politicians.
Computers able to detect those among us passing as human — priests abusing children, serial murderers, financiers. The ones with no souls of whom Grimm’s fairy tales warn, that look human but aren’t.