This note includes large sections of the article “Human Beings Are Soon Going to Be Eclipsed,” written by David Brooks and published in the New York Times, July 13, 2023.
The Turing Test, proposed by Alan Turing in 1950, is described as follows:
[it] is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test (1).
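The protocol above can be sketched in code. This is a toy illustration only, under my own assumptions: the "human" and "machine" responders are hypothetical stand-ins, and the point is the blinded, text-only evaluation loop, in which an evaluator who cannot reliably pick out the machine scores near chance.

```python
import random

def human_responder(prompt: str) -> str:
    # Hypothetical stand-in for the human participant.
    return f"I find the question '{prompt}' genuinely interesting."

def machine_responder(prompt: str) -> str:
    # Hypothetical stand-in for the machine; here it imitates the human
    # responder exactly, so the transcripts are indistinguishable.
    return f"I find the question '{prompt}' genuinely interesting."

def run_trial(evaluator_guess, rng: random.Random) -> bool:
    """One blinded trial: hide the machine behind label 'A' or 'B' at
    random, collect text-only transcripts, and return True if the
    evaluator correctly identifies which label is the machine."""
    machine_label = rng.choice(["A", "B"])
    human_label = "B" if machine_label == "A" else "A"
    responders = {machine_label: machine_responder, human_label: human_responder}
    transcript = {label: fn("What is intelligence?")
                  for label, fn in responders.items()}
    return evaluator_guess(transcript) == machine_label

def identification_rate(evaluator_guess, trials: int = 1000, seed: int = 0) -> float:
    """Fraction of trials in which the evaluator spots the machine.
    A rate near 0.5 means the machine passes in Turing's sense."""
    rng = random.Random(seed)
    correct = sum(run_trial(evaluator_guess, rng) for _ in range(trials))
    return correct / trials

# An evaluator faced with identical transcripts can do no better than
# guess; a fixed guess of "A" is correct only when the random assignment
# happens to put the machine there, i.e. about half the time.
rate = identification_rate(lambda transcript: "A")
print(rate)
```

The key structural features of Turing's setup are all present in miniature: separation of participants, a text-only channel, and a pass criterion defined by the evaluator's inability to tell the two apart rather than by any direct measure of the machine's internals.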
Having used a variety of AI-based tools, I have the impression that this is, at present, a nascent technology. How far it will go is an open question.
Douglas Hofstadter, cognitive scientist and the author of the books “Gödel, Escher, Bach” and “I Am a Strange Loop,” has long argued that intelligence is the ability to look at a complex situation and find its essence. “Putting your finger on the essence of a situation means ignoring vast amounts about the situation and summarizing the essence in a terse way.” If A.I. can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness (2).
Turing originally called his test the imitation game, and maybe that more aptly describes how we should think about the technology: how well does it imitate? Machines “play on the surface with language,” but lack the experience humans accumulate in acquiring knowledge (2).
In a piece for The New Yorker, the computer scientist Jaron Lanier argued that A.I. is best thought of as “an innovative form of social collaboration.” It mashes up the linguistic expressions of human minds in ways that are structured enough to be useful, but it is not, Lanier argues, “the invention of a new mind.” (2)
Brooks goes on to say:
Maybe it’s synthesizing human thought in ways that are genuinely creative, that are genuinely producing new categories and new thoughts. Perhaps the kind of thinking done by a disembodied machine that mostly encounters the world through language is radically different from the kind of thinking done by an embodied human mind, contained in a person who moves about in the actual world, but it is an intelligence of some kind, operating in some ways vastly faster and superior to our own.
Whether it is superior or not, in my view, is open to question. However, I have been edging towards the view that it is a different type of intelligence. So, in much the same way that mechanical engines multiplied the strength of the human operator, I hope that AI will multiply the cognitive powers of the user. The problem with the Turing Test, in my view, is that it assumes a common type of intelligence, and thus that a single test can compare the two and evaluate their relative strength. However, if there are indeed different types of intelligence, then such a test amounts to a comparison between apples and oranges.
Whether one form is superior to the other is difficult to assert. By analogy, one might conclude that because the tractor is orders of magnitude stronger than its human operator, it is superior, at least along that one dimension of strength. But is that really a relevant measure? The important thing is that the capability is leveraged in co-operation to complete some task. In earlier posts I have wondered whether these tools would replace people:
I think one’s point of view on the matter depends on their perspective on humanity: do they fall into the trans-humanist camp or the post-humanist camp? The former would see AI as a tool, subservient to humanity, essentially a legitimate form of slavery without the ethical issues tied to the subjugation of other peoples. A post-humanist would see an AI-based “tool” in a more co-operative role; one that complements and extends the limits of each party (human and machine) (3).
I fall into the latter camp.