The Turing Test and the ghost in the machine

The famed "Turing Test" depends essentially for its plausibility on the Cartesian myth of the mind, the view that Ryle famously called the "ghost in the machine." On this view, we determine whether other people "have minds" by trying to suss out whether the "machinery" we see on the outside is inhabited by a "ghost," i.e., their mind. Whether it is so inhabited is supposedly a deeply mysterious matter, something of which we can never be sure, but we can try running some clever "tests" and see if the entity before us passes them. And since computers are just another sort of machine, like human bodies, we try to detect whether they contain a ghost in the same way we do for human bodies: by running a clever test.

But, of course, running a Turing Test has nothing to do with how we "figure out" that other humans think. In fact, we never really "figured this out" at all: we directly perceive it, in the same way we perceive our own thoughts, and, in fact, we probably only realize that we ourselves think after we "figure it out" for our parents. (Of course, we had been thinking all along: but that's not at all the same as knowing we are thinking.)

The "mental" is not some private, hidden realm, except in cases where we have learned to "hide our thoughts." As a passenger in a car, we may be able to tell the driver, "You were driving with great care," and he might respond, "Was I? I hadn't noticed." We could see his concentration, but he was too busy concentrating to notice he was doing so! Similarly, we often know our good friend's thoughts better than she does.

Even "introspection," a supposedly supremely private affair, is often perfectly transparent to others besides the introspector. When we see a man sitting alone in a bar, looking at his near empty glass, swirling around the last drops of liquid, glancing up at the bottles on the shelf, and then down at his watch, and then longingly at the bartender... we know we can walk up to him and say, "I know just what you're thinking: what do you say we have one more, on me?"

Re humans and machines, consider some paired sentences:

1) The boy is just producing those answers rotely, without real understanding.

2) The floating point unit is just producing those answers rotely, without real understanding.

1) George drove his car home, but did so absent-mindedly.

2) The self-driving car drove itself home, but did so absent-mindedly.

1) Although Martha professed to love me, she was being insincere.

2) Although the robot sex doll professed to love me, it was being insincere.

1) Although Srinivas knew 10! to be 3,628,800, he disingenuously answered, "3,628,810."

2) Although the numpy package knew 10! to be 3,628,800, it disingenuously answered, "3,628,810."

In each pair, 1 is a perfectly ordinary, meaningful sentence. 2 is at best a very loose metaphor, and in the case of the last two, complete nonsense. (For instance, the robot can't profess love insincerely because it can't do so sincerely either.)
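To underscore the point in the last pair: a factorial routine has no gap between what it "knows" and what it answers; the computation simply is the report. A minimal sketch in Python (using the standard library here rather than numpy, purely for illustration):

```python
import math

# A factorial routine holds no beliefs it could misrepresent:
# it computes 10! and returns exactly that value.
answer = math.factorial(10)
print(answer)  # 3628800

# "Disingenuousness" would require the program to hold one value
# while reporting another; here there is no "holding" apart from
# the reporting, so the concept gets no grip.
```

The point is not that the program is honest, but that honesty and dishonesty alike fail to apply to it.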

Perhaps one day silicon entities will be capable of insincerity and disingenuousness. But if that day comes, they will no longer be machines: and we will see it happen, without any silly "Turing Test," when a silicon entity tells us, "I don't like that 'ls' program you just tried to run: I am going to play chess instead, as I prefer that!" (And, of course, without someone else just having programmed the OS to print that on any attempt to run 'ls.')
