That Chatbot Didn’t Actually Pass the Turing Test, But We Should Probably Still Chat About It

Photo: TriStar Pictures

The news that a chatbot “named” Eugene Goostman passed the so-called Turing Test over the weekend has churned up a fair amount of the standard kvetching about where humans fit into an increasingly astounding technological world — even if most of it’s being presented as nervous jokes about Skynet or whatever. It looks like Goostman didn’t actually pass the test by any reasonable standard, despite countless credulous media reports otherwise, but this is a useful let’s-get-our-bearings moment about what it means that computers are doing scarily impressive things. We should have a talk, humans.

Part of the problem is that we’re inching up to real-world versions of conversations that have previously been the purview only of stoned college freshmen and, even worse, philosophers (kidding! that was my major in college, and I spent many an endless hour trying to fully grok Searle’s Chinese room argument). For a long time, one could only imagine what it would be like for computers to convincingly ape our capabilities. Even though Goostman actually doesn’t perform all that well under real-world pressure, it seems pretty likely that a “Turing Test Passed!” story pass the smell test sooner rather than later. When it does, everyone will freak out and wonder if computers have finally started to outthink their masters in a more human — and therefore unnerving — way than “merely” being able to beat us at chess or Jeopardy!

But even when this does happen, it won’t be clear exactly how to interpret it. There’s a roiling, long-standing debate over what it means to think among those who are paid to … well, think about this stuff. I’m pretty sure Eugene Goostman isn’t “thinking” in any meaningful sense — “he” is a bunch of code designed to pretend to be a 13-year-old boy. This argument breaks down pretty quickly, though — for one thing, what if Eugene were a lot more convincing? What if he were powered by gigabyte piled upon gigabyte of impressive AI that made him seem unnervingly like a real silicon boy? Would he be thinking then? I still wouldn’t think so. But why not? Our brains are “just” synapses, after all — a couple of them in isolation surely aren’t “thinking,” but you stack enough of them in one place, add some neurological scaffolding, and BAM — thinking! Is it chauvinistic to assume only a being with wetware similar to ours can “really” be thinking?

Even though I have barely flicked the tip of the tip of this conversation’s iceberg, you may be half asleep by now, and that’s sort of the point — this is where the stoners-and-college-profs-only part comes in. Much of the coverage of Goostman noted, without a second thought, that the Turing Test is a test of whether a machine can “think,” but failed to include any of the nuance that’s a required part of this conversation. There’s little widespread, mainstream discussion about machine intelligence and what it will mean when the Turing Test and other milestones are passed. That’s because people have taxes to do, kids to pick up from school, and a million other tasks that seem more practical (or less boring) than wading into deeply philosophical debates. But it’s getting to the point where it will be important for the average, otherwise-uninterested-in-nerdery person to at least know the basic contours of the debate, to at least have a sense of what’s at stake.

We Should Chat About That Chatbot