Assumptions matter. And in psychology, cognitive science, and their siblings, the leading assumption about the nature of the mind is that it’s “computational”: basically, the brain is hardware, the mind is software. Take futurist Ray Kurzweil, who purports to describe “the basic algorithm of the neocortex” in his book How to Create a Mind: The Secret of Human Thought Revealed. This sort of thinking — that your mind is an operating system developed by evolution — is what allows professional prognosticators to consider “mind uploading” to be A Thing That Will Happen, wherein you transfer your mind to the cloud, gaining a sort of virtual immortality. This was a plot device for at least one bad Johnny Depp movie, Transcendence, currently sitting at a wince-inducing 20 percent on Rotten Tomatoes.
In a new essay for Aeon, American Institute for Behavioral Research and Technology research psychologist Robert Epstein spots how computationalism — which he refers to as an “information-processing” view — is all over the way psychology talks about people’s interior lives. “We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device,” he writes. “We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.” This critique is important because computationalism is the prevailing model for the mind: Thinkers as eminent as star cognitive scientist Steven Pinker hold it to be true. A sort of naive computationalism is at work in a lot of popular perceptions of the mind, like the assumption that Vulcan-style, computerlike logic is the most responsible way to make a decision (even though, as University of Southern California neuroscientist Antonio Damasio has found, you’re apt to lose your decision-making ability if disease or accident impairs the emotional centers of your brain). It also animates the “time-macho culture” of business that assumes your brain, like your computer, can concentrate for ten hours straight without a break. (Your MacBook is definitely the kind of guy who always eats lunch at his desk.)
But maybe, so the critique goes, computationalism is just getting caught up in its own metaphor: Intellectual history shows that since it’s hard to understand the brain, we’re tempted to say it’s whatever the hottest new thing in technology is. As philosopher John Searle notes in his take on metaphor, people have thought the brain was a telephone switchboard (including neuroscientist Charles Sherrington, who won a Nobel Prize for his work on neurons); a hydraulic system (Sigmund Freud); a mill (Gottfried Wilhelm Leibniz); a blank slate (John Locke); or a wax block (Plato). “Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer,” Epstein says. Yet a brain, the miraculous mushy gray matter that it is, is not a switchboard or hydraulic pump or mill or slate or computer. A brain is a brain — inside of a body. And it’s within that body, and with that body, that a brain (and its human) navigates the world.
One of the canonical arguments critiquing computationalism is called the Outfielder Problem, first proposed in 1995 by psychologist Michael McBeath, now at Arizona State University. Basically, if the mind were a computer, then when bat met ball, the brain of the outfielder — let’s say, Mets left-fielder Yoenis Céspedes — would calculate where the ball is going with an “internal simulation of the physics,” taking in the initial angle of trajectory, the force of impact, and all that. Then, with that internal model, the outfielder would run to the ball’s destination in a straight line, since that’s the shortest distance between two points.
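To see what the computational story demands, here’s a minimal sketch of that “internal simulation of the physics” — drag-free textbook projectile formulas with made-up numbers for the hit and the fielder’s starting spot, not anything from the essay:

```python
import math

def landing_point(v0, angle_deg, g=9.8):
    """Range of a drag-free projectile launched from ground level (meters)."""
    a = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * a) / g

def flight_time(v0, angle_deg, g=9.8):
    """Time until the ball comes back down to launch height (seconds)."""
    a = math.radians(angle_deg)
    return 2 * v0 * math.sin(a) / g

# A fly ball off the bat: 30 m/s at 45 degrees (illustrative numbers).
x_land = landing_point(30, 45)   # roughly 92 m from home plate
t_land = flight_time(30, 45)     # roughly 4.3 s in the air

# The "computational" outfielder, standing at 70 m, would then jog
# straight to x_land at this average speed:
fielder_start = 70.0
speed_needed = abs(x_land - fielder_start) / t_land
```

Even this toy version quietly assumes the brain has instant, accurate access to launch speed and angle — which is exactly the part the embodied critique doubts.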
But, in reality, outfielders have bodies, and they use those bodies to figure out where the ball is going. Andrew Wilson, a psychologist who studies radical embodied cognition, gives an account:
If the outfielder simply stands and watches the ball, they’ll see it fly up in the air along a curved path, slowing down due to gravity, reaching a peak height and then speeding up again as it falls to earth. If, however, they start to move, then what they’ll see is a mix of their motion and the ball’s motion. If, for example, the outfielder was to run in a curved path that mirrors the curved path of the ball, the motions would cancel out and the ball would look as if it were tracing out a straight line. The same is true with the speed—if the outfielder were to speed up and then slow down at just the right times, the ball would look as if it was moving at a constant velocity.
And if the outfielder moves in such a way that the ball looks like it’s moving at a constant velocity, Céspedes, in this case, will end up in the right place, at just the right time, to make the catch (unless he runs into a wall). That’s a side effect, Wilson says, of the ball’s trajectory being a parabola. And the outfielder runs not in a straight line, but along a curved path. Fascinatingly, dogs use the same strategy when they’re scampering after a long-distance Frisbee toss. Best of all, Epstein observes in the essay at Aeon, this model is “completely free of computations, representations, and algorithms.” It’s just a (very perceptive) brain in a (very fit) body in a (very high-pressure) environment.
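The geometry behind the strategy can be checked directly. For a drag-free parabola, the tangent of the ball’s elevation angle rises at a perfectly constant rate when viewed from the landing spot, and at a varying rate from anywhere else — so “keep the ball’s apparent motion constant” is a signal that steers you to the catch. A small simulation (same illustrative 30 m/s, 45-degree hit as above; the 15-meters-too-deep vantage point is arbitrary):

```python
import math

G = 9.8

def ball(t, v0=30.0, angle_deg=45.0):
    """Position (x, y) of a drag-free fly ball at time t."""
    a = math.radians(angle_deg)
    return v0 * math.cos(a) * t, v0 * math.sin(a) * t - 0.5 * G * t ** 2

def tan_elevation(t, fielder_x):
    """Tangent of the angle at which a fielder at fielder_x sees the ball."""
    x, y = ball(t)
    return y / (fielder_x - x)

landing_x = 30.0 ** 2 * math.sin(math.radians(90)) / G  # ~91.8 m

# Sample the apparent elevation through the flight from two vantage points.
times = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
from_catch_spot = [tan_elevation(t, landing_x) for t in times]
from_too_deep = [tan_elevation(t, landing_x + 15) for t in times]

def second_differences(vals):
    """Discrete curvature: zero means the sequence grows at a constant rate."""
    return [vals[i + 1] - 2 * vals[i] + vals[i - 1]
            for i in range(1, len(vals) - 1)]
```

From the catch spot, the second differences come out (numerically) zero; from 15 meters too deep, they go clearly negative — the ball visibly “decelerates,” which is the cue that says run in. No trajectory is ever computed, only the apparent motion adjusted.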
The Outfielder Problem is not yet “solved”: Walk into your nearest philosophy or psychology department, and there’s a good chance that bringing it up will cause a ruckus. And while it’s still mainly an argument, the question of how baseball players actually intercept fly balls is beginning to be tested empirically, as in a 2009 paper that used virtual reality to pit the computational, trajectory-predicting model against a movement-based, embodied model. The result: Rather than calculating a ball’s trajectory, it’s better to run after it — and let your perception of its motion along the way tell you where it will land.