People can be spectacularly bad at working together. Consider the traffic jam: Vehicles on a highway flow best when drivers allow plenty of space between cars, but drivers rarely do this. They follow too closely, trying to take every advantage they can, so that one driver tapping his brakes causes an immediate traffic snarl.
In the not-too-distant future, collective-action problems like this might be prevented when all cars are self-driving and networked together, coordinated by supersmart artificial intelligence that can optimize the whole system at once. But in the even-nearer future, mixing robots in with a group of humans — having a combination of autonomous and people-driven cars on a highway, for example — may help people work together to solve problems more efficiently.
That’s the conclusion of new research by Yale sociologist Nicholas Christakis. In an article recently published in the journal Nature, Christakis and Hirokazu Shirado, a Ph.D. candidate in sociology at Yale, found that even dumb robots can help humans help themselves, and for a surprising reason: They can prompt humans to be less rigid and more experimental.
Christakis and Shirado designed an experiment in which groups of human participants worked together on an online color-coordination game. Each person controlled the color of a single node in a network and could switch it among three colors. Players could see the colors of the nodes they were connected to, but they couldn’t see the whole network, and they couldn’t communicate with one another.
The groups were given a goal of collectively making a system in which no node was the same color as its neighbors. If the participants could do that within a time limit of a few minutes, everyone was paid a few dollars.
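The setup amounts to a distributed graph-coloring game. A minimal sketch in Python, using a hypothetical 6-node network as a stand-in for the experiment’s actual layouts, shows what each player can see and what counts as a group win:

```python
# Toy model of the coordination game. The 6-node graph below is a
# hypothetical example, not the networks used in the actual study.
NEIGHBORS = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 5], 4: [2, 5], 5: [3, 4],
}

def local_conflicts(colors, v):
    """All player v can see: how many of v's neighbors share v's color."""
    return sum(colors[u] == colors[v] for u in NEIGHBORS[v])

def solved(colors):
    """The group's goal: no node matches any of its neighbors' colors."""
    return all(local_conflicts(colors, v) == 0 for v in NEIGHBORS)
```

For this toy graph, the assignment `{0: 0, 1: 1, 2: 2, 3: 0, 4: 0, 5: 1}` satisfies `solved`, even though no single player could verify that from their local view alone.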
What often happened, Christakis said, was that each participant chose a color different from those of their immediate neighbors and then grew frustrated that the group’s problem remained unsolved. “Every human would pick a color that would minimize conflict compared to its neighbor, and everyone was very smug and would say, ‘I’ve done my job,’” he said. But just as in a traffic jam, people didn’t realize that they were the problem: Even though they were making the right choices for themselves, those choices were preventing the network from reaching a collective solution.
Then the researchers began adding some “dumb AI” bots to the game — not brilliant problem-solving supercomputers, but bots programmed simply to behave randomly. What resulted was somewhat unexpected: a kind of role reversal between humans and computers.
By introducing a little “noise” into the system, the bots prompted humans to make color choices that seemed wrong but actually helped the group reach a solution faster. “The humans became noisier as well,” Christakis said. When players saw that their bot neighbors picked colors that temporarily increased conflict, yet that solutions followed, they became more experimental in their own color choices, too, which led to quicker solutions.
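The mechanism can be illustrated with a small simulation. The sketch below is an assumption-laden stand-in for the real experiment: the graph generator, the myopic “smug” strategy, and the noise parameter are all simplifications for illustration, not the study’s actual protocol (which used human players with bots placed at particular network positions).

```python
import random

def planted_graph(n=12, p=0.5, seed=1):
    """A random graph that is 3-colorable by construction (a hypothetical
    stand-in for the study's networks): edges run only between three
    hidden groups, so coloring each group differently solves the game."""
    rng = random.Random(seed)
    nbrs = {v: set() for v in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            if a % 3 != b % 3 and rng.random() < p:
                nbrs[a].add(b)
                nbrs[b].add(a)
    return nbrs

def play(nbrs, noise=0.0, steps=20000, seed=0):
    """Asynchronous play: one node acts per step. With probability
    `noise` the node acts like a dumb bot and picks a random color;
    otherwise it plays the 'smug' myopic strategy, switching only to a
    color that strictly reduces its own visible conflicts."""
    rng = random.Random(seed)
    colors = {v: rng.randrange(3) for v in nbrs}
    for t in range(steps):
        v = rng.choice(sorted(nbrs))
        if rng.random() < noise:
            colors[v] = rng.randrange(3)  # random "bot" move
        else:
            counts = [sum(colors[u] == c for u in nbrs[v]) for c in range(3)]
            if counts[colors[v]] > min(counts):
                colors[v] = counts.index(min(counts))
        if all(colors[a] != colors[b] for a in nbrs for b in nbrs[a]):
            return t  # solved at step t
    return None  # timed out without a global solution
```

Comparing `play(g, noise=0.0)` with `play(g, noise=0.1)` across many random seeds shows the pattern in miniature: purely myopic play can freeze in a locally “smug” but globally conflicted state, while a little randomness shakes the group loose.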
Christakis said this may have implications for all kinds of problems that require groups of people to work together. For example, a big car company might have different departments — engineering, legal, marketing, management — that are all doing good work individually, yet the company still fails because they aren’t cooperating systemwide. Introducing a little artificial “noise” into this kind of system could provoke, say, the engineering department to be more experimental, to try something that doesn’t necessarily make the cars run better but makes them easier for marketing to sell. More cupholders, for example.
“I don’t want to say that people should become erratic all of a sudden,” Christakis cautioned. But dumb AI can help people avoid “groupthink.” If highways are soon occupied by a mix of self-driving and human-driven cars, he said, people may want to program the autonomous cars to behave slightly erratically, maybe by occasionally varying their speed. This would make the humans better drivers: more alert and responsive.
Christakis said he’s also heartened that his experiment highlights the value of low-intelligence robots, perhaps helping to forestall a future in which computers become self-aware and rise up as our robot overlords, à la the Terminator movies. There’s a case to be made, he noted, for bumbling-but-helpful droids like C-3PO over omniscient-but-malevolent AI systems like Skynet.
“I’d rather live in a world with dumb AI that’s helping humans than smart AI that’s replacing humans,” he said.