The fall of Secret, the anonymous social-media app, was a swift one. Ten months ago, it had received $25 million in funding (the company raised $35 million overall) at a valuation of $100 million. But two weeks ago, as Forbes reported, it announced it was shutting down — a surprising outcome for an app with some lofty-sounding goals. “Our vision is to create a world flowing with authenticity,” Secret co-founder David Byttow wrote in a March 2014 Medium blog post. “Being more open with each other brings us closer together, builds understanding and ultimately makes the world a better place.”
There’s nothing unusual about a company failing, of course. But in light of the failure of Secret and other big-name start-ups, as well as the insane sums of money being thrown around in the tech world more generally, it’s worth asking what effect high-tech concepts have on the human capacity to make decisions, to coolly evaluate a series of options. What is it about claims couched in the language of technology that convinces consumers, long before a product is even available, that it will change their lives, and that leads to billion-dollar valuations for companies without clear paths to sustainable profitability? Stated more simply: Is there something about high-tech promises that turns people into rubes?
A study recently published in the Journal of Business and Psychology suggests there is. The research, led by Brent B. Clark of the University of South Dakota, consisted of a series of experiments conducted on a group of college students — two of them involving investment decisions with hypothetical sums of money: In one, the undergrads decided how much to invest in a certificate of deposit versus an industry stock; in another, they played the role of a mining-company executive deciding how much of the rare-earth mineral thorium to harvest given uncertainty about its future availability.
Throughout the experiments, Clark and his colleagues carefully controlled the information provided to the participants — the past performance of a stock and the percentage likelihood that more thorium would be available in the future, for example — in an attempt to isolate the variable they were actually interested in: whether the mere use of high-tech language would lead the undergrads to view certain prospects more optimistically and to invest in them more aggressively, especially when the technology in question was unfamiliar.
So, for example, some would-be mine executives were presented with the possibility that thorium yields might increase because of “new solar technology” (familiar tech), others because of “swarm robotics technology” (less familiar tech), and yet others because of “random fluctuations in the mining process” (non-tech).
Overall, in the cases where the participants were told that a given decision had had good results in the past, they were likely to make that decision — but only if it was associated with some high-tech verbiage (there was no statistically significant difference between the high- and low-tech solutions when they weren’t reported to have had success in the past). The researchers called this bias “the technology effect” — “Signals of high performance trigger the effect,” they write, and it “is more likely when the technology invoked is unfamiliar.” The equation was simple: “Evidence” of past performance plus a high-tech premise equaled investor exuberance.
It’s important to remember that this exuberance was irrational: The experiments were built so that there were no substantive reasons students should have preferred high-tech solutions to non-high-tech ones, given the information available. So while a student sample is a limited sample — more soon on why that’s a particularly pressing concern in this case — these results suggest that the concept of technology activates something in our brains that does, in fact, suffuse us with optimism, affecting our decision-making.
The tech effect “has a lot to do with the notion we’ve constructed in our minds” of technology, Clark told Science of Us. He thinks that in the modern world, a strong psychological association between new technology and success has taken hold. As he and his colleagues explain in the paper, if this theory is true, there’s a good reason for it: The world is replete with high-profile examples of runaway tech successes. Statistically, Google and Facebook and Apple are outliers — the vast majority of new tech companies fail, and fail fairly quickly — but one simply doesn’t hear about all the busts nearly as frequently as one hears about young newly minted start-up billionaires. Researchers call this the availability bias: When an example of something (high-tech success stories) comes easily to mind, it skews our estimate of the likelihood of that sort of event.
“Hype can be fun, but we have numbers, we have data, and we can make decisions based on facts,” said Clark. “The people in our studies clearly did not do that, they consistently did not do that. We told them over and over throughout the studies we ran, we told them this in several different kinds of ways — that there was no difference between Option A and Option B. People just liked the way we labeled one of the options better.”
If the tech effect exists outside of the lab, it could have ramifications all over the place. It could help explain not just why Company A gets more funding than Company B, but why consumers fall for shoddy products or scams, why governments contract services out to unproven companies that end up bungling the job. No one’s suggesting the tech effect can fully explain this behavior, since so many other factors go into these decisions — consumers buy products because their friends like them; angel investors put money down when they know and like a start-up’s founder — but even if the tech effect moves the needle just a little, the society-wide impact of this bias could be huge.
That’s why Clark is eager to better understand how the tech effect works at the institutional level. “Corporations and politicians and regimes and governments, they’re different [from individuals] — they can do their homework,” he said. “They can avoid these kinds of decision-making flaws. But will they actually do it? Maybe, maybe not.”
To find out, Clark and his team — or other groups — will have to conduct more studies. They note in their paper, for example, that research involving “entrepreneurs, medical professionals, patients, lawmakers, and scientists … might help provide even greater definition of the scope and boundaries of decision making biases arising from the technology effect.” Undergrads can’t answer the question of whether and to what extent major decision-makers are swayed by the tech effect.
Despite the many negative ramifications of this bias, Clark emphasized that he doesn’t view the tech effect as entirely a bad thing. “It’s really easy to focus on the snake-oil salesman version of the phenomenon,” he said. “But we also perceive that there’s some serious upside to this. Because you end up having situations like Thomas Edison putting so much effort into creating the lightbulb, failing dozens and dozens of times before it succeeds — and that’s one of the standard examples of persistence in the face of adversity.” So the upside to all the potential for pseudoscience is faith in the belief that technology, in certain situations, can actually solve fundamental human problems — faith that sometimes actually pays off.
“We have this mind-set that technology’s going to work, and whether it’s true or not, that mind-set can have some really amazingly positive outcomes for society, even if the average inventor or the average investor or the average technology consumer … are worse off because of the tech effect,” Clark said. And on “the macro level, I think we benefit from this.”