There’s an unfortunate divide when it comes to the diversity trainings many companies, schools, and other institutions adopt to try to make themselves more welcoming to underrepresented and marginalized groups: while these trainings may be well-intentioned, there’s precious little evidence any of them work. Over and over, researchers have noted that few diversity trainings are backed by any empirical evidence whatsoever that they meaningfully improve outcomes such as hiring ratios or office climate as reported by employees.
But some researchers are trying to do better by building more carefully designed approaches that incorporate accepted theories about how prejudice and bias operate. One promising example is called the “prejudice habit-breaking intervention,” and according to a new paper, it may have had an exciting impact on the way some traditionally male-dominated academic departments in the University of Wisconsin system hired faculty.
The paper, lead-authored by Patricia G. Devine, a psychologist at UW-Madison, and set to appear in the Journal of Experimental Social Psychology, focused on a group of science, technology, engineering, math, and medical (STEMM) departments at UW-Madison. (Devine conceived the study jointly with Patrick Forscher, a University of Arkansas psychologist who has contributed to some important work highlighting problems with the implicit association test.) These areas of academia are fairly infamous for their gender-equity problems, and in some STEMM departments, sexual harassment and gender-based exclusion are rampant.
To oversimplify the study’s setup just a little, the members of half the departments were given a 2.5-hour prejudice habit-breaking workshop, and the members of the other half of the departments, the control group, were not.
What does a prejudice habit-breaking workshop look like? Devine and her co-authors explain that the underlying theory is that acting in an unintentionally biased manner is a habit like any other, and therefore “breaking [that] habit involves (1) becoming aware of when one is vulnerable to unintentional bias, (2) understanding the consequences of unintentional bias, and (3) learning and practicing effective strategies to reduce the impact of unintentional bias.” So the workshop offered participants a mix of education about how this sort of habit operates and tools for how to fight it. For example, the participants “learned how unintentional bias functions like habits, leading people to often respond in ways that contradict egalitarian values.” They also learned about some of the specific ways gender bias manifests itself in STEMM fields, and read some case studies. Perhaps most importantly, attendees learned five specific techniques that can, the thinking goes, help people break the bias habit — stereotype replacement, counter-stereotypic imaging, individuation, perspective taking, and increasing opportunities for intergroup contact (there are some brief summaries of these and other interventions here, as well as citations to further research for folks who want to learn more).
To measure the workshop’s efficacy, Devine and her colleagues examined the gender ratio in faculty hiring during the two years before and after the workshop was given. If the intervention really helped break the bias habit, then that might show up in the form of some departments hiring more women. And that’s what appears to have happened: During the two years prior to the workshop, the control and workshop departments hired women at about the same rate — 33 percent and 32 percent, respectively. In the two years following the intervention, though, a gap opened up: In departments that took part in the intervention, new faculty hires were 47 percent women, while the control group remained stuck at about a third. (Past research had already shown the workshop was effective at increasing awareness of gender-equity issues and making women faculty feel more welcome in their departments.)
For somewhat complex reasons, this result didn’t quite meet the traditional threshold for what researchers consider statistically significant: the researchers could only evaluate a relatively small number of departments (by statistical standards), which put a “hard upper limit,” as they put it, on how statistically significant their results could be. In the full context of the study, though, that is still an impressive jump, albeit one that will need to be replicated in future research.
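The paper’s own analysis uses statistics suited to its design, but the “hard upper limit” idea can be illustrated with a back-of-the-envelope sketch: when the unit of analysis is the department and there are only a handful of departments per arm, even the most lopsided possible outcome cannot yield a small p-value. The sketch below is purely illustrative — the group sizes are hypothetical, and Fisher’s exact test is chosen here only for simplicity, not because it is what the authors ran.

```python
from math import comb

def fisher_one_sided_min_p(n_per_group: int) -> float:
    """Smallest p-value a one-sided Fisher's exact test can produce
    when each arm contains n_per_group units and the observed split is
    as extreme as possible (every "success" lands in one arm).

    Under the hypergeometric null, the probability of that single most
    extreme 2x2 table is C(n,n) * C(n,0) / C(2n, n) = 1 / C(2n, n).
    """
    return 1 / comb(2 * n_per_group, n_per_group)

# With 2 units per arm, the best achievable p is 1/6 (about 0.17),
# so p < .05 is literally unreachable no matter what the data show.
for n in (2, 3, 5, 10):
    print(f"{n} units per arm: smallest possible p = "
          f"{fisher_one_sided_min_p(n):.4f}")
```

The floor drops quickly as the number of units grows — which is the flip side of the study’s constraint: with few departments, a real effect can produce a large observed gap while still falling short of conventional significance.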
Whether or not these sorts of workshops prove successful in the long run, this study offers a good road map for crafting more effective diversity programs. These issues are too important for employers and college administrators to be left relying on overhyped guesswork.