Few can speak more authoritatively to the subject of racial bias than Stanford psychologist Jennifer Eberhardt. In her 2019 book Biased, the MacArthur genius unpacked decades of research, some performed by herself and her colleagues, that helps explain how bias operates powerfully, but sometimes subconsciously, in the brain.
GEN caught up with Eberhardt to talk about how the subject of her book is playing out in the summer of George Floyd’s killing, how her work with police departments has helped decrease bias in arbitrary stops, and how we should talk about race with children.
GEN: Your book Biased explores the science of how bias works, often on an unconscious level. To what extent can a conscious exploration of our bias change our unconscious bias?
Reflection is a powerful tool. If we don’t believe that bias is something to be reckoned with, in ourselves or in our world, we can let things go unchecked.
I often tell a story about my son at five years old on an airplane, worried that the only Black man on the plane was there to rob it. My response as a mother was to ask him why he would think that, to push him to interrogate his own mind. His response to me was, “I don’t know why I said that. I don’t know why I was thinking that.”
We need to practice reflecting on our own beliefs and our own thoughts. I think through that reflection we can have some power over them. In that case with my son, I was pushing him to do that in my presence. But the hope is that he would interrogate his own mind when I’m not there, too, that as he’s going through his day, as these thoughts enter his mind, he can actually grapple with them, raise them to consciousness so he can address them.
How important is childhood in forming or challenging bias?
I think a lot of parents feel like the way to raise a child and protect that child from bias is to have this colorblind approach. They kind of push them to not see color because if you don’t see color you can’t be biased. But the research shows that that’s not the best approach.
They’ve done studies with fourth and fifth graders where they talk to them about either being colorblind or being color-conscious, where you’re valuing diversity and so forth. They found that the vast majority of those who were taught to value diversity, when given an instance of discrimination, were able to identify it and point it out. But of the children who were told that the way to be a good person is to be colorblind, only half could actually identify that same instance of blatant discrimination. And when the colorblind kids described the incident to their teachers, they did so in a way that led the teachers to conclude the incident wasn’t a big deal; the teachers didn’t identify it as blatant discrimination either.
So when we try to take the colorblind approach, we find that children are not only attempting not to see color; they also fail to see the discrimination directed at people because of their color. There’s a way in which targets of bias are left more vulnerable in that situation. We want to shield our kids — our intuition is to protect them by not saying too much. But I feel like we need to give them more credit.
“We are all vulnerable to bias, but we’re not acting on bias all the time.”
Books that address racism have surged back onto bestseller lists this summer. In some cases, there’s been a nuanced backlash. White Fragility had this huge surge and then people started saying that its emphasis on the absolute inescapability of white supremacy can almost make it seem as though Black people have no power at all. How do you reconcile the inescapability of bias with Black power, Black agency?
I don’t think it’s inescapable actually. We are all vulnerable to bias, but we’re not acting on bias all the time. As researchers, we know that there are certain situations that will kind of bring bias online, where it’s affecting our decision-making and our actions. And there are other situations that can mitigate it. So I think the more we know about that, the more power we have over it.
Even in policing — my colleagues and I have worked with members of the police department here in Oakland to reduce the number of stops they made of people who were not committing any serious crime. We did this by having officers ask themselves one additional question before each and every stop they made: “Is this stop intelligence-led? Yes or no.” In other words, “Did I have prior information that tied this specific person to a particular crime?” And we found that just having officers stop and ask themselves that question led to a massive drop in stops of African Americans in particular — those stops fell by over 43%.
There wasn’t the surge in crime that many in the department would have predicted. Actually, the crime rate continued to fall, even though officers were stopping fewer African Americans. So stopping fewer African Americans didn’t make the city more dangerous. It made it safer for everyone.
So for me, that’s an example of the power of mindset and the power that we have to actually arrest bias and to sideline it. We can ask ourselves these questions too, right? You don’t have to be police officers to do that. We can slow down, we can replace our intuition with intelligence, we can hold ourselves accountable.
That example you just gave would seem to counter one of the arguments we’ve heard this summer in favor of defunding police, that anti-bias training doesn’t work. What has been your response to that conversation as someone who has done anti-bias trainings with police forces?
Yeah, these trainings have been delivered in police departments across the country, but we don’t know much about their effectiveness. There are lots of different types of trainings — it’s a big business now, with lots of consultants across the country. But the problem is that the trainings aren’t typically evaluated, so we know little about what works. And oftentimes, to the extent that the trainings are evaluated at all, it’s [just asking the officers], “Did you like the training?” That’s not the metric that really matters here.
You also work with Nextdoor, the neighborhood-based social media app, to help curtail racial profiling. How does that work?
The co-founder of Nextdoor at the time, Sarah Leary, reached out to me and other researchers to try to figure out how to curb racial profiling on their platform. In a typical case, what they were finding was somebody would look out their window, see a Black man in an otherwise white neighborhood, get suspicious, and kind of shout out to all the neighbors that there’s this person here. Sometimes they would even call the police, even when there was no evidence of wrongdoing at all. And so they were trying to figure out: how do we curb this?
They came to the conclusion that they were going to have to add friction, to slow people down. They did that by adding questions that people had to answer before they could post about suspicious activity on the platform: “What is this person doing that’s making him look suspicious?” The suspicion couldn’t be based simply on the fact that he was a Black man. They had a number of questions like this, and it produced this pause, this rethinking. Using that technique, they reduced racial profiling by 75%.
So that’s an example of how even outside of the world of policing, our institutions can get a handle on this. And that’s what you want, right? We want our institutions to take responsibility for the products that they make.
What other roles can technology play to combat bias?
There’s a lot of talk in policing about whether body-worn cameras actually work. This issue came up around the killing of George Floyd because there were cameras present and that didn’t seem to stop what the officers did there. But if we look across police departments in general, we find that when those cameras are introduced, it’s associated with a drop in the use of force, and it’s also associated with a drop in citizen complaints.
Even though police departments are adopting the cameras, they tend not to look at the footage. They might look at footage from a case where there was a use of force or a complaint. But the vast majority of the footage goes unexamined. Some of that is because police departments tend to think of the footage as evidence for isolated cases. And some of it is simply not having the capacity to look, because there’s so much footage.
I’ve been working with an interdisciplinary team of researchers here at Stanford to use machine-learning techniques to begin to pore over this footage. What we’re doing is looking at the language used during routine stops, traffic stops. We’ve done studies where we’ve looked at, say, a thousand different traffic stops, and found that officers, even when they’re being professional, speak in ways that are less respectful to Black drivers than to white drivers. Not just in the words they use, but in their tone of voice and so forth. In fact, we can predict, based on the officer’s use of language alone, whether they’re speaking to a Black driver or to a white driver.
We’re looking at this use of technology as a way to learn more about what’s happening during police-community interactions, to try to understand more about what’s causing those interactions to escalate in some cases, to try to get a gauge on how healthy or unhealthy the interactions are. We feel like the more we know about what’s happening on the ground, the better we are at implementing policies and tactics that are going to improve those relations between police and community rather than undermining them. Officers can’t be held accountable, really, if they don’t believe that the footage is going to be examined.
This interview has been edited and condensed for clarity.