'There's a chilling effect': Google's firing of leading AI ethicist spurs industry outrage - Protocol



'There's a chilling effect': Google's firing of leading AI ethicist spurs industry outrage

Timnit Gebru's firing could damage Google's reputation and ethical AI research within tech companies, industry leaders told Protocol.


Timnit Gebru said that she was forced out of Google because of an email she sent to members of the Google Brain Women and Allies listserv.

Photo: Kimberly White/Getty Images

After Google fired one of the industry's most respected and well-loved AI ethics researchers on Wednesday, Google employees and tech industry leaders alike voiced their fear that her firing will have a "chilling effect" on ethics research within tech companies and at Google specifically.

Timnit Gebru, the now-former technical co-lead of Google's AI ethics team, said she was forced out of Google because of an email she sent to members of the Google Brain Women and Allies listserv that detailed her frustration with the company's diversity pledges and the exhausting experience of being a Black woman at Google, as well as a conflict over an ethics research paper that Google wanted retracted. Over the last week, Gebru had been fighting to have the research paper — which discusses the ethical issues raised by large language models — published with her and other Google employees' names attached.


After Gebru said she planned to resign if Google didn't commit to further discussion of the company's demands over the research paper, Google immediately rejected her conditions and terminated her employment without discussion, according to Gebru's statement. In the email explaining her termination, which Gebru shared, Google Research Vice President Megan Kacholia wrote that Gebru's email to the listserv was "inconsistent with the expectations of a Google manager." Google declined to comment.

Gebru is best known for her research on discrimination within facial recognition models, including a groundbreaking study that illustrated gender and skin-type bias in the best commercial AI facial recognition systems at the time. "She's literally the best of the best. She's the best that we've got. Not only does Timnit encapsulate our hopes and dreams, and is the embodiment of the best of us, but she is strongly supported," said Mutale Nkonde, the CEO of AI for the People and a fellow at Harvard's Berkman Klein Center.

Gebru is also well-liked for supporting activism within Google and defending employees who've lost their jobs because of their protests. Shortly before she tweeted that she had been fired, the National Labor Relations Board filed a complaint that said Google had violated labor laws by spying on and then firing workers who were organizing employee protests. "If we have heroes in the AI ethics community, she's one of those heroes," said Susan Etlinger, an AI expert at the Altimeter Group. "She's someone who has, at great cost to herself, persisted in identifying, publicizing and trying to remediate a lot of the issues that arise with the use of intelligent technologies."

A number of people on her own team and others within Google tweeted their support for Gebru and anger with their employer. That includes Alex Hanna, a senior research scientist on the ethical AI team, who said that "to call her unbecoming of a manager is the height of disrespect." Dylan Baker, another Ethical AI team member, called her "the best manager I've had."

Gebru's departure could be damaging for Google's reputation in the ethical AI community and among tech workers broadly. The support for Gebru in the industry is nearly unanimous, and every leader who spoke to Protocol for this story echoed the same two sentiments: She is among the best at her technical work and Google's decision to fire her shocks and angers them. "The idea that this is going to be able to happen, and it's going to go away and it's not going to have an impact on tech … Google really needs to really look at itself in a mirror," Nkonde said.

All of the industry leaders who spoke with Protocol voiced their fear that her firing would have a chilling effect on other ethical researchers in the industry and at Google specifically. Academics and activists have long expressed skepticism about the integrity of ethical AI research at places like Google, but Gebru's reputation and leadership role lent credibility to Google's research and helped quell the critics. Earlier this year, Google even announced plans to launch an ethical AI consultancy that would provide tips for difficult problems learned from Google's own research and experience.

In firing her, Google not only gave up the voice that earned the ethical AI team respect in the first place, but also made it clear that there were consequences for speaking up, said Ansgar Koene, the global AI ethics and regulatory leader at EY and senior research fellow at the University of Nottingham. "Their division does great work, except a lot of the times they have their hands tied behind their backs because of such repressive policies," said Abhishek Gupta, a machine-learning engineer at Microsoft and founder of the Montreal AI Ethics Institute.

Gebru's firing was not entirely unexpected for people who knew her, including Gupta. Just the day before, while Gebru battled to get approval for the ethics research paper, Gupta and Gebru discussed how to create a legal system of protection for ethics whistleblowers inside tech companies. A few days before that, Gebru tweeted publicly that she wished there were a system of whistleblower protections.

"In a sense, this has been a long time in the making. This has, in bits and pieces, happened in the past, where she's tried to bring up relevant issues, and Google has sort of tried to suppress what she's saying," Gupta said, adding: "It's an unfortunate combination of what has been going on for months, I think."

Moving forward, people in her position need significant legal support to be able to express their concerns without fear of losing their jobs, said David Ryan Polgar, the founder and executive director of All Tech is Human. "There's a chilling effect for the people who don't have any type of national stature … You should have the ability to be a roadblock to what you would deem inappropriate activity."

And beyond the research work itself, firing Gebru makes Black women like her less likely to pursue the same career path, AI for the People's Nkonde said. "As Black women in tech, we all face similar issues, and not everybody is going to take the stand to stay within [the] industry," she said. For research scientists currently in school, choosing to work in the industry is far more intimidating after watching Gebru's experience play out, a feeling expressed by a number of those students on Twitter today.

If Gebru had decided to leave Google and announced that she would be going elsewhere, the reaction would have been celebratory, Nkonde explained. Instead, Google's decision to not only fire her but directly email the team she had managed about her departure creates a sense of fear and anger, showing that the tech sector, and Google specifically, "can be a hostile place for Black women," Nkonde said.

Ellen Pao, co-founder and CEO of Project Include and former CEO at Reddit, said that by firing Gebru, Google created an unfixable PR problem that illustrates a more systemic discrimination problem. "When I see Google in the context of its past, it has a terrible record of dealing with bias and discrimination, and it has a record of not hiring people from marginalized, underrepresented groups, not promoting them," she told Protocol.

"I think what it says is actually more important than what it says about Google. What this says is that the work of trying to remediate bias and create fairer technical systems is incredibly hard, and it's not just hard from a computational perspective. It's not just hard from a technical perspective. It's hard because it requires diversity of perspective, it requires diversity across many axes," Altimeter Group's Etlinger said.

Issie Lapowsky contributed additional reporting.

Anna Kramer

Anna Kramer is a reporter at Protocol (@anna_c_kramer), where she helps write and produce Source Code, Protocol's daily newsletter. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied international relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.
