26 September 2023

Ethical dilemma: Who should take charge of the battle for ethical AI?


Elizabeth Gibney* says bias and the prospect of social harm plague AI research, but it’s not clear who should be on the lookout for these problems.


Diversity and inclusion took centre stage at one of the world’s major artificial-intelligence (AI) conferences in 2018.

But at the December 2019 Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, a meeting that once had a controversial reputation, attention shifted to another big issue in the field: ethics.

The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies — such as in predictive policing or facial recognition.

Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to populations that are already vulnerable.

“There is no such thing as a neutral tech platform,” warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.

At the meeting, which hosted a record 13,000 attendees, researchers grappled with how to meaningfully address the ethical and social implications of their work.

Ethics gap

Ethicists have long debated the impacts of AI and sought ways to use the technology for good, such as in health care.

But researchers are now realising that they need to embed ethics into the formulation of their research and understand the potential harms of algorithmic injustice, says Meredith Whittaker, an AI researcher at New York University and co-founder of the AI Now Institute, which seeks to understand the social implications of the technology.

At the latest NeurIPS, researchers couldn’t “write, talk or think” about these systems without considering possible social harms, Whittaker says.

“The question is, will the change in the conversation result in the structural change we need to actually ensure these systems don’t cause harm?”

Conferences such as NeurIPS, which, together with two other annual meetings, publishes the majority of papers in AI, bear some responsibility, she says.

“The field has blown up so much there aren’t enough conferences or reviewers,” says Whittaker.

“But everybody wants their paper in.”

“So, there is huge leverage there.”

But research presented at NeurIPS doesn’t face a specific ethics check as part of the review process.

The pitfalls of this were encapsulated by the reaction to one paper presented at the conference.

The study claimed to be able to generate faces — including aspects of a person’s age, gender and ethnicity — on the basis of voices.

Machine-learning scientists criticised it on Twitter as being transphobic and pseudoscientific.

Potential solutions

One solution could be to introduce ethical review at conferences.

NeurIPS 2019 included for the first time a reproducibility checklist for submitted papers.

In the future, once accepted, papers could also be checked for responsibility, says Joelle Pineau, a machine-learning scientist at McGill University in Montreal, Canada, and at Facebook, who is on the NeurIPS organising committee and developed the checklist.

NeurIPS says that an ethics committee is on hand to deal with concerns during the existing review process, but it is considering ways to make its work on ethical and societal impacts more robust.

Proposals include asking authors to make a statement about the ethics of their work and training reviewers to spot ethics violations.

The organisers of the annual International Conference on Learning Representations — another of the major AI meetings — said they were also discussing the idea of reviewing papers with ethics in mind, prompted by conversations in the community.

AI Now goes a step further: in a report published in December, it called for all machine-learning research papers to include a section on societal harms, as well as the provenance of their datasets.

Such considerations should centre on the perspectives of vulnerable groups, which AI tends to affect disproportionately, Abeba Birhane, a cognitive scientist at University College Dublin, told NeurIPS’s Black in AI workshop, where her study on “relational ethics” won the best-paper award.

“Algorithms exclude older workers, trans people, immigrants, children,” said Birhane, citing uses of AI in hiring and surveillance.

Developers should ask not only how their algorithm might be used, but whether it is necessary in the first place, she said.

Business influences

Tech companies, which are responsible for vast amounts of AI research (Google alone was responsible for 12 per cent of papers at NeurIPS, according to one estimate), are also addressing the ethics of their work.

But activists say that they must not be allowed to get away with “ethics-washing”.

Tech companies suffer from a lack of diversity, and although some firms have staff and entire boards dedicated to ethics, campaigners warn that these often have too little power.

Their technical solutions — which include efforts to “debias algorithms” — are also often misguided, says Birhane.

The approach wrongly suggests that bias-free datasets exist, and fixing algorithms doesn’t solve the root problems in underlying data, she says.
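To see the kind of fix she is criticising, consider a minimal, hypothetical sketch of one common “debiasing” technique: reweighting training examples so that each demographic group contributes equally to a model’s training. The data, group names and weighting scheme below are invented for illustration and are not drawn from any of the work discussed here.

from collections import Counter

# Toy training records of (group, label). In a real dataset, "group" might be
# a protected attribute and "label" the outcome a model is trained to predict.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 0), ("group_a", 1), ("group_b", 0), ("group_b", 0),
]

group_counts = Counter(group for group, _ in records)
n_groups = len(group_counts)
n_total = len(records)

# Weight each example inversely to its group's frequency, so that every group
# carries the same total weight during training.
weights = [n_total / (n_groups * group_counts[group]) for group, _ in records]

for (group, label), weight in zip(records, weights):
    print(f"{group}  label={label}  weight={weight:.2f}")

# The limitation Birhane points to: reweighting changes how much each example
# counts, but if the labels themselves encode discriminatory decisions (for
# example, biased past hiring outcomes), a model can still learn and repeat them.

As the closing comment notes, reweighting only rebalances how much each example counts; it does nothing to repair data whose labels already record discriminatory decisions.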

Forcing tech companies to include people from affected groups on ethics boards would help, said Fabian Rogers, a community organiser from New York City.

Rogers represents the Atlantic Plaza Towers Tenants Association, which fought to stop its landlord from installing facial-recognition technology without residents’ consent.

“Context is everything, and we need to keep that in mind when we’re talking about technology,” he said.

“It’s hard to do that when we don’t have the necessary people to offer that perspective.”

Researchers and tech workers in privileged positions can choose where they work and should vote with their feet, says Whittaker.

She worked at Google until 2019, and in 2018 organised a walkout of Google staff over the firm’s handling of sexual-harassment claims.

Researchers should demand to know the ultimate use of what they are working on, she says.

Another approach would be to change the questions that researchers try to answer, said Ria Kalluri, a machine-learning scientist at Stanford University in California.

Researchers could shift power towards the people affected by models and on whose data they are built, she said, by tackling scientific questions that make algorithms more transparent and that create ways for non-experts to challenge a model’s inner workings.

* Elizabeth Gibney is a senior physics reporter at Nature. She tweets at @LizzieGibney.

This article first appeared at www.nature.com.
