How to prevent AI from undermining diversity and inclusion in the journalism industry

By Nicholas Fearn 

Many news publishers are adopting artificial intelligence systems and tools to streamline content production and reduce operational expenses as they battle declining print and advertising revenues.  

A notable example is the Associated Press news agency, which uses AI tools to create article summaries, transcribe videos, generate headlines, write short reports and translate stories. Similarly, UK-based regional news publisher Newsquest now employs 36 AI-assisted reporters, who use an AI-powered content management system to generate news reports from press releases automatically.

But while AI can help increase productivity across newsrooms, experts warn these systems could exacerbate inequalities that have long plagued the journalism industry. When improperly built and managed, they can leave marginalised communities voiceless and generate racist, sexist and otherwise discriminatory outputs. So, how can newsrooms and journalists use AI responsibly?

How AI can make diversity in journalism worse

One of the biggest problems with AI-powered news tools is that their underlying datasets are often biased. This largely happens because the humans who compile the data are influenced by personal and societal unconscious prejudices, or lack diverse backgrounds. For example, because the technology industry is overwhelmingly male-dominated, AI chatbots may produce sexist outputs.

Kirsty Olczak, journalist and editor of health technology news outlet Health Journal, explains that AI systems learn and disseminate these biases. She says this results in marginalised voices being “overlooked” in journalistic content. 

And as automated news systems aren’t human, she says they lack the “lived experiences and perspectives” required for creating diverse journalistic content. Olczak continues: “If AI is used without careful oversight, it could reinforce existing inequalities rather than help to address them.” 

Citing similar concerns, Admire Mare – head of communication and media studies at the University of Johannesburg, South Africa – describes AI as a “product of unequal society”, reflecting a lack of diversity and inclusion across human values, languages and viewpoints.

Using AI tools that perpetuate these inequalities means the media industry is setting itself up to fail, argues Mare. He adds: “It means the framing and manufacturing of news is likely to further accentuate this problem.” 

Another prevalent issue with AI technologies is that they exhibit systemic weaknesses in generating factual output in the languages of countries with limited press freedom and high levels of state propaganda, according to McKenzie Sadeghi, an AI researcher at the online news reliability rating service NewsGuard.

Because most publicly available information in the languages of these countries is unreliable, chatbots are more inclined to provide it as an answer to users. Sadeghi explains that such biases mean AI applications “become conduits for government propaganda”. 

Best practices for using AI in the newsroom 

For newsrooms developing, implementing and using AI systems, taking steps to mitigate risks around diversity and inclusion is clearly paramount. Many of these issues stem from a lack of diversity in the training data that powers AI algorithms and systems, so data auditing is an essential first step.

Dr. Serena Huang, author of the new book “The Inclusion Equation: Leveraging Data & AI For Organisational Diversity and Well-being”, says these audits should consider the demographics, perspectives and language used in AI training data. 
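As a rough illustration of what such an audit could look like in practice, the sketch below tallies how a corpus of training articles is distributed across a few attributes and flags under-represented categories. The file name, field names and 5% threshold are illustrative assumptions, not a description of any particular newsroom’s pipeline.

```python
# Minimal sketch of a training-data audit. The file name, the field names
# ("language", "source_region", "topic") and the 5% threshold are
# illustrative assumptions only.
import json
from collections import Counter

THRESHOLD = 0.05  # flag any category making up less than 5% of the data


def audit_field(records, field):
    """Count how often each value of `field` appears and flag rare ones."""
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    print(f"\nDistribution of '{field}':")
    for value, count in counts.most_common():
        share = count / total
        flag = "  <-- under-represented" if share < THRESHOLD else ""
        print(f"  {value}: {count} ({share:.1%}){flag}")


if __name__ == "__main__":
    # Each line of the file is assumed to be a JSON object describing one
    # training article, e.g. {"language": "en", "source_region": "UK", ...}
    with open("training_articles.jsonl", encoding="utf-8") as f:
        records = [json.loads(line) for line in f]

    for field in ("language", "source_region", "topic"):
        audit_field(records, field)
```

A report like this does not fix a skewed dataset on its own, but it gives editors and AI teams a concrete starting point for deciding which perspectives and languages need to be better represented.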

Secondly, Huang urges news organisations to ensure their AI development teams include experts from a diverse range of backgrounds and experiences. She adds that these experts are “more likely to identify potential biases” in AI data and algorithms.  

Thirdly, she advises organisations using AI to create content to be transparent about this with readers so that they can “critically evaluate the information”. She says: “I am a strong proponent of labelling AI-generated and AI-assisted content for readers.”

For Olczak, human oversight is crucial in ensuring biased AI-generated content doesn’t slip through the cracks and erode journalistic integrity. This sentiment is echoed by Laura Davison, general secretary of the National Union of Journalists, who says that without proper oversight of AI content, people could lose trust and faith in journalism.  

She adds: “AI-generated journalism cannot replace the knowledge, creativity, or skill of journalists. Our members should have full control through agreements over their work, how it is used, and what they are paid.” 

Meanwhile, Mare urges publishers to avoid tokenism when it comes to improving the diversity of their AI and tech teams. Instead, he says diverse team members should be given “meaningful participation aimed at engendering pluriversal knowledges, values and perspectives in the AI ecosystem”. 

Paul Doyle, director of video strategy and delivery at British magazine publisher Immediate, says newsrooms can improve the diversity of their AI tools by gathering feedback from all departments on “how and where” they should be used. Doing so will “ensure use cases are valid”, he says.

What journalists can do 

With more and more journalists adopting AI tools, they have a personal responsibility to ensure their usage is ethical and doesn’t harm diversity and inclusion. Most importantly, Olczak says journalists need to view AI as a “tool” rather than “a replacement” for reporting.

If journalists are using AI tools to generate content, Olczak says they must fact-check the information provided, as it could be incorrect or misleading. And journalists must keep using a diverse range of sources for their stories, she adds.

Echoing similar views, Sadeghi says reporters should refrain from using AI tools to “replace steps in the reporting process”. Instead, they should use them to simplify time-consuming aspects of their roles, such as analysing public data or identifying trends to use in their stories, freeing up more time for reporting.

Furthermore, she advises reporters not to use AI for covering communities with which they’re unfamiliar, as this will help them avoid misunderstandings or misrepresentations.

To prevent AI models from “distorting content” through “scraping and repackaging”, Doyle emphasises the importance of ensuring AI-generated content meets the highest editorial standards. This involves performing regular AI bias audits to ensure algorithms can effectively “capture nuance, authenticity and inclusion”.
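One simple check such an audit might include is whether AI-generated summaries retain the voices quoted in the original reporting. The sketch below is a minimal, assumption-laden illustration of that idea: the name-matching heuristic and the sample texts are invented for demonstration, not a production-grade method.

```python
# Minimal sketch of one check an editorial AI audit might run: does an
# AI-generated summary retain the named sources quoted in the original
# article? The heuristic and sample texts below are illustrative only.
import re


def quoted_sources(text):
    """Rough heuristic: capitalised full names appearing before 'said'/'says'."""
    pattern = r"([A-Z][a-z]+(?: [A-Z][a-z]+)+)\s+(?:said|says)"
    return set(re.findall(pattern, text))


def audit_summary(original, summary):
    """Report any quoted source that the AI-generated summary has dropped."""
    dropped = quoted_sources(original) - quoted_sources(summary)
    if dropped:
        print("Sources missing from the AI summary:", ", ".join(sorted(dropped)))
    else:
        print("All quoted sources are represented in the summary.")


if __name__ == "__main__":
    original = (
        "Budget cuts will hit rural clinics hardest, Amina Hassan said. "
        "The council disputes that figure, John Smith says."
    )
    summary = "The council disputes figures on clinic funding, John Smith says."
    audit_summary(original, summary)
```

Run regularly across a sample of AI-assisted output, even crude checks like this can surface patterns – for example, whether certain kinds of sources are consistently dropped – that a human editor can then investigate.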

AI can be a force for good 

With the right checks and balances in place, AI could improve diversity and inclusion across newsrooms. For example, Doyle says journalists can use these tools for spotting and fixing biases in their work. He adds: “In essence, AI holds the promise of a more accessible, personalised, and equitable news landscape, where diverse voices are not only heard, but truly understood.” 

Olczak is also optimistic about AI’s potential to be a force for good in journalism, arguing that it can help journalists find underreported stories to cover, translate their reports into different languages and analyse whether their past reporting is diverse enough. She adds: “I’d even say that if it’s used correctly, it could amplify diverse voices rather than drown them out.”

For his part, Mare believes that when AI companies, newsrooms and journalists come together to tackle these issues, there’s the potential for diversity, equity and inclusion to become “mainstream”. At a time when DEI efforts are being undermined in many parts of the world, this seems more important than ever.