Is AI in journalism a good or bad thing? It’s not that black and white 

By Alan Martin 

Ever since OpenAI released ChatGPT, then powered by GPT-3.5, in late 2022, we’ve been repeatedly told — both by reliable experts and less reliable marketers with products to shift — that generative AI is going to change everything. With ChatGPT now claiming over 400 million weekly users, it’s certainly on its way to doing that for many people.

But despite the hype, artificial intelligence has a real stigma to overcome in professional use. Just look at the stir created earlier this month when a Freedom of Information request revealed that Britain’s technology secretary, Peter Kyle, had put artificial intelligence to relatively benign use.

There seems to be a feeling that artificial intelligence is a cheap and lazy shortcut: an alternative to doing things properly. Certainly, some high-profile mistakes haven’t helped the cause, like the lawyers who allegedly used ChatGPT and ended up citing fake cases as legal precedents.

But even as someone who can count on one hand the number of times he’s used generative AI, I can safely say this is an equally lazy mischaracterisation. There are ways that AI can be a real benefit to journalism — it’s just a case of leaning on it in the areas where it can make things better, rather than in a race to the bottom to replace human labour.

AI the wrong way 

I can completely understand why people feel squeamish about the idea of artificial intelligence in journalism, especially as the way it’s being forced on people is via chatbots, which are prone to embarrassing hallucinations. Google’s AI search summaries famously suggested putting glue on pizza and eating rocks, after all.  

Add in the known race and gender biases of artificial intelligence — a consequence of the imperfect historical human data it’s trained on — and the idea of AI coming anywhere near fact-checking feels absurdly dangerous. 

Even if these problems were fixed overnight, some uses would still feel egregious. Cash-strapped publishers might, for example, see AI as a way to save on journalists’ salaries and have generative AI write (or, more likely, paraphrase) news stories outright.

One grim glimpse of this dystopian future of journalism came in 2023, when the GAMURS group — fresh off the back of making at least 30 redundancies across its portfolio of news websites — sought to hire an AI Editor responsible for editing an ambitious “200 to 250 articles per week”. Other than fixing AI whoopsies, the human role would be reduced to donkey work: “write headlines, add links, add images, and other WordPress production processes”. The backlash was, understandably, enormous, and the listing was thankfully taken down.

Even what looks, on the surface, like an intelligent use of AI can badly backfire. The LA Times’ bias meter was designed to examine opinion pieces, label their political point of view, and then provide “different views” as a kind of automated counterpoint. But it took this even-handed approach to an extreme on day one, defending the Ku Klux Klan as a “product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement”. The offensive note was eventually taken down by human staff, but it’s a worrying window into a world without journalistic oversight.

Picking the right battles 

The good news is that there are clear, positive ways in which artificial intelligence can help journalists, supplementing their skills rather than replacing them.

The most obvious gain is efficiency. Imagine a Freedom of Information request that has elicited a huge spreadsheet of several thousand pages, released to stall rather than to inform. With the right prompts, artificial intelligence could point curious reporters to interesting details that might otherwise take weeks of examination to uncover.
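For the technically curious, here’s a rough sketch of what that might look like, assuming Python with the pandas and openai packages; the model name and the prompt are my own illustrative guesses rather than a recipe:

```python
# Sketch only: skim a huge FOI spreadsheet chunk by chunk with an LLM,
# collecting anything it flags as potentially newsworthy.
# Assumes an OPENAI_API_KEY in the environment; the model name and the
# prompt are illustrative assumptions, not recommendations.
import pandas as pd
from openai import OpenAI

client = OpenAI()

def flag_leads(csv_path: str, rows_per_chunk: int = 200) -> list[str]:
    leads = []
    for chunk in pd.read_csv(csv_path, chunksize=rows_per_chunk):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any capable model would do
            messages=[
                {"role": "system",
                 "content": "You assist a reporter. Flag anomalies, outliers "
                            "or anything newsworthy in this data. "
                            "Reply NONE if nothing stands out."},
                {"role": "user", "content": chunk.to_csv(index=False)},
            ],
        )
        note = response.choices[0].message.content.strip()
        if note.upper() != "NONE":
            leads.append(note)
    return leads
```

Nothing this flags is a finding in itself; it’s a pointer to where a human should look next.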

There are more mundane, but equally helpful, uses, too. Coming up with inspiration on topics to discuss with an interview subject, for example, or even subediting a writer’s work. For me, automatic transcription of interviews via Otter.ai has been a game changer, saving hours of tedious work with my foot pedal. 

My journalistic Achilles’ heel has always been headline writing. I hate it, and I always struggle to find a snappy hook that’s compelling enough to click on. Though I haven’t tried it myself, many swear by AI’s ability to generate dozens of headlines in the blink of an eye, letting the human writer pick out one that suits the story, or merely use the list as a springboard for their own twists.
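In code terms, this is about as simple as these things get. A minimal sketch, again assuming the openai package, with the model name and prompt as placeholders:

```python
# Sketch only: ask a model for a batch of candidate headlines, for a
# human to accept, rework or reject. Model and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

def headline_ideas(summary: str, n: int = 12) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{
            "role": "user",
            "content": f"Suggest {n} snappy, accurate headlines for this "
                       f"story, one per line:\n\n{summary}",
        }],
    )
    return response.choices[0].message.content

print(headline_ideas("A council spent £2m on a bridge that was never built."))
```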

These are all time-savers that, done right, can enhance rather than detract from a journalist’s writing. What they all have in common is the need for human oversight: AI edits, suggestions and research cannot be taken at face value, and any attempt to use AI unquestioningly deserves to fail and be publicly called out.

The good news is that this plays to a journalist’s instincts: to be sceptical, to research and to ask questions. When used responsibly, there’s no reason that AI can’t be another useful tool in a journalist’s arsenal: one that can take their output to the next level.