Should journalists use AI for writing? It depends, but why does no one talk about the audience?
Another week, another heated online debate about how journalists are using – and should (not) be using – AI. So, some musings from my end below.
In case you have missed the discourse (fair enough, I say), the proximate cause here is the recent WIRED piece in which six journalists, including Kevin Roose and Taylor Lorenz, discuss their AI use, as well as yesterday’s WSJ piece by Isabella Simonetti profiling a Fortune journalist who has gone “all in” on AI, cranking out hundreds of stories over the past six months.
The responses have been, shall we say…polarised. Some see a glimpse of a beautiful, shiny future with journalists freed from drudge work to focus on the thinking and reporting. Others see the end of craft, the rise of AI slop, journalism fully succumbing to tech, and so forth. Both camps make reasonable points but are also (in my view) missing the more interesting question. (Spoiler: audience reactions and acceptance, and the news’ ability to provide something that is meaningful and valuable – worth paying for, or paying attention to.)
Briefly, why are some people so upset? My guess is that this rubs up against deep-seated views within the profession about what journalism should be. Journalism has long had an almost artisanal self-image, reinforced by the industry itself but also deified by broader entertainment narratives (e.g. “All the President’s Men”, “Spotlight”, but also the recent (great) documentary about the New Yorker): the dogged reporter, the carefully turned phrase, the hard-won source, the deep thinking, holding power to account, etc., etc. – you know the drill. AI-assisted production of any kind, but especially AI-assisted writing, sits uneasily with that for some. But a static notion of professional identity isn’t a good argument (in my view) against the use of AI, and “it feels wrong” isn’t a useful position either. Plenty of things that felt wrong turned out to be fine (and vice versa). Funnily enough, the idea that journalism is mainly about writing is not only pretty narrow – it overlooks the work of, e.g., video reporters or photojournalists – it’s also a bit out of sync with the history of news. The idea that journalists would report and write their own stories is…actually fairly modern (see e.g. here a journal article by Will Mari on how the telephone changed journalism and the story of so-called rewrite desks).
The more important question that a lot of the “I like this/I hate this” posts are missing is what audiences will make of all this. And here the picture is more sobering than the enthusiasts might like. In research with colleagues at the Reuters Institute, we’ve documented what we call the “comfort gap” between AI- and human-driven news production. Most people, across countries, for now say they prefer journalism made by humans. In the US, 43% are comfortable with news produced “mostly by a human journalist with some help from AI” — which is roughly what the WSJ piece describes. But only 25% are comfortable with news made “mostly by AI with some human oversight.” Certainly not catastrophic numbers, but they’re a long way from a ringing endorsement. And they suggest a ceiling on how far audiences are willing to follow, at least for now. In practice, people might not always notice if a journalist uses AI; there is a lot of grey area the question doesn’t capture (i.e. what do people actually have in mind when we ask them about news made “with AI”?); and these attitudes are stated, so things might look very different in observed behaviour. But I think it’s worth bearing this in mind, even though I personally think people will adapt over time and become more accepting of AI use in journalistic work.


Yet the audience question isn’t just about comfort but also about value, and this is where I think the main tension lies.
The bulk of news, as most of the public encounters it, is already largely commoditised, as Rasmus Kleis Nielsen reminds us here: generic, substitutable, and of little perceived value in terms of willingness to either pay attention or pay money. That’s not a new problem; it predates the arrival of AI by decades. But AI threatens to make it worse, because the most obvious use case — the one the WSJ piece describes — is producing more of this commodity content, more cheaply and faster.
To anyone who has not been living under a rock this phenomenon will sound familiar: enter Jevons paradox, once again. (For those unaware, the TLDR version of Jevons paradox is essentially: improvements in the efficiency of coal use didn’t reduce coal consumption but increased it, because greater efficiency made coal-powered activity cheaper and therefore more attractive.) If AI makes it cheaper to produce a news story, the temptation for outlets (and individuals) isn’t to produce the same number of stories with fewer resources and invest the savings in investigative reporting or the like (or anything that might make for more individual, standalone stories that people will value). It’s to produce more stories. More, more, more. The efficiency gain gets eaten by volume, not reinvested in quality. I made this argument in a paper back in 2024, and it has held up remarkably well.
Now, it doesn’t have to go this way. If the surplus time is used to do more actual reporting (think: more sources, more verification, more original work), then AI as a journalistic tool need not be a bad thing at all. But the incentive structures don’t seem to point in that direction in many cases. Incentives are gonna incentivise, and they currently point towards a competitive logic of more content, faster, across more platforms. AI is, in this sense, a technology of rationalisation (again, argument made here in academic form), giving the news new means of achieving existing ends. But if the ends are problematic, AI use may well end up making things worse from the point of view of those news organisations should care about most: their users/customers/citizens/audience (pick a term you like).
There’s a distributional point here too, which is linked to the argument about rationalisation: AI doesn’t just help change what gets produced; it also rearranges who produces it. Some journalists (those who adopt early, who learn to use AI as leverage) will likely gain an advantage, or already do. Others will find their positions eroded. That’s not unique to the news, of course. To quote Andreas Jungherr: “AI alters social outcomes primarily by enabling some actors to pursue their goals more effectively than others, thereby generating localized power shifts.” (Preprint here)
None of this means journalists shouldn’t experiment or work with AI tools and systems, in my view. They clearly should, and many of the use cases being discussed — transcription, generating pushback, identifying gaps in an argument, helping with reporting — are genuinely useful, despite the pitfalls involved, the care needed, and the many arguments about dependency, cognitive decline and so forth (on which there is more to say in the future and elsewhere). But I think this entire debate would be better served if, amidst all the passionate/embittered arguments from journalists themselves, people grappled more honestly with the people the news (or any form of content) is ultimately meant to serve – and with some of the broader political economy/social change arguments that in my view make this topic so interesting but also so fraught. ENDS.
(And yes, I did use a bit of Claude to polish this – you can’t expect a non-native speaker to write without tying themselves in knots on a Friday evening, 20 minutes before dinner time…).
Addendum – 30th March 2026
Over on LinkedIn, Spanish data journalist Kiko Llaneras (El País) made this interesting observation which I think is worth sharing: “I really think the potential of AI is incredible for a journalist trying his best: you can use these tools to make your work better —deeper, more accurate, personalized or interactive, data-rich, clearer. I cannot see how to defend the opposite beyond egos. Of course, AI can also be used simply to economize. Not new: bad journalism has always been cheaper.
Felix’s question is the right one: what do readers want from us? We often ignore them. An example: Readers want (most of the time and from most of us) clear writing and practical information, but many journalists still try to force artisanal, convoluted prose on them. In my article I write about the difference between the *tasks* of a job and the *purpose* of a job. AI will surely change the tasks of journalism, and that matters for us journalists. But readers? They are interested in our purpose.”



Exactly. It’s not about us – or it shouldn’t be. It should be about the people we say we serve – communities and audiences – and what they need.