Using AI for content: Why editorial judgment matters

Peter Stuart · 6 min read

Since ChatGPT burst onto the scene in November 2022, questions about how AI-written content would change the internet have abounded.

From the outset, site owners have been tempted to use AI writing tools to mass-produce content, whether to generate more traffic or to game Google’s algorithm outright. Google was fairly quick to introduce tighter spam-fighting tools, but in March 2024 it went one step further with its Scaled Content Abuse update.

While Scaled Content Abuse has often been associated with AI writing, it actually targets any low-value content made at scale.

“If someone fires up 100 humans to write content just to rank, or fires up a spinner, or a AI, same issue,” Google’s Public Liaison for Search Danny Sullivan explained on X. He went a step further to explain that AI content is not the enemy. “We haven’t said AI content is bad. We’ve said, pretty clearly, content written primarily for search engines rather than humans is the issue.”

Pull back and you reach the wider question about using AI to augment content production: are you using it purely to generate more traffic, or to produce better content more quickly?

Google, and indeed any platform, aims to reward trusted and valuable content above all else. That’s why AI needs to be used in a way that feeds a wider content strategy responsibly.

In December 2025, Google tightened its scrutiny of AI content even further.

Google’s latest update

Between 11 and 29 December 2025, Google rolled out a major core update. Within days, publishers running AI-heavy content hubs were reporting visibility drops of 30 to 50 per cent on programmatic pages. One interesting case study is software marketing platform G2, which was hit conspicuously by the algorithm change.

You probably don’t run a site at G2’s scale. But the ingredients that sank G2’s search visibility – programmatic templates, thin oversight, content that exists because it was cheap to produce – are mirrored in many small-scale experiments with written AI content too.

The December update, and wider trends in search, raise a question for any publishers using AI tools for writing: if you automate the writing, how are you adding value?

What actually changed in late 2025

The December 2025 core update expanded Google’s emphasis on “first-hand experience” well beyond its traditional focus on health and finance topics. Google’s guidance increasingly emphasises original information, real expertise, and genuine value – attributes its systems appear to reward because they align with people-first search outcomes. Author bios alone don’t cut it anymore. The systems want specificity: dates, original photos, named experiments, sourced data.

At the same time, AI Overviews kept eating clicks. An Ahrefs study using December 2025 data found that when an AI Overview appears, the top organic result sees a 58 per cent lower average CTR. Across all searches, 60 per cent now end without a click to any website. On mobile, 77 per cent.

The two shifts compound each other. When a page summarises the same data that Google’s own model can synthesise, users just read the AI Overview and move on.

The casualties

Posts on LinkedIn suggest G2 has lost an estimated 80 per cent of its organic search activity since 2023, a figure supported by Semrush visibility data. Its database-driven review pages – once dominant – are being bypassed by AI Overviews that synthesise the same user reviews directly on the SERP. Users never need to click through.

And it’s not just Google. Mediavine terminated over 500 sites from its Journey ad platform in 2025 for overuse of unedited AI content. Raptive (formerly AdThrive) rejected 13 per cent of all 2025 applications specifically for AI-generated material. Banned domains are now being reported to classification agencies like DeepSee, which creates a permanent flag that follows a site across ad networks. Get banned once and the stain travels.

The spectrum

Google does not operate a binary AI detector, and it doesn’t judge content on a simple human-versus-AI axis. In terms of outcomes, though, the observed pattern does seem to fit a spectrum.

At one end: fully automated sites, no human review, published at scale. De-indexing and ad network bans can land at the same time, which strips both visibility and revenue in a single week.

Next: AI-generated content with very light edits, published broadly and thin. One example – not specifically focussed on AI – was shared by DMARGE’s Luc Wiesman on Medium, highlighting his brand’s drop in visibility on account of what was perceived as thin content. This type of content is technically correct but compressible – and Google’s own model can compress it.

Then: AI-assisted content with real editorial judgment, increasingly described as a ‘hybrid’ content strategy. AI drafts the structure or pulls initial research; a human rewrites around their own conclusions, adds original data, takes a position. Several analyses showed that sites operating this way generally held their visibility through the update.

And at the other end: fully manual, expert-led content. It carries the clearest expertise signals, but it is constrained by headcount at a time when publisher income is under increasing pressure. Hard to sustain if you’re publishing daily with two people.

Most small publishers are trying to land in that third category. The question is what “real editorial judgment” actually means day to day.

What holds up

Four patterns generally emerge for sites and journalists using AI to augment their output effectively.

Original work over summary. Pages that hold top rankings often include at least one element AI Overviews can’t currently reproduce. Proprietary data. Original test results. Dated experiments with screenshots.

Sources the model can’t see. AI is trained on past data. It doesn’t know what an industry contact said last Tuesday. Even a two-sentence quote from a real person – attributed, specific – signals to both readers and Google that a human originated this content. Publishers who conduct original interviews, even brief ones over email, tend to see those pages outperform their templated equivalents.

A voice that takes positions. Sites with strong entity-level signals – a recognisable author, opinions that recur across the site – are increasingly prospering. “The ultimate guide to X” is replaceable. “Why X is overrated: what our testing actually showed” is not. That shift requires human judgment, and that judgment appears to be one of the signals Google now weights.

Draft-first, human-final. Let AI produce the first pass, but make sure a human applies the final editorial judgment to every page before it publishes.

None of this is free. Adding original research and interviews is slower than pasting AI output with a light edit. But the alternative – watching traffic erode with every core update while your ad network reviews your account – is worse.

Where this leaves small publishers

The zero-click reality means even well-optimised pages attract fewer visits than they did two years ago. When an AI Overview appears, many users get what they need without clicking through.

But this doesn’t mean publishing more content is futile. It means publishing replaceable content is.

Pages that closely mirror what AI Overviews already summarise struggle because they add no new signal. By contrast, content that introduces original testing, first-hand reporting, or a clear point of view within a defined niche continues to earn visibility, and, increasingly, citations.

For small teams, the opportunity isn’t to publish less. It’s to use AI to remove the mechanical work: drafting structure, gathering background, handling repetition. That means human time can be spent where it still compounds: testing something, talking to someone, forming an opinion and defending it.

The future isn’t artisanal publishing versus scale. It’s expertise operating at scale, and small teams have never been better equipped to do that.

Written by

Peter Stuart

Co-founder of Velora. Former Cyclingnews editor and digital editor of Rouleur, with nearly 20 years leading editorial strategy and content operations for publishers.

Velora helps publishers draft faster. We monitor your sources, research each story, and deliver structured drafts to your CMS — so editors can focus on what matters.