When you begin to recognise the way an AI tool writes, you will see its fingerprints everywhere: LinkedIn posts that claim “This isn’t progress, it’s structural evolution”, abundant em-dashes, or a strange fixation with stark contrasts and paradigms. The writing signatures of LLMs jump out in the most surprising places once you start to spot them.
Understandably, readers and consumers more broadly want to know whether the words they choose to read and trust have been written, or even read, by the human being claiming credit. For journalists that’s particularly pertinent.
Some editors dictate that any use of AI tools constitutes plagiarism. That may win admiration from those who value the authenticity of the written word, but it also presents contradictions. While a title may have a hardline policy, its external contributors may not – we’ll cover some conspicuous examples of that below. Equally, LLM tools now abound across every layer of the writing workflow, from transcription tools built into hardware to spelling and grammar tools in almost any software. Deciding where to draw the line is as much a question of ethics as of practical enforceability.
The complexity of the landscape means that a solid and reasonable AI policy is essential. The question for journalists is whether that policy, and its disclosure, should be site-wide or declared on each article where AI has been used. The answer is evolving, and the EU is about to force the hand of editors and publishers.
But that doesn’t mean journalistic content produced with AI will necessarily need to declare it.
EU laws
In reality, the landscape is complicated: almost all digital written copy is likely to be touched by some form of AI.
“Is your newsroom prepared for this?” That was the question Mia Sahl, a Google News Initiative trainer for Northern Europe, posed on LinkedIn in February 2026, alongside a summary of the EU’s incoming transparency rules for AI-generated content. The post, which included an AI-generated image watermarked with Google DeepMind’s SynthID tool, summarised the imminent legal changes, but also the nuance of disclosure for journalistic purposes – which is set to be very different to the rules for image and video content.
The BBC previously banned generative AI from factual research and news story generation outright. The Guardian has loosened its AI policy in recent weeks but still maintains a sharp focus on writing that reflects human lived experience. The Associated Press has spent years working with local newsrooms on human-in-the-loop systems for automating routine coverage. Various large and small publishers, meanwhile, have been quietly using AI to draft explainers, rewrite press releases and wire copy, and update evergreen pages since 2023.
All of them face the same deadline. On 2 August 2026, the EU AI Act’s transparency provisions become enforceable, and the question of whether journalists should disclose AI use shifts from editorial debate to legal obligation.
The short answer, as Sahl outlined: for text, disclosure depends on whether a human editor can demonstrate meaningful oversight. For synthetic images, audio, and video, labelling is mandatory and must be visible immediately.
What the EU AI Act actually requires
Regulation (EU) 2024/1689 establishes the first horizontal legal framework for AI transparency in Europe. Its general position is that AI-generated text of public interest must be labelled. But it carves out a specific pathway for journalism: if a human editor takes full responsibility for a piece, no outward AI label is required.
This is the editorial exception, and it reflects an older principle. Journalism has always relied on editorial ownership as the unit of credibility. A bylined article carries the weight of whoever signed off on it, regardless of how the reporting was assembled. This has long been the established norm for wire or agency copy, which has historically been bylined to an in-house journalist even when a substantial amount of the work was lifted wholesale from licensed copy. The AI Act formalises that logic, but also requires a little more due diligence on the part of the editorial team.
To claim the editorial exception, a newsroom must be able to prove “meaningful human oversight.” That means recording which specific editor took responsibility, how the AI-assisted content was reviewed and verified, and when the final version was approved. These are the records an organisation would need to produce if challenged.
It’s a departure from the historic way of working for much of the publishing world, although in some titles production editors have long tracked the editing and sub-editing process for each article. For smaller titles it may require either a more meticulous production audit process or a technical solution, such as the auditing records built into Velora’s app.
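What might such a record look like in practice? Below is a minimal sketch in Python, assuming a simple append-only log. The field names (article_id, responsible_editor, review_notes and so on) are illustrative assumptions rather than any prescribed schema, and not a description of Velora’s actual implementation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative sketch of an oversight record a newsroom might keep to
# evidence "meaningful human oversight". Field names are assumptions,
# not a schema prescribed by the AI Act.

@dataclass
class OversightRecord:
    article_id: str
    responsible_editor: str   # the named editor taking responsibility
    ai_tools_used: list       # e.g. ["transcription", "draft-assist"]
    review_notes: str         # how the content was checked and verified
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: append one record per published article to a durable log.
record = OversightRecord(
    article_id="2026-02-14-ai-act-explainer",
    responsible_editor="J. Smith",
    ai_tools_used=["draft-assist"],
    review_notes="Facts checked against primary sources; quotes verified.",
)
with open("oversight_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

Whatever the format, the essential properties are the ones the Act implies: a named person, a record of how the review was conducted, and a timestamp that would survive later scrutiny.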
The final Code of Practice is expected in June 2026, two months before enforcement begins. Violations of the transparency obligations carry administrative fines of up to €15 million or 3 per cent of worldwide annual turnover, whichever is higher.
Synthetic media faces a harder line
The editorial exception applies to text. It does not extend to AI-generated or significantly manipulated images, audio, or video that realistically depict people or events.
For synthetic media, the requirements are much blunter: content must be labelled “at first sight” with permanent visual or audible disclaimers. There is no editorial carve-out. A newsroom that publishes an AI-generated illustration alongside a feature story cannot rely on editorial oversight to avoid labelling the image. The image needs its own disclosure, visible before a reader scrolls past it.
The deepfake risk for visual and audio content is qualitatively different from the risk posed by AI-assisted text. Regulators have drawn the line accordingly. In practice, it means publishers will need parallel workflows: one for text, where oversight is documented internally, and another for media assets, where disclosure is public-facing and immediate.
There will perhaps be a grey area between images edited using AI tools, a practice that has been common across photography for nearly a decade, and images whose core representation has clearly been generated by an AI tool.
The provenance layer underneath
Liability under the AI Act is shared. Publishers handle visible disclosure. Technology providers must ensure their systems produce machine-readable metadata that tracks a piece of content through its lifecycle. As Sahl put it: tech providers handle the machine-readable layer, while publishers are responsible for the visible disclosure to the audience.
Google DeepMind’s SynthID is one implementation. It embeds an imperceptible watermark into the pixels of images or the token distribution of text, designed to survive modifications like cropping, filtering, and compression. Two deep learning models work in tandem: one applies the watermark, the other identifies it.
That measure, in theory, means there is no explicit watermark hidden in invisible characters or specific output patterns. Instead, in the words of DeepMind itself, the system introduces “additional information in the token distribution at the point of generation by modulating the likelihood of tokens being generated.” In effect, the machine-generated copy has subtly adjusted outputs that give it a statistical signature, one that can be detected especially in minimally edited or adapted copy.
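To make the idea concrete, here is a toy sketch of token-distribution watermarking in Python. It follows the spirit of published “green list” watermarking schemes rather than SynthID itself, whose production algorithm is more sophisticated and not fully public; the key, favoured-set test, and scoring below are all invented for illustration.

```python
import hashlib

# Toy sketch of token-distribution watermarking, NOT SynthID's actual
# algorithm. A secret key plus the previous token deterministically mark
# roughly half the vocabulary as "favoured"; generation nudges probability
# toward favoured tokens, and a detector with the same key measures how
# often output lands in that set.

SECRET_KEY = "publisher-demo-key"  # hypothetical shared secret

def favoured(token: str, prev_token: str) -> bool:
    """Keyed, context-dependent test: is this token in the favoured half?"""
    digest = hashlib.sha256(
        f"{SECRET_KEY}:{prev_token}:{token}".encode()
    ).digest()
    return digest[0] % 2 == 0

def watermarked_next(logits: dict, prev_token: str, boost: float = 2.0) -> str:
    """Pick the next token after boosting logits of favoured tokens.
    (Greedy pick for simplicity; a real system samples from the softmax.)"""
    return max(
        logits,
        key=lambda tok: logits[tok] + (boost if favoured(tok, prev_token) else 0.0),
    )

def favoured_rate(tokens: list) -> float:
    """Fraction of tokens in the favoured set given their predecessor.
    Roughly 0.5 for unwatermarked text; markedly higher when watermarked."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(favoured(t, p) for p, t in pairs) / max(len(pairs), 1)
```

The detector never needs the original model or prompt, only the shared key: unwatermarked text should land in the favoured set about half the time, while watermarked text lands there markedly more often. This is also why heavy rewriting or translation erodes the signal.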
DeepMind’s description of the SynthID concept accepts that copy will be difficult to track once put through AI translation or thoroughly rewritten. For those determined to keep using AI writing without disclosing it, that provides one black-hat alternative to transparency.
SynthID will also need to be adopted by other AI foundation models, or it risks simply being a means to identify Gemini output rather than machine output more broadly. That’s an industry-wide shift that may prove challenging, especially given the vast ecosystem of open-weight and fine-tuned models that can easily be deployed to rework written copy and strip a text watermark. Ultimately, it reaffirms that transparency is better opted into than mandated.
At the industry level, the Coalition for Content Provenance and Authenticity (C2PA) is building a broader standard. Its “Content Credentials” protocol uses cryptographically signed metadata to document a media asset’s origin and editing history. These standards will increasingly become part of a publisher’s tech infrastructure, even for publishers who never display a label on text. Ad networks, syndication partners, and platforms may require provenance data for media assets regardless of whether the law compels a visible disclaimer in a given jurisdiction.
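As a rough illustration of what signed provenance metadata involves, here is a simplified Python sketch. The real Content Credentials specification binds manifests with X.509 certificates and COSE signatures; the HMAC and field names below are stand-in assumptions that show the core idea of a tamper-evident record of an asset’s origin and edit history.

```python
import hashlib
import hmac
import json

# Simplified stand-in for a C2PA-style provenance manifest. This is an
# illustration of the concept, not the Content Credentials spec: a real
# implementation uses certificate-based COSE signatures, not a shared HMAC.

SIGNING_KEY = b"newsroom-demo-key"  # hypothetical key, not a real credential

def sign_manifest(asset_bytes: bytes, history: list) -> dict:
    """Bind an asset hash and its edit history under one signature."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "edit_history": history,  # e.g. ["generated:model-x", "cropped"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the asset still matches its hash."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )
```

Verification fails if either the manifest or the asset bytes have been altered, which is precisely the property ad networks, syndication partners, and platforms would rely on when they demand provenance data.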
How newsrooms are responding
The institutional responses so far reveal a gap between large organisations and everyone else.
The BBC’s generative AI steering group treats the technology as a “creative assistant” for translation, subtitling, and archive maximisation, while prohibiting it from the newsroom’s core function of producing factual journalism. The AP’s work with local newsrooms has focused on automating mundane outputs like sports scores and social media posts, with human-in-the-loop safeguards to prevent AI-generated misinformation from reaching publication.
These are resourced organisations with dedicated compliance teams. For a smaller publisher running a four-person editorial operation, the challenge is different. Adding an AI drafting step to a workflow is quick. Adding a logging step, with named responsibility, timestamped approvals, and retained records of how each review was conducted, is an operational cost that scales with every piece of content, though it is one that can be resolved technically.
The EU’s framework does not distinguish between the BBC and a regional news site with two editors. The documentation threshold is the same, though enforcement and litigation will likely vary considerably between niche publishers and media giants.
The reader side of the equation
Regulation is one pressure. Reader expectations are another, and they do not always align neatly with what the law requires.
Research involving focus groups suggests audiences want standardised transparency measures, driven partly by a desire for accountability when things go wrong. Readers want to know not just whether AI was involved, but who is responsible for the result. That instinct maps well onto the editorial exception model, where a named editor’s sign-off is where the buck stops.
But there is also a detection bias at play. Audiences react with irritation when writing reads as machine-generated, regardless of whether AI was actually used. Repetitive structures, excessive abstraction, the presence of certain rhetorical tics: these trigger scepticism before any label does. Research on reader perception indicates that transparency empowers readers to take protective action against potential manipulation, but it also suggests that disclosure alone does not restore trust if the work itself feels automated.
An interesting case study in reader reaction came from Panagram’s analysis of a Guardian writer’s copy, which found that a large number of recent articles were presented within the platform as 100% AI-generated, with no AI disclosure in place. The Guardian denied that any AI had been used in the writing process, but the public backlash on social media was considerable.
A label saying “AI-assisted” on a piece that reads like an undifferentiated summary may satisfy the law. It will not satisfy the reader.
What sits across the Atlantic
The United States has no equivalent horizontal framework. Instead, a patchwork of state-level laws targets specific harms. New York’s S.8420-A, signed in December 2024, requires disclosure of “synthetic performers” in advertisements from June 2026, but explicitly exempts media outlets from liability for publishing non-compliant ads placed by third parties. A December 2025 White House Executive Order discouraged further state-level AI legislation in favour of a unified federal approach, though no federal disclosure law has materialised.
For publishers operating across both markets, the EU’s framework will likely set the effective floor. Compliance built for Article 50 will generally exceed anything a US state currently demands.
Six months out
The regulations will crystallise questions that have already been circulating in the industry. Who owns the byline when AI drafted the first version? What records do you keep, and where, when someone asks months later how a sentence was produced? When a synthetic image illustrates a story, what does “at first sight” disclosure look like in a social embed or a syndicated feed? How will reader perception shape disclosure decisions when some outlets are clearly not transparent about, or even cognisant of, AI use in their newsrooms and editorial teams?
These are workflow questions, and they will vary by organisation. The rules arriving in August do not resolve them; they make them unavoidable. What they reward is something most credible newsrooms already claim to practise: editorial ownership that can be shown, not just asserted. The publishers best positioned for the new regime are those whose work already carries the marks of human judgment, sourced evidence, and recognisable editorial voice.
These disclosures may become increasingly watered down as AI tools become more deeply integrated into workflows and writing processes. Journalists and editors may struggle to draw the authorship line between tools that transcribe and verify copy and those that generate initial drafts. With much of journalistic and digital content being informational rather than investigative, the distinction of authorship may become more abstract: if the information is verified between author, editor and reader, what difference will the machine’s role in the writing make?
Reader trust will remain dependent on the quality of the journalism underneath it, but it will be critical for teams and publishers to be able to prove that the tools and processes they have put in place preserve the human direction that’s necessary for trusted content.
Written by
Peter Stuart
Co-founder of Velora. Former Cyclingnews editor and digital editor of Rouleur, with nearly 20 years leading editorial strategy and content operations for publishers.
LinkedIn →
Velora helps publishers draft faster. We monitor your sources, research each story, and deliver structured drafts to your CMS, so editors can focus on what matters.