
AI Ad Disclosure: What the New Rules Mean for Agencies and Brands

AI ad disclosure is no longer optional in New York — and it's heading everywhere else too.

If you're using AI to create people in your ads — even people who don't exist, even when it seems obvious — you may soon be legally required to say so. New York signed the requirement into law in December 2025. The EU follows in August 2026. More states are right behind them.

Here's what you actually need to know.

What did New York actually pass?

In December 2025, New York Governor Kathy Hochul signed three overlapping laws that directly affect how AI can be used in advertising.

The most significant one is SB-8420A. It requires a conspicuous AI ad disclosure any time an ad uses a "synthetic performer" — which is essentially any AI-generated human-like figure, whether or not it resembles a real person.

It is expected to take effect in June 2026.

The other two laws cover real talent. One expands protections for deceased performers, removing a loophole that previously let brands use a disclaimer instead of getting consent. The other — the Fashion Workers Act, already in effect since June 2025 — requires separate written consent before AI is used to extend or replicate a real model's likeness.

What counts as a synthetic performer?

This is the part that surprises most teams.

A synthetic performer doesn't have to look like anyone specific. It's any AI-created or AI-modified audiovisual or visual asset that creates the impression of a human performing.

So if you generate a photorealistic person to wear your product, that's a synthetic performer. If you use AI to composite a person into a new environment, that's a synthetic performer. If your "model" was created entirely by a text-to-image tool, that's a synthetic performer.

What it's not: minor retouching, background removal, or pure text. The law is focused on human-looking performers, not the broader use of AI in post-production.

What does "conspicuous" actually mean?

The law doesn't prescribe an exact format. It just says "conspicuous."

That's actually important. A tiny disclaimer buried in the fine print probably won't cut it. It needs to be something a real person scrolling past your ad could reasonably notice.

Most legal advisors expect this to look like an on-screen text label, a verbal disclaimer, or a visible watermark — applied consistently across formats.

The platforms are already moving in this direction independently. Meta and Google have both introduced mandatory AI content labels for ads. So this likely won't feel out of place for long.

What about real talent and AI?

New York also has rules here, and they're already in effect.

The Fashion Workers Act requires a separate written consent document before you use AI to extend or replicate a real model's likeness — even if you shot with that model before.

This includes using AI to place a real model in new locations, generating additional images of a real person from a single shoot, and modifying a real person's appearance digitally beyond basic retouching.

All of these need a standalone consent document. Not something buried in the talent contract. A separate agreement that specifies what AI use is permitted, for how long, and how the model is compensated for it.

Penalties start at $3,000 per violation and go up to $5,000 for repeat violations.

Is this just a New York issue?

Not really.

Tennessee's ELVIS Act has been in force since July 2024 — the first US law protecting voice and likeness from AI cloning without consent. California's AI Transparency Act is coming in August 2026. A dozen other states have active bills.

The EU AI Act requires machine-readable labelling and clear human-readable AI ad disclosure for all AI-generated content by August 2026. Any brand running ads in Europe will need to meet this standard — and it will effectively set global infrastructure expectations.
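"Machine-readable" labelling in practice usually means embedding a recognised provenance marker in the asset's metadata. One vocabulary already in wide use is IPTC's digital source type code for fully AI-generated media. As a rough sketch of the idea, the snippet below writes that marker to a JSON sidecar file — a real pipeline would embed it in XMP metadata or a C2PA manifest instead, and the function name here is illustrative, not from any standard:

```python
import json
from pathlib import Path

# IPTC's controlled-vocabulary term for media generated entirely by AI.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def write_ai_label_sidecar(asset_path: str, human_label: str) -> Path:
    """Write a machine-readable sidecar declaring an asset as AI-generated.

    Simplified sketch: production pipelines would embed this in XMP or a
    C2PA manifest rather than a sidecar file.
    """
    sidecar = Path(asset_path).with_suffix(".ai-label.json")
    sidecar.write_text(json.dumps({
        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
        # The conspicuous, audience-facing half of the requirement:
        "humanReadableLabel": human_label,
    }, indent=2))
    return sidecar

# write_ai_label_sidecar("hero.jpg", "This ad features an AI-generated performer")
```

The point of the two fields is that the EU requirement has two halves: a marker machines can detect, and a label people can read.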

The direction is consistent everywhere: more transparency, more consent, more accountability.

Will federal rules override all of this?

This is the most common question right now, and the honest answer is: not yet.

President Trump signed an executive order in December 2025 specifically aimed at creating a single national AI framework and challenging state laws. But an executive order can't overturn existing state legislation on its own. That takes Congress or the courts.

For now, state laws are enforceable. The FTC is expected to publish a policy statement on AI in advertising that will give more clarity — but no one expects it to eliminate New York's AI ad disclosure requirement.

Plan for the state rules. Adjust if federal law eventually changes things.

What should you be doing right now?

Before June 2026, there are three things worth getting ahead of.

Audit your AI use in active campaigns. Know which assets involve AI-generated people. If you don't have that logged, start logging it now.

Define your disclosure format. You have flexibility in how you disclose, but you need a consistent approach before the law takes effect. Pick a format, test it across placements, and make it part of your creative spec.

Update your talent contracts. If you use real models or talent and have any plans to extend their work with AI, you need a separate consent document in place. This isn't covered by most existing agreements.

These aren't heavy lifts. They're workflow changes. But they're much easier to build in now than to retrofit after a campaign is already running.
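The audit step above can be as lightweight as a structured log: which assets involve a synthetic performer, which disclosure label they carry, and whether a standalone consent document is on file for any real talent extended by AI. A minimal sketch in Python — the `AdAsset` record and every field name are illustrative, not drawn from any statute:

```python
from dataclasses import dataclass

@dataclass
class AdAsset:
    """One creative asset in a campaign, tracked for AI-disclosure review.

    Field names are illustrative; adapt them to your own asset pipeline.
    """
    asset_id: str
    uses_synthetic_performer: bool = False    # AI-generated human-like figure?
    disclosure_label: str = ""                # e.g. on-screen text, watermark
    real_talent_extended_by_ai: bool = False  # AI extension of a real model
    standalone_consent_on_file: bool = False  # separate written consent doc

def compliance_gaps(asset: AdAsset) -> list[str]:
    """Flag the two gaps discussed above: missing disclosure, missing consent."""
    gaps = []
    if asset.uses_synthetic_performer and not asset.disclosure_label:
        gaps.append("synthetic performer without conspicuous disclosure")
    if asset.real_talent_extended_by_ai and not asset.standalone_consent_on_file:
        gaps.append("AI use of real talent without standalone consent")
    return gaps

# Example: a fully generated model with no label assigned yet
asset = AdAsset(asset_id="spring-hero-01", uses_synthetic_performer=True)
print(compliance_gaps(asset))
```

Running this over a campaign's asset list before launch turns the checklist above into something a producer can actually run, rather than a policy memo nobody reads.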

FAQ

Does AI ad disclosure apply to every ad that uses AI?

No. The New York law is specifically about synthetic performers — AI-generated human-like figures. It doesn't apply to AI-assisted editing, background generation, colour grading, or copywriting. If there's no synthetic person in the ad, the AI ad disclosure requirement doesn't apply.

What happens if we don't comply?

A first violation draws a $1,000 civil penalty in New York. Subsequent violations are $5,000 each. Penalties are per violation, not per campaign.

What about AI-generated voices?

Voice is covered separately. Tennessee's ELVIS Act prohibits AI voice cloning of real individuals without consent. New York's synthetic performer law covers audiovisual assets, which includes voice when combined with a visual performance. If you're generating AI voices in ads, check which laws apply in each state where you're running.

Do platforms enforce this independently of the law?

Yes. Meta and Google both have AI content labelling policies that apply regardless of local law. Running a campaign that meets New York's standard but doesn't meet platform policy can still result in the ad being pulled.

What about the EU?

The EU AI Act's transparency requirements take effect in August 2026 and carry penalties of up to €35 million or 7% of global annual turnover. Any brand with EU exposure will need machine-readable watermarking and clear audience-facing AI ad disclosure built into their content pipeline before that date.

A quick reset

The rules are new but the underlying principle isn't.

Advertising has always required honesty about who's being shown and why. AI has made it easier to blur those lines — and regulation is catching up.

The agencies and brands that handle AI ad disclosure well won't be the ones waiting to see how enforcement plays out. They'll be the ones building good habits now, before anyone's asking them to.