ASCI Releases Draft Guidelines for Responsible Labelling of AI-Generated Content in Advertising
The Advertising Standards Council of India (ASCI) has released draft guidelines for the responsible labelling of synthetically generated content in advertising. The guidelines align with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, amended on February 10, and aim to ensure transparency while avoiding consumer label fatigue around synthetically generated information.
As brands increasingly turn to Artificial Intelligence (AI) to power their campaigns, the draft guidelines advocate a risk-based approach, focusing on consumer outcomes rather than regulating the technology itself. AI use in advertising is considered misleading or harmful only when it creates unfulfillable expectations, exploits vulnerable populations, depicts unsafe situations, or replicates a real person's likeness without consent.
The requirement to label AI-generated content is based on the risk it poses to consumers. The guidelines classify AI-generated advertising content into three risk categories:
1. High Risk (Prohibited Content): High-risk advertisements are those that are illegal, infringe on rights, make misleading claims, or violate the ASCI Code. These violate the Code even if an AI label is used. Examples include:
- Fabricating endorsements or testimonials
- Exaggerating product results or features through claims or visual representations to create a misleading impression
- Creating fake locations that appear realistic to the consumer
- Using deepfakes, copyrighted work or a person’s likeness without consent
- Using AI to generate fictional authority figures with identifiable cues, such as an AI-generated fake doctor promoting a supplement and thereby implying medical endorsement or expertise
2. Medium Risk (Labelling Required): Medium-risk advertisements are those where AI use materially influences consumer decisions and the lack of disclosure would mislead consumers. Labelling is mandatory in these cases to help consumers understand the nature of the representation. Examples include:
- Using virtual or synthetically generated influencers and ambassadors
- Replicating a real person’s likeness or voice, even with their consent, for personalised messaging
- Using synthetically generated visuals for product performance, unless the visuals replicate how the product actually performs
- Creating realistic events, settings or situations entirely with AI
- Demonstrating a product that does not currently exist
- Creating AI-generated exaggerated sound effects that are highly relevant to the product’s core features
- Using AI for paid or sponsored product suggestions, which must specifically be labelled as ‘sponsored by’
3. Low Risk (No Labelling Required): Low-risk advertisements feature minor modifications or use AI in ways that have no material impact on a consumer’s ability to make an informed choice. No label is required for:
- Minor Enhancements: Routine editing, colour correction, noise reduction, standard blemish removal, and minor lighting tweaks that do not alter the substance or core claims of the ad
- Background and Ambient Elements: Purely decorative AI-generated backgrounds, abstract skylines, ambient music, jingles, or background sound effects (like crowd cheers) that are unrelated to the product’s actual capabilities or promise
- Fantastical Elements: Obvious, unrealistic effects that audiences recognise as not depicting reality, such as dragons or fairies
- Administrative and Text Uses: Generating or enhancing advertising copy, creating audio descriptions for the visually impaired, or preparing documents in good faith without creating false records.
ASCI invites feedback from industry, consumer groups and other stakeholders by June 13, 2026, after which the guideline finalisation process will begin. Stakeholders can submit their feedback at contact@ascionline.in.