
China Proposes Mandatory Labels, Consent Rules for AI Digital Humans

China has proposed its first legal framework specifically targeting AI-generated digital humans, requiring mandatory disclosure labels, explicit consent for biometric data, and strict child-safety measures including bans on virtual intimate services for users under 18.

Gala Smith & AI Research Desk · 15h ago · 6 min read · AI-Generated
China Proposes First Legal Framework for AI Digital Humans, Mandating Labels and Consent

China has taken a significant step toward regulating the rapidly expanding market for AI-generated digital humans, releasing draft rules that would impose mandatory disclosure requirements, strict consent protocols for biometric data, and specific protections for minors. The proposed framework represents one of the world's first comprehensive attempts to govern synthetic media that blurs the line between human and artificial interaction.

What the Draft Rules Propose

The regulations, published for public comment, target what the document defines as "digital humans"—software-generated personas that can look, speak, and interact like real people through technologies like deepfakes, generative AI, and real-time animation. These synthetic entities have become increasingly common in customer service, entertainment, sales, and education across Chinese platforms.

The core provisions focus on three key areas:

1. Mandatory Disclosure and Labeling

All content featuring digital humans must carry "clear labels" indicating the synthetic nature of the entity. This applies across distribution channels including websites, applications, and smart devices. The requirement aims to eliminate user confusion about whether they're interacting with a real person or an AI-generated simulation.

2. Biometric Consent Requirements

Companies are prohibited from using an individual's face, voice, or other personal biometric data to create digital human representations without obtaining explicit permission. This extends to both living individuals and deceased persons, whose data cannot be used without family consent.
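A compliance check for this provision might look like the following sketch. Everything here is hypothetical (the ledger structure, the `may_use_biometric` function, and the `granted_by` field distinguishing subject consent from family consent for deceased persons are all illustrative assumptions, not anything specified in the draft rules):

```python
# Hypothetical consent ledger keyed by (subject_id, biometric_type).
# "granted_by" records whether the subject or, for deceased persons,
# their family provided the consent the draft rules require.
CONSENT_LEDGER = {
    ("user-841", "face"): {
        "granted": True,
        "granted_by": "subject",
        "timestamp": "2026-01-10T08:00:00+00:00",
    },
    ("user-112", "voice"): {
        "granted": True,
        "granted_by": "family",  # subject is deceased
        "timestamp": "2026-02-02T14:30:00+00:00",
    },
}

def may_use_biometric(subject_id: str, biometric_type: str) -> bool:
    """Allow use of a face/voice only if explicit consent is on record."""
    record = CONSENT_LEDGER.get((subject_id, biometric_type))
    return bool(record and record["granted"])
```

The point of the sketch is that consent must be checked per subject *and* per data type: having a user's face on file says nothing about their voice.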

3. Child Protection Measures

The most specific restrictions target minors:

  • Complete ban on "virtual intimate relationship services" for users under 18
  • Prohibition against designs that could "mislead minors or pull them into compulsive use"
  • Restrictions on digital human content that might encourage unhealthy social comparison or excessive spending
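An age gate implementing the first restriction could be as simple as the sketch below. The service-type names and the `service_allowed` helper are invented for illustration; the draft rules do not define a taxonomy of service categories:

```python
# Hypothetical category names; the draft rules do not enumerate these.
RESTRICTED_FOR_MINORS = {"virtual_intimate_relationship"}

def service_allowed(service_type: str, user_age: int) -> bool:
    """Block restricted digital-human services for users under 18."""
    if user_age < 18 and service_type in RESTRICTED_FOR_MINORS:
        return False
    return True
```

The hard part, as the FAQ below notes, is not this check but verifying `user_age` reliably without collecting more personal data than the rules themselves permit.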

Technical Implementation Challenges

While the regulatory intent is clear, implementation presents technical hurdles. The rules don't specify:

  • Technical standards for labeling (watermarking, metadata, visual indicators)
  • Verification mechanisms to ensure compliance
  • Enforcement procedures for cross-platform content
  • Distinctions between different digital human technologies (2D avatars vs. 3D photorealistic models)
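One plausible labeling approach, pending official technical standards, is a machine-readable disclosure embedded in content metadata. The schema below is purely a sketch (the `SyntheticDisclosure` fields, the `synthetic_disclosure` key, and the generator name are all assumptions, not anything the draft rules prescribe):

```python
from dataclasses import dataclass, asdict

@dataclass
class SyntheticDisclosure:
    """Hypothetical machine-readable label marking content as AI-generated."""
    is_synthetic: bool
    entity_type: str   # e.g. "digital_human"
    generator: str     # producing platform or model
    label_text: str    # human-readable disclosure shown to users

def attach_disclosure(content_metadata: dict, disclosure: SyntheticDisclosure) -> dict:
    """Embed the disclosure in the content's metadata envelope."""
    tagged = dict(content_metadata)
    tagged["synthetic_disclosure"] = asdict(disclosure)
    return tagged

meta = attach_disclosure(
    {"title": "Customer service session", "format": "video"},
    SyntheticDisclosure(True, "digital_human", "avatar-engine-v2",
                        "AI-generated content"),
)
```

A metadata label alone would not satisfy the "clear labels" requirement for end users, but it gives platforms and regulators something verifiable to audit alongside visual or audible disclosures.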

Platforms like Douyin (China's TikTok), Alibaba's customer service bots, and Tencent's virtual influencers would need to retrofit existing systems with disclosure mechanisms. The consent requirements for biometric data would particularly affect companies offering "digital twin" services where users can create personalized avatars.

Global Context and Precedents

China's move follows similar regulatory discussions in other jurisdictions:

  • European Union: The AI Act includes transparency requirements for AI systems interacting with humans
  • United States: Several states have proposed deepfake disclosure laws, though none specifically target digital humans
  • South Korea: Has implemented some disclosure requirements for AI-generated content in media

What distinguishes China's approach is its specificity toward "digital humans" as a distinct category and its explicit focus on child protection in virtual relationships—a response to growing concerns about AI companions and their psychological effects on young users.

Market Impact and Industry Response

The digital human market in China has seen explosive growth; industry reports estimate it could reach ¥270 billion (roughly $37 billion) by 2025. Major players include:

  • Baidu: Offers digital human creation platforms for enterprises
  • Alibaba: Deploys virtual customer service agents across its ecosystem
  • ByteDance: Develops virtual influencers for content creation
  • Startups: Companies like Shadow Factory and Xiaoice create hyper-realistic digital humans

Industry groups have expressed cautious support for the regulations while seeking clarification on implementation details. The consent requirements could slow development cycles but might also build user trust—a critical factor for adoption in sensitive applications like healthcare and education.

gentic.news Analysis

This regulatory proposal represents a logical next step in China's evolving AI governance framework, which has progressed from general principles to increasingly specific domain regulations. It follows China's 2023 interim measures for generative AI services, which required watermarking of AI-generated content but didn't specifically address interactive digital humans.

The timing is significant—coming just months after several Chinese tech companies faced public criticism for deploying customer service bots that users couldn't distinguish from human agents. This aligns with our previous coverage of Alibaba's "AI Customer Service Scandal" in November 2025, where users reported frustration with undisclosed automated systems.

From a technical perspective, the regulations create both constraints and opportunities. While compliance will require additional engineering (likely through standardized APIs for disclosure), it also establishes clearer boundaries for ethical development. Companies that successfully implement transparent systems may gain competitive advantage in trust-sensitive markets like finance and healthcare.

The child protection measures are particularly noteworthy, reflecting growing global concern about AI's psychological impacts. Research from Stanford's Human-Centered AI Institute (covered in our January 2026 analysis) has shown concerning attachment patterns between teens and AI companions. China's outright ban on virtual intimate services for minors is more aggressive than approaches in Western countries, which have focused more on age verification than service prohibition.

Looking forward, these regulations could influence global standards much as China's data protection laws have. If major platforms successfully implement the labeling requirements, we may see similar approaches adopted by international coalitions like the Partnership on AI. However, the effectiveness will depend on enforcement—without robust verification systems, labels could become mere formalities.

Frequently Asked Questions

What exactly is a "digital human" under these rules?

A digital human is defined as a software-generated entity that can look, speak, and interact like a real person through AI technologies. This includes 2D and 3D avatars, deepfake videos, real-time animated characters, and voice assistants with human-like personas used in customer service, entertainment, or social interaction.

How will the mandatory labeling work in practice?

The draft rules don't specify technical implementation details, but likely approaches include visual watermarks (similar to "Sponsored" labels on social media), metadata tagging, audible disclosures in voice interactions, or dedicated interface elements. Platforms will need to develop standardized methods that work across different content types while remaining noticeable but not overly intrusive.
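Because the mechanism will likely vary by modality, platforms may end up dispatching on content type. The mapping below is purely illustrative; none of these mechanisms or category names come from the draft rules:

```python
def disclosure_for(content_type: str) -> str:
    """Pick a disclosure mechanism per modality (illustrative mapping only)."""
    return {
        "video": "on-screen watermark + metadata tag",
        "audio": "spoken disclosure at session start",
        "chat":  "persistent interface badge",
    }.get(content_type, "metadata tag")  # fallback for unlisted modalities
```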

Do these rules apply to international companies operating in China?

Yes, the regulations would apply to any company offering digital human services to users in China, regardless of where the company is headquartered. This follows the pattern of China's existing internet regulations, which require compliance from both domestic and foreign platforms serving Chinese users.

What happens if companies violate these rules?

While penalty details aren't specified in the draft, similar Chinese regulations typically include fines, suspension of services, and in severe cases, revocation of operating licenses. The final version will likely specify tiered penalties based on violation severity, with particular emphasis on violations involving minors or non-consensual use of biometric data.


AI Analysis

China's digital human regulations represent a significant maturation of AI governance—moving from abstract principles to concrete, enforceable requirements for a specific technology category. This follows the pattern we've observed in China's regulatory approach: rapid deployment of new technologies followed by targeted regulation once adoption reaches critical mass and societal concerns emerge.

The consent requirements for biometric data are particularly significant given China's previous controversies around facial recognition. By requiring explicit permission for using faces or voices in digital humans, regulators are attempting to balance innovation with individual rights—a challenging equilibrium that Western regulators are also grappling with.

From a technical implementation perspective, the most interesting challenge will be developing labeling systems that work across different modalities (visual, auditory, interactive) without degrading user experience. This could spur innovation in subtle but effective disclosure mechanisms—perhaps through haptic feedback, specific color schemes, or standardized auditory cues.

The child protection measures, while ethically justified, present definitional challenges. What constitutes a "virtual intimate relationship service" versus legitimate therapeutic or educational interaction? How will platforms verify age without compromising privacy? These implementation questions will determine whether the regulations achieve their protective goals or simply drive such services underground.

Compared to the EU's broader AI Act approach, China's domain-specific regulation allows for more tailored requirements but risks creating a patchwork of rules as new AI applications emerge. We're likely to see similar domain-specific regulations for autonomous vehicles, medical AI, and educational AI in the coming year.