China has taken a significant step toward regulating the rapidly expanding market for AI-generated digital humans, releasing draft rules that would impose mandatory disclosure requirements, strict consent protocols for biometric data, and specific protections for minors. The proposed framework represents one of the world's first comprehensive attempts to govern synthetic media that blurs the line between human and artificial interaction.
What the Draft Rules Propose
The regulations, published for public comment, target what the document defines as "digital humans"—software-generated personas that can look, speak, and interact like real people through technologies like deepfakes, generative AI, and real-time animation. These synthetic entities have become increasingly common in customer service, entertainment, sales, and education across Chinese platforms.
The core provisions focus on three key areas:
1. Mandatory Disclosure and Labeling
All content featuring digital humans must carry "clear labels" indicating the synthetic nature of the entity. This applies across distribution channels including websites, applications, and smart devices. The requirement aims to eliminate user confusion about whether they're interacting with a real person or an AI-generated simulation.
2. Biometric Consent Requirements
Companies are prohibited from using an individual's face, voice, or other personal biometric data to create digital human representations without obtaining explicit permission. This extends to both living individuals and deceased persons, whose data cannot be used without family consent.
3. Child Protection Measures
The most specific restrictions target minors:
- Complete ban on "virtual intimate relationship services" for users under 18
- Prohibition against designs that could "mislead minors or pull them into compulsive use"
- Restrictions on digital human content that might encourage unhealthy social comparison or excessive spending
Technical Implementation Challenges
While the regulatory intent is clear, implementation presents technical hurdles. The rules don't specify:
- Technical standards for labeling (watermarking, metadata, visual indicators)
- Verification mechanisms to ensure compliance
- Enforcement procedures for cross-platform content
- Distinctions between different digital human technologies (2D avatars vs. 3D photorealistic models)
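One way to close the labeling gap above would be a standardized disclosure record attached to content metadata. The sketch below is purely illustrative; the field names, the `is_compliant` check, and the "avatar-engine-v2" generator name are all hypothetical, since the draft rules specify no schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SyntheticDisclosure:
    """Hypothetical disclosure record a platform might attach to content metadata."""
    entity_type: str        # e.g. "digital_human"
    generator: str          # system that produced the content
    label_text: str         # human-readable label shown to users
    consent_obtained: bool  # biometric-consent flag for any real person's likeness

    def to_json(self) -> str:
        # Serialize for embedding in content metadata or an API response
        return json.dumps(asdict(self), ensure_ascii=False)

def is_compliant(record: SyntheticDisclosure) -> bool:
    # A draft-rule check would presumably require a visible label
    # and documented consent for any biometric source data.
    return bool(record.label_text) and record.consent_obtained

disclosure = SyntheticDisclosure(
    entity_type="digital_human",
    generator="avatar-engine-v2",
    label_text="AI-generated digital human",
    consent_obtained=True,
)
print(is_compliant(disclosure))  # True
```

A real standard would also need to survive re-encoding and cross-platform redistribution, which is exactly the verification problem the draft leaves open.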
Platforms like Douyin (China's TikTok), Alibaba's customer service bots, and Tencent's virtual influencers would need to retrofit existing systems with disclosure mechanisms. The consent requirements for biometric data would particularly affect companies offering "digital twin" services where users can create personalized avatars.
Global Context and Precedents
China's move follows similar regulatory discussions in other jurisdictions:
- European Union: The AI Act includes transparency requirements for AI systems interacting with humans
- United States: Several states have proposed deepfake disclosure laws, though none specifically target digital humans
- South Korea: Has implemented some disclosure requirements for AI-generated content in media
What distinguishes China's approach is its specificity toward "digital humans" as a distinct category and its explicit focus on child protection in virtual relationships—a response to growing concerns about AI companions and their psychological effects on young users.
Market Impact and Industry Response
The digital human market in China has seen explosive growth, with industry reports estimating it could reach ¥270 billion (approximately $37 billion) by 2025. Major players include:
- Baidu: Offers digital human creation platforms for enterprises
- Alibaba: Deploys virtual customer service agents across its ecosystem
- ByteDance: Develops virtual influencers for content creation
- Startups: Companies like Shadow Factory and Xiaoice create hyper-realistic digital humans
Industry groups have expressed cautious support for the regulations while seeking clarification on implementation details. The consent requirements could slow development cycles but might also build user trust—a critical factor for adoption in sensitive applications like healthcare and education.
gentic.news Analysis
This regulatory proposal represents a logical next step in China's evolving AI governance framework, which has progressed from general principles to increasingly specific domain regulations. It follows China's 2023 interim measures for generative AI services, which required watermarking of AI-generated content but didn't specifically address interactive digital humans.
The timing is significant—coming just months after several Chinese tech companies faced public criticism for deploying customer service bots that users couldn't distinguish from human agents. This aligns with our previous coverage of Alibaba's "AI Customer Service Scandal" in November 2025, where users reported frustration with undisclosed automated systems.
From a technical perspective, the regulations create both constraints and opportunities. While compliance will require additional engineering (likely through standardized APIs for disclosure), it also establishes clearer boundaries for ethical development. Companies that successfully implement transparent systems may gain competitive advantage in trust-sensitive markets like finance and healthcare.
The child protection measures are particularly noteworthy, reflecting growing global concern about AI's psychological impacts. Research from Stanford's Human-Centered AI Institute (covered in our January 2026 analysis) has shown concerning attachment patterns between teens and AI companions. China's outright ban on virtual intimate services for minors is more aggressive than approaches in Western countries, which have focused more on age verification than service prohibition.
Looking forward, these regulations could influence global standards much as China's data protection laws have. If major platforms successfully implement the labeling requirements, we may see similar approaches adopted by international coalitions like the Partnership on AI. However, the effectiveness will depend on enforcement—without robust verification systems, labels could become mere formalities.
Frequently Asked Questions
What exactly is a "digital human" under these rules?
A digital human is defined as a software-generated entity that can look, speak, and interact like a real person through AI technologies. This includes 2D and 3D avatars, deepfake videos, real-time animated characters, and voice assistants with human-like personas used in customer service, entertainment, or social interaction.
How will the mandatory labeling work in practice?
The draft rules don't specify technical implementation details, but likely approaches include visual watermarks (similar to "Sponsored" labels on social media), metadata tagging, audible disclosures in voice interactions, or dedicated interface elements. Platforms will need to develop standardized methods that work across different content types while remaining noticeable but not overly intrusive.
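Since the disclosure mechanism would likely vary by distribution channel, platforms might route each content type to an appropriate method. A minimal sketch, assuming a channel-to-mechanism mapping of our own invention (the rules mandate no such taxonomy):

```python
def disclosure_for_channel(channel: str) -> str:
    """Pick a disclosure mechanism per distribution channel.

    The mapping is illustrative only; the draft rules require 'clear labels'
    but do not prescribe per-channel mechanisms.
    """
    mapping = {
        "video": "on-screen watermark plus embedded metadata tag",
        "voice": "spoken notice at session start",
        "chat": "persistent 'AI agent' badge in the interface",
        "smart_device": "audible disclosure on first use plus a settings entry",
    }
    # Fall back to a generic label for channels not covered above
    return mapping.get(channel, "generic 'AI-generated' label")

print(disclosure_for_channel("voice"))  # spoken notice at session start
```

The design question a real implementation would face is balancing noticeability against intrusiveness, particularly for voice-only interactions where a visual badge is impossible.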
Do these rules apply to international companies operating in China?
Yes, the regulations would apply to any company offering digital human services to users in China, regardless of where the company is headquartered. This follows the pattern of China's existing internet regulations, which require compliance from both domestic and foreign platforms serving Chinese users.
What happens if companies violate these rules?
While penalty details aren't specified in the draft, similar Chinese regulations typically include fines, suspension of services, and in severe cases, revocation of operating licenses. The final version will likely specify tiered penalties based on violation severity, with particular emphasis on violations involving minors or non-consensual use of biometric data.