Outcry over AI images in drama series

A Chinese short drama series suspected of using artificial intelligence to obtain people's facial data has sparked public outcry in recent days, prompting regulators and legal experts to stress that technological advancement must not infringe on personal rights.

In late March, several bloggers — including traditional Chinese attire enthusiasts and models — accused the popular AI-generated series Peach Blossom Hairpin of using technology to replicate their facial features, outfits and makeup without consent to create characters that were widely distributed on short-video platforms.

By then, the series had already garnered more than 40 million views on Hongguo, a micro-drama platform, and some of the alleged victims said they were preparing to take legal action.

On Friday, Hongguo said on its official WeChat account that the series had been removed and that no new content would be uploaded for 15 days because its creator had failed to provide sufficient proof of compliance with regulations governing facial imagery.

The platform said adherence to legal and regulatory standards is a non-negotiable baseline, but noted that short dramas, as a new form of creative product, present major challenges for content review, particularly with the rise of AI tools.

It pledged to strengthen content review processes, upgrade verification technologies and improve authorization procedures to foster a more regulated environment for content creation and distribution.

Although the producer has not confirmed whether bloggers' photos were used as templates for AI generation, legal experts said such actions could still constitute infringement.

Zhao Zhanling, a lawyer at Beijing Javy Law Firm, said that under the Civil Code and judicial practice, if an AI-generated face leads the public to associate it with a specific individual, it may constitute infringement.

"Copying a person's image and processing it with AI is a typical example of using information technology to violate someone's portrait rights," Zhao said.

As AI technology becomes more prevalent in the film and television industry, similar cases of AI-powered face and voice swapping have become increasingly frequent.

Last month, the Beijing Internet Court disclosed a case in which two companies misused an actress' images in a short drama through AI face-swapping technology.

The court ruled in favor of the actress, ordering the defendants to issue a public apology and compensate her for financial losses.

In another case, the court supported a voice-over artist, ruling that using AI to imitate someone's voice without permission constitutes infringement of voice rights.

"The advancement of AI has facilitated creative production but has also been exploited for infringing activities," said Ma Xiangxiang, a lawyer at the Anjie Broad Law Firm.

She noted that regulators in China have begun addressing the illegal use of such technology, particularly in AI-driven face swapping in short videos.

On Thursday, the performers' committee of the China Federation of Radio and Television Associations issued a statement condemning the unauthorized use of actors' images and voices through AI face swapping, voice cloning and unauthorized editing or remixing.

The committee said any content that can be linked to a specific actor — whether through AI-generated lookalikes, imitated voices, face-swapped dramas, commercial use, virtual replicas or derivative works — carries liability regardless of how it is labeled.

On Sunday, the studio of Yi Yangqianxi said AI-generated dramas using the actor's likeness without permission had been circulating online.

The actor has not appeared in such productions nor authorized any third party to use his image for AI synthesis, the studio said, adding that it had engaged lawyers.

Zhao said pursuing legal remedies is important but noted that it is difficult for ordinary individuals to identify infringement, as AI-generated content often draws on large datasets.

"Producers frequently claim that any resemblance is purely coincidental, making it harder to prove recognizability," he said. "Additionally, the costs of legal action — including evidence collection, notarization and litigation — can be prohibitively high."

He advised individuals who discover unauthorized AI-generated content using their likeness to immediately record or take a screenshot of the material and preserve evidence, preferably through blockchain methods.
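Zhao's advice amounts to a simple workflow: capture the material, compute a tamper-evident fingerprint of the capture, and have that fingerprint timestamped so it can later be notarized. A minimal sketch in Python (the file name and contents here are hypothetical; actual blockchain-based preservation would go through a dedicated notarization service that anchors the hash):

```python
# Hypothetical evidence-fingerprinting sketch: tie a saved screenshot's
# SHA-256 hash to a capture time, so its integrity can later be proved
# (e.g. by anchoring the hash via a blockchain notarization service).
import hashlib
import json
import time
from pathlib import Path

def fingerprint_evidence(path: Path) -> dict:
    """Return a record linking the file's SHA-256 hash to a capture time."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": path.name,
        "sha256": digest,
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

if __name__ == "__main__":
    screenshot = Path("infringing_clip.png")          # hypothetical capture
    screenshot.write_bytes(b"...captured frame data...")  # stand-in content
    record = fingerprint_evidence(screenshot)
    print(json.dumps(record, indent=2))
```

Any later alteration of the screenshot changes its hash, so a timestamped record of the original hash is what gives the capture evidentiary weight.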

He added that filing complaints with hosting platforms is a faster and more affordable way to seek remedies. Ma cited the Civil Code, which requires platforms to take necessary measures — such as removal, blocking or disconnection of links — once notified of infringing content.

Upon receiving such notice, platforms must promptly forward it to the alleged infringer and take appropriate action based on preliminary evidence and the nature of the service, she said.

She also called for a stronger legal framework to further regulate AI applications, thereby ensuring data security, strengthening the protection of personal information and minors, and supporting the healthy development of the digital economy.

[email protected]
