Artificial intelligence (AI)-generated videos showing young girls in revealing clothing or poses have gained millions of likes and shares on TikTok, even though the platform bans such content, according to a report.
The Spanish fact-checking organisation Maldita found more than 20 accounts on the platform that had posted more than 5,200 videos showing young girls wearing bikinis, school uniforms and tight clothing. In total, these accounts have amassed more than 550,000 followers and nearly 6 million likes.
Comments on the videos include links to external platforms, such as Telegram communities that sell child sexual abuse material, according to the analysis. Maldita said it reported the 12 Telegram groups identified in its study to Spanish police.
The accounts also generate revenue by selling AI-generated videos and images through TikTok’s subscription service, in which users pay creators a monthly fee to access their content. TikTok takes about 50 percent of the revenue from this model, according to its agreement with creators.
The report comes as countries around the world, including Australia, Denmark and the European Union, have imposed or are discussing social media restrictions for users under 16 as a way to keep young people safe online.
TikTok requires content creators to indicate when AI was used in the creation of a video. Content may also be removed from the platform if it is deemed “harmful to individuals,” according to its community guidelines.
However, the Maldita report found that most of the videos analysed carried no watermark or other label indicating that AI had been used to create them.
Some videos carried a “TikTok AI Alive” watermark, which is applied automatically when the platform’s built-in tool turns still images into videos.
In a statement to Euronews Next, Telegram and TikTok said they were “fully committed” to preventing child sexual abuse content on their platforms.
Telegram said it analyses all media uploaded to its public platform and compares it against child sexual abuse material already removed from the platform to prevent its spread.
“The fact that criminals have to use private groups and another platform’s algorithms to thrive is proof of the effectiveness of Telegram’s own moderation,” the statement said.
Telegram said it removed more than 909,000 groups and channels containing child sexual abuse material in 2025.
TikTok, for its part, said that 99 percent of content harmful to minors is removed proactively, as is 97 percent of violative AI-generated content.
The platform said it moves immediately to remove content and ban accounts that share sexually explicit material involving children, and reports them to the US National Center for Missing and Exploited Children (NCMEC).
TikTok also said that it removed more than 189 million videos and banned more than 108 million accounts between April and June 2025.
This story has been updated with comments from Telegram and TikTok.