
‘Nudify’ bots that create naked AI images in seconds are rampant on Telegram

Online chatbots are generating nude images of real people at users’ requests, prompting concern from experts who worry the explicit deepfakes will create “a very nightmarish scenario.”

A Wired investigation on the messaging app Telegram unearthed dozens of AI-powered chatbots that allegedly “create explicit photos or videos of people with only a couple clicks,” the outlet reported. Some “remove clothes” from images provided by users, according to Wired, while others say they can manufacture X-rated photos of people engaging in sexual activity.

The outlet identified an estimated 50 such chatbots, which it said attract approximately 4 million users per month. The generative AI bots promised to deliver “anything you want about the face or clothes of the photo you give me,” Wired reported.


“We’re talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content,” deepfake expert Henry Ajder, who was one of the first to discover the underground world of explicit Telegram chatbots four years ago, told Wired.

“It is really concerning that these tools — which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women — are still so easy to access and to find on the surface web, on one of the biggest apps in the world.”

While celebrities — from Taylor Swift to Jenna Ortega — have fallen victim to the rise of pornographic deepfakes, there have also been recent reports of teenage girls’ photos being used to create deepfake nudes, some of which have figured in cases of “sextortion.” A recent survey even revealed that 40% of US students reported the circulation of deepfakes in their schools.

Deepfake sites have flourished amid advancements in AI technology, according to Wired, but have been met with intense scrutiny from lawmakers. In August, the San Francisco city attorney’s office sued more than a dozen “undressing” websites.


On Telegram, bots can be used for translations, games and alerts — or, in this case, creating dangerous deepfakes. When Wired contacted the company about the explicit chatbot content, it did not respond to a request for comment, but the bots and associated channels suddenly disappeared; their creators vowed to “make another bot” the next day.

“These types of fake images can harm a person’s health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame,” Emma Pickering, the head of technology-facilitated abuse and economic empowerment at the UK-based domestic abuse organization Refuge, told Wired.

“While this form of abuse is common, perpetrators are rarely held to account, and we know this type of abuse is becoming increasingly common in intimate partner relationships.”

Elena Michael, the director and co-founder of the advocacy group #NotYourPorn, told Wired that it’s “concerning” how challenging it is “to track and monitor” applications on Telegram that could be promoting this type of explicit imagery.


“Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn’t be on an individual,” she said. “Surely the burden should be on the company to put something in place that’s proactive rather than reactive.”

Nonconsensual deepfake pornography has been banned in multiple states, but experts say Telegram’s terms of service are vague on X-rated content.

“I would say that it’s actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform,” Kate Ruane, the director of the Center for Democracy and Technology’s free expression project, told Wired.

Earlier this year, Telegram CEO Pavel Durov was arrested in France and charged with crimes that included facilitating the spread of child pornography, although he has maintained that “little has changed” in how his app operates and in its privacy policy since his arrest.

In a recent statement, he claimed the platform routinely cooperated with law enforcement when requested to do so, vowing that the company does “not allow criminals to abuse our platform or evade justice.”

“Using laws from the pre-smartphone era to charge a CEO with crimes committed by third parties on the platform he manages is a misguided approach,” Durov wrote in a Telegram post.

“Building technology is hard enough as it is. No innovator will ever build new tools if they know they can be personally held responsible for potential abuse of those tools.”


Experts, however, say Telegram should be held responsible.

“Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder said.

“It provides the bot-hosting functionality, so it’s somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”
