
Meta Struggles to Control Spread of Sexualized AI Deepfake Celebrity Images on Facebook

by Good Morning US Team

Meta Removes Deepfake Images of Celebrities Amid Growing Concerns

In response to a CBS News investigation, Meta has removed more than a dozen fraudulent, sexualized images of prominent female figures, including actors and athletes, that had circulated widely on its Facebook platform. The investigation revealed a concerning prevalence of AI-manipulated deepfake images depicting celebrities.

Targeted Celebrities

The investigation highlighted that numerous fake images portraying notable figures such as Miranda Cosgrove, Jennette McCurdy, Ariana Grande, Scarlett Johansson, and tennis star Maria Sharapova had gained significant traction on Facebook, amassing hundreds of thousands of engagements.

Meta’s Response

Erin Logan, a spokesperson for Meta, stated, “We’ve removed these images for violating our policies and will continue monitoring for other violating posts. This is an industry-wide challenge, and we’re continually working to improve our detection and enforcement technology.” The company acknowledged the growing issue of non-consensual deepfake content and affirmed its commitment to enforcing existing community standards, including banning “derogatory sexualized photoshop or drawings.”

Investigation Findings

An analysis by Reality Defender, a platform designed to identify AI-generated content, indicated that most of the flagged images were deepfakes in which AI-generated, underwear-clad bodies had been inserted into otherwise authentic photographs of the celebrities. A smaller number were created with traditional image-editing techniques.

Ben Colman, co-founder and CEO of Reality Defender, remarked, “Almost all deepfake pornography does not have the consent of the subject being deepfaked. Such content is growing at a dizzying rate, especially as existing measures to stop such content are seldom implemented.”

Ongoing Challenges

Despite Meta’s removal of the flagged images, CBS News found that numerous AI-generated sexualized images of Cosgrove and McCurdy remained accessible on the platform, highlighting gaps in Meta’s content moderation efforts. One particularly troubling deepfake of Cosgrove was still online, shared by an account with 2.8 million followers.

Recommendations from the Oversight Board

Meta’s Oversight Board, which provides non-binding recommendations for content moderation, expressed concern over the sufficiency of current regulations against deepfake pornography. They have advocated for clearer definitions in Meta’s policies to expressly include terms like “non-consensual” regarding manipulated content and proposed that such prohibitions be integrated into the company’s Adult Sexual Exploitation guidelines for stricter enforcement.

Michael McConnell, co-chair of the Oversight Board, stated, “The Board has made clear that non-consensual deepfake intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls. These images are not just a misuse of technology — they are a form of abuse that can have lasting consequences.” The Board is actively monitoring Meta’s actions and is pushing for stronger safeguards and faster enforcement of regulations.

Industry-Wide Implications

Meta is not alone in grappling with the issue of sexualized deepfake content, as other platforms also face similar challenges. For instance, X (formerly Twitter) temporarily disabled searches related to Taylor Swift in response to the circulation of AI-generated explicit content featuring her.

A recent study from the U.K. government predicts that the number of deepfake images on social media will surge to 8 million this year, a significant increase from 500,000 just a year prior.

Conclusion

The evolving landscape of digital media, combined with the advent of AI technologies, continues to pose significant challenges for social media companies in terms of content moderation and policy enforcement. As platforms like Meta work to improve their policies and technology, the industry must address the intricate issues surrounding consent, privacy, and ethics in the age of deepfakes.
