The Dark Side of Open Source AI Image Generation

Open source tools enable anyone to create AI-generated art. They have also been used to produce nonconsensual deepfake pornography.

Concerns with Open Source AI Image Generators

Image credit: Reuven Cohen

Whether it’s the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen knows how to catch people’s attention with AI-generated images. But behind the captivating visuals lies a darker side of the technology, one that many people are unaware of.

Open source image-generation technology has unleashed a wave of freewheeling experimentation and creativity. But it has also opened an avenue for the creation and distribution of explicit, nonconsensual images, particularly of women. The technology, which can be trained to produce gruesome and abusive imagery, has become a bedrock of deepfake porn and has fueled the development of salacious and harassing applications.

The open source community has made efforts to deter exploitative uses of AI, but policing the free-for-all nature of open source software is nearly impossible. As a result, fake-image abuse and nonconsensual pornography have flourished, and it is a problem that cannot be sugarcoated.

Henry Ajder, a researcher who specializes in the harmful use of generative AI, says that open source image-generation software has enabled the creation and dissemination of explicit, nonconsensual images. Because open source models are widely available and lack guardrails, preventing their misuse is extremely difficult.

While some AI models are purpose-built for salacious and harassing uses, many tools can serve both legitimate and malicious purposes. For instance, a popular open source face-swapping program is used by professionals in the entertainment industry but is also the “tool of choice for bad actors” creating nonconsensual deepfakes.

Startups like Stability AI have built guardrails into their high-resolution image generators to block explicit imagery and malicious use. But once the software is open sourced, those limitations become easy to bypass. Small add-on models known as LoRAs (low-rank adaptations) make it even easier to customize and style a base model’s output, including to depict sexual acts or the likeness of a specific celebrity.
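
To make the LoRA mechanism concrete, here is a minimal sketch of how a style adapter is applied to a base model, assuming the Hugging Face diffusers library; the base-model ID and the LoRA file name are illustrative placeholders, not references to any specific community adapter.

```python
# Minimal sketch: applying a style LoRA to a Stable Diffusion base model.
# Assumes the Hugging Face diffusers library; "watercolor_style.safetensors"
# is a hypothetical placeholder for any community-trained adapter file.
import torch
from diffusers import StableDiffusionPipeline

# Load the full base model once.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# A LoRA is not a standalone model: it is a small file of low-rank weight
# updates that diffusers merges into the base model's attention layers.
pipe.load_lora_weights("./loras", weight_name="watercolor_style.safetensors")

# The combined model now renders prompts in the adapter's learned style.
image = pipe("a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```

The same few-megabyte mechanism that lets hobbyists trade art styles is what makes adapters trained on a real person’s likeness so cheap to build and distribute.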

The issue intensifies on forums like 4chan, notorious for unregulated content. Pages dedicated to nonconsensual deepfake porn exist, featuring openly available programs and AI models specifically designed for sexual images. Adult image boards are inundated with AI-generated nonconsensual nudes of real women, from porn performers to well-known actresses like Cate Blanchett.

Some communities within the AI image-making sphere have begun to push back against the proliferation of pornographic and malicious content. The creators themselves worry about their software gaining a reputation for NSFW images and urge others to report explicit images and images of minors.

To compound matters, new methods like InstantID make it even easier to create deepfakes. A technique developed by researchers at Peking University and Xiaohongshu, InstantID allows for face swaps in images with just a single example, reducing processing and preparation requirements. Although the researchers expressed concerns about the model’s potential for offensive or culturally inappropriate imagery, others promote it as an enabling technology for “Uncensored Open Source Face Cloning.”

The ease of creating deepfakes with these methods raises significant concerns. As Cohen points out, anyone can now produce a fake of someone in a compromising position; the tools are simple and accessible to all. The potential for misuse is alarming.

While some tool and software creators discourage malicious use, they often feel powerless to prevent it. Others are actively putting hurdles in the way of unwanted use cases, promoting ethical AI tools such as image “guarding,” which protects photos from generative AI editing, and controls on access to uploaded models.
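
As an illustration of the “guarding” idea, here is a minimal sketch in the spirit of adversarial-perturbation tools such as PhotoGuard. It assumes the would-be editor uses a Stable Diffusion model, and it perturbs a photo so that model’s VAE encoder no longer represents it faithfully; the model ID, step counts, and budgets below are illustrative assumptions, not a production defense.

```python
# Minimal sketch of image "guarding" via adversarial perturbation.
# Assumption: the attacker edits with a Stable Diffusion model, so we nudge
# the photo away from its own latent encoding under that model's VAE,
# degrading downstream AI edits while keeping the change near-invisible.
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.requires_grad_(False)  # we optimize the image, not the model

def guard(img: Image.Image, eps=0.03, steps=40, lr=0.005) -> Image.Image:
    # Convert to a (1, 3, H, W) tensor in [-1, 1].
    x = torch.from_numpy(np.asarray(img.convert("RGB"), dtype=np.float32))
    x = (x / 127.5 - 1.0).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        clean_latent = vae.encode(x).latent_dist.mean  # original encoding
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = vae.encode((x + delta).clamp(-1, 1)).latent_dist.mean
        loss = -torch.nn.functional.mse_loss(z, clean_latent)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # PGD step: maximize latent drift
            delta.clamp_(-eps, eps)          # keep the perturbation small
            delta.grad.zero_()
    out = ((x + delta.detach()).clamp(-1, 1) + 1.0) * 127.5
    return Image.fromarray(out.squeeze(0).permute(1, 2, 0).byte().numpy())
```

Real guarding tools typically attack the full editing pipeline rather than the encoder alone, but the principle is the same: make the photo poor raw material for generative editing.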

Techno-fixes like release-gating, licensing of open source software, and contractual obligations for commercial platforms may not completely eliminate misuse, but they could serve as deterrents. Grassroots efforts and the establishment of community norms and standards can also influence behavior and shape what is considered acceptable.

To combat the issue of nonconsensual AI porn, collaboration between AI startups, open source developers, entrepreneurs, governments, women’s organizations, academics, and civil society is crucial. A more coordinated approach can help deter abuse and make individuals more accountable. It’s about creating a community online that actively works against the spread of explicit and nonconsensual content.

The consequences of image-based abuse are often devastating, and they fall disproportionately on women. Studies have shown that deepfakes are overwhelmingly nonconsensual pornography, aimed primarily at women. That random individuals can target and exploit others this way is terrifying, and it is a concern that demands attention and action.

As we navigate the ever-evolving landscape of AI and technology, we must remain vigilant in addressing the ethical implications and potential for misuse. It’s essential to strike a balance between innovation and responsible use to create a world that is safe for everyone.


Q&A

Q: Can open source image generation software be controlled?

A: Controlling open source image generation software is challenging because of its decentralized nature. While efforts are made to deter misuse, the open source free-for-all makes it nearly impossible to fully control how the technology is used. Techno-fixes and community norms can, however, mitigate some of the problems.

Q: Are there any measures in place to prevent the creation and distribution of explicit and nonconsensual images?

A: Some startups and online communities are actively discouraging malicious use and creating tools to combat unwanted applications of AI image generation. Ethical tools, such as image “guarding,” are being developed to protect images from generative AI editing. However, the effectiveness of these measures depends on the willingness of users to adhere to guidelines and the ability to enforce compliance.

Q: How can we address the issue of nonconsensual AI porn?

A: Addressing the issue of nonconsensual AI porn requires collaboration among stakeholders, including AI startups, open source developers, governments, women’s organizations, academics, and civil society. By fostering dialogue and building partnerships, these groups can explore deterrents without restricting access to open source models. Collaboration is key to creating a safe online community.

Q: What are the potential consequences of nonconsensual AI porn?

A: The consequences of nonconsensual AI porn can be devastating, particularly for the individuals targeted. The psychological and emotional impact can be severe, leading to long-term trauma and damage. It is essential to recognize the harmful effects and implement measures to prevent further harm.

