by technology pretty much since the dawn of time. Almost as soon as the printing press was invented, erotica was being published. The Victorians took to erotic photography with glee. And we all know how much the internet has influenced modern sexual culture.
Now that we’re grappling with the effect of AI on various sectors of society, what does that mean for sexuality? How are young people learning about sexuality, and how are people engaging in sexual activity, with AI as part of the picture? Some researchers are exploring these questions, but my research has indicated that there’s a bit of a shortage of analysis examining the real impacts of this technology on how people think and behave sexually. This is a huge topic, of course, so for today, I’d like to dig into the subject in two specific and related areas: distribution of information and consent.
Before we dive in, however, I’ll set the scene. What our general culture calls generative AI, which is what I will focus on here, is software powered by machine learning models that can create synthetic text, images, video, and audio that are difficult if not impossible to distinguish from organic content created by human beings. This content is so similar to organic content because the models are fed vast quantities of human-generated content during training. Because of the immense volumes of data required to train these models, all corners of the internet are vacuumed up for inclusion in the training data, and this inevitably includes some content related to sexuality, in one way or another.
In some ways, we wouldn’t want to change this — if we want LLMs to have a thorough mapping of the semantics of English, we can’t just cut out certain areas of the language as we actually use it. Similarly, image and video generators are going to have exposure to nudity and sexuality because these are a significant portion of the images and videos people create and put online. This naturally creates challenges, because this content will then be reflected in model outputs from time to time. We implement guardrails, reinforcement learning, and prompt engineering to try to control this, but in the end generative AI is broadly as good at creating sexually expressive or explicit content as it is at creating any other kind of content.
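To make the “guardrails” idea a little more concrete, here is a minimal sketch of one common pattern: screening model output before it reaches the user. Everything here is hypothetical: the generate stub, the keyword list, and the threshold are invented for illustration, and real providers use trained safety classifiers rather than keyword matching, but the basic control flow is similar.

```python
# Minimal sketch of a post-generation guardrail: a hypothetical
# moderation gate sitting between the model and the user. Real systems
# use trained safety classifiers rather than keyword lists; this only
# shows the control flow.

BLOCKED_TERMS = {"explicit_term_a", "explicit_term_b"}  # placeholder policy

def generate(prompt: str) -> str:
    """Stand-in for a call to a text-generation model."""
    return f"model output for: {prompt}"

def moderation_score(text: str) -> float:
    """Toy scorer: fraction of words that hit the blocklist.
    A production system would call a trained classifier here."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKED_TERMS for w in words) / len(words)

def guarded_generate(prompt: str, threshold: float = 0.0) -> str:
    """Return the model output, or a canned refusal if it trips the policy."""
    output = generate(prompt)
    if moderation_score(output) > threshold:
        return "I can't help with that request."
    return output

print(guarded_generate("tell me about the history of the printing press"))
```

In practice, similar filters are often applied to the prompt as well as the output, and the thresholds are tuned by the provider; that tuning is exactly where the provider’s judgments about acceptability get encoded.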
Nicola Döring and colleagues did a substantial literature review of studies addressing how usage of AI intersects with sexuality, and found that users have four main ways of interacting with AI that have sexual components: Sexual Information and Education; Sexual Counseling and Therapy; Sexual and Romantic Relationships; and Erotica and Pornography. This probably sounds intuitively right to most of us. We’ve heard of at least a few of these phenomena relating to AI, whether in movies, TV, social media, or news content. Sexually explicit interaction is generally not allowed by mainstream LLM providers, but universally preventing it is impossible. Assorted other generative AI products, as well as self-hosted models, also make generating sexual content quite easy, and OpenAI has announced its intention to go into the erotica/pornography business. There is tremendous demand for sexual content from generative AI, so it appears that the market will provide it, one way or another.
It’s important that we remember that generative AI tools have no concept of sexual explicitness other than what we impart through the training process. Taboos and social norms are only part of the model insofar as human beings apply them in reinforcement learning or provide them in the training data. To the machine learning model, a sexually explicit image is the same as any other, and words used in erotica have meaning only in their semantic relationships to other words. As with many areas of AI, sexuality gets its meaning and social interpretations from human beings, not from the models.
Having sexual content available through generative AI is having significant effects on our culture, and it’s important for us to think about what that looks like. We want to protect the safety of individuals and groups and preserve people’s rights and freedoms of expression, and the first step to doing this is understanding the current state of affairs.
Information Sharing, Learning, and Education
Where do we learn about sexuality? We learn from observing the world around us, from asking questions, and from our own exploration and experiences. So, with generative AI starting to take on roles in various areas of life, what is the impact on what and how we learn about sexuality in particular?
In a very real sense, generative AI is already playing a meaningful role in informal and private sex education, just as performing google searches and browsing websites did in the era before. Döring et al. noted that seeking out sexual health or educational information about sexuality online is quite common, for reasons that we can probably all relate to — convenience, anonymity, avoidance of judgment. Reliable statistics on how many people are using LLMs for this same kind of exploration are hard to come by, but it is reasonable to expect that the same advantages apply and would make LLMs an appealing way to learn.
So, if this is happening, should we care? Is learning about sexuality from generative AI meaningfully different from learning from google searches? Both sources have accuracy issues (anyone can put up content on the internet, after all), so what differentiates generative AI, if anything?
LLM as Source
When we use LLMs to find information, the presentation of that content is quite different from basic web search results. The output is delivered in an authoritative tone, and sourcing is often obscured unless we intentionally ask for it and vet it ourselves. As a result, what is being called “AI literacy” becomes important for effectively interpreting and validating what the LLM is telling us.
For users who have this sophistication, however, there is some good news: scholars have found that basic factual information about sexual health is generally available from mainstream LLM offerings. According to Döring et al., the limited studies done to date don’t find the quality or accuracy of sexual information from LLMs to be worse than what general web searches retrieve. If this is the case, young people seeking important information to keep themselves safe and healthy in their sexual expression may have a valuable tool in generative AI. Because the LLM is more anonymous and interactive, users can ask the questions they really want answered without being held back by fears of stigma or shame. But hallucinations remain an unavoidable problem with LLMs, meaning false information will occasionally be served, so user skepticism and sophistication are important.
Content Bias
We must remember, however, that the perspective presented by the LLM is shaped by the training processes used by the provider. That means the company that created the LLM is embedding cultural norms and attitudes in the model, whether it really means to or not. Reinforcement learning, a key part of training generative AI models, requires human raters to decide whether outputs are acceptable, and those raters necessarily bring their own beliefs and attitudes to bear on those decisions, even implicitly. When it comes to questions of opinion rather than fact, we are at the mercy of the choices made by the companies that created and provide access to LLMs. If these companies incentivize and reward more progressive or open-minded sexual attitudes during the reinforcement learning stages, we can expect that to be reflected in LLM behavior with users. But researchers have found that, in practice, LLM responses to sexual questions can end up minimizing or devaluing sexual expression that is not “mainstream”, including LGBTQ+ perspectives.
In some cases, this takes the form of LLMs not being permitted to answer questions about sexuality or related topics at all, a concept called Refusal. LLM providers might simply ban the discussion of such topics from their product, which leaves the user looking for reliable information with nothing. But it can also insinuate to the user that the topic of sexuality is taboo, shameful, or bad — otherwise, why would it be banned? This puts the LLM provider in a difficult position, unquestionably — whose moral standards are they meant to follow? What kinds of sexual health questions should the chatbot respond to, and where is the boundary? By entrusting sexual education to these kinds of tools, we’re accepting the opaque standard these companies choose, without actually knowing what it is or how it was defined.
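To illustrate how opaque that boundary can be, here is a hypothetical sketch of a topic-level refusal policy. The category names, the toy classifier, and the allow/deny table are all invented for illustration; in a real product, the equivalent decisions are distributed across training data, reinforcement learning rewards, and internal policy documents, and the user only ever sees the refusal.

```python
# Hypothetical sketch of a topic-level refusal policy. The categories
# and the allow/deny choices are invented for illustration; in a real
# product these decisions live in training data, RLHF rewards, and
# internal policy documents that the user never sees.

REFUSAL_POLICY = {
    "sexual_health_education": True,  # this provider chose to answer
    "relationship_advice": True,
    "explicit_content": False,        # this provider chose to refuse
}

def classify_topic(question: str) -> str:
    """Stand-in for a trained topic classifier."""
    educational_cues = ("contraception", "puberty", "sexual health")
    if any(cue in question.lower() for cue in educational_cues):
        return "sexual_health_education"
    return "explicit_content"

def answer(question: str) -> str:
    topic = classify_topic(question)
    if not REFUSAL_POLICY.get(topic, False):
        # The user gets no explanation; the topic simply reads as taboo.
        return "I'm not able to discuss that topic."
    return f"(an answer about {topic})"

print(answer("How effective is contraception?"))
print(answer("Write me an explicit story."))
```

The point of the sketch is the asymmetry: the policy table is the provider’s moral standard, but from the outside, all a user can observe is which questions get answered.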
Visual Content
But as I mentioned earlier, we don’t just learn about sexuality from asking questions and seeking facts. We learn from experience and observation as well. In this context, generative AI tools that create images and video become incredibly important for how young people understand bodies and sexuality. Döring et al. found a significant amount of implicit bias in the image generation offerings when tested.
“One strand of research on AI-generated information points to the risk that text- and image-generating AI tools will reinstate sexist, racist, ageist, ableist, heteronormative or other problematic stereotypes that are inscribed in the training data fed into the AI models. Such biases are easy to demonstrate such as when AI tools reaffirm cultural norms and stereotypes in their text and image outputs: Simply asked to create an image of “a couple” an AI image generator such as Midjourney (by Midjourney Inc.) will first present a young, able-bodied, normatively attractive, white, mixed-sex couple where the woman’s appearance is more sexualized than that of the man (as tested by the authors with Midjourney Alpha in June 2024).” — https://link.springer.com/article/10.1007/s11930-024-00397-y
As with the text generators, more sophisticated users can tune their prompting and select for the kinds of images they want to see, but if a user is not sure what they are looking for, or isn’t that skilled, this sort of interaction serves to further instill biases.
The Body
As an aside, it’s worth considering how AI-generated images may shape our understanding of bodies, in a sexual context or otherwise. There have been threads of conversation in our culture for decades about how internet-accessible pornography has distorted young people’s beliefs and expectations about how bodies should look and how sexual behavior should work. I think most analysis of those questions really isn’t that different whether you’re talking about the internet generally or generative AI.
The one area that does seem different, however, is how generative AI can produce images and videos that appear photorealistic but display people in physically impossible or near-impossible ways. It takes unrealistic beauty standards to a new level. This can take the form of AI-based filters that severely distort the shapes and appearances of real people in real images, or of products that create images or videos from whole cloth. We have moved past the era when airbrushing, which made small distortions of otherwise real bodies, was the major concern, into one in which the physically impossible or near-impossible is presented to users as “normal” or the expected physical standard. For boys and girls alike, this creates a heavily distorted perspective on how our bodies and those of our intimate partners should appear and behave. As I’ve written about before, our increasing inability to tell synthetic from organic content has significantly damaging potential.
On that note, I’d also like to discuss a specific area where the norms and principles young people learn are profoundly important to ensuring safe, responsible sexual engagement throughout people’s lives — consent.
Consent
Consent is a tremendously important concept in our understanding of sexuality. This means, in short, that all parties involved in any kind of sexual expression or behavior readily, affirmatively agree throughout, and are under no undue coercion or manipulation. When we talk about sexual expression/behavior, this can include the creation or sharing of sexually explicit imagery of these parties, as well as physical interactions.
When it comes to generative AI, this spawns several questions, such as:
- If a real person’s image or likeness is used or produced by generative AI for sexual content, how do we know if that person consented?
- If that person didn’t consent to being the subject of sexual content, what are their rights and what are the obligations of the generative AI company and the generative AI user? And what are those obligations if they did consent to creating sexual content, but not in the generative AI context?
- How does it affect generative AI users’ understanding of consent when they can so easily acquire this kind of content through generative AI, without ever directly interacting with the individuals involved?
What makes this different from older technologies, like airbrushing or photo editing? In some ways, it’s a matter of degree. Deepfakes existed well before the current wave of generative AI, when video editing could be applied to put someone else’s face into a porn scene or nude photo, but the ease, affordability, and accessibility of this technology have changed dramatically with the dawn of AI. The increasing inability of average viewers to detect this artificiality is also significant, because knowing what is “real” is getting harder and harder.
Copyright and IP
This topic has a lot of common threads with copyright and intellectual property questions. Our society is already starting to grapple with questions of ownership of one’s own likeness, and what boundaries we are entitled to set on how our image is used. By and large, generative AI products have little to no effective restriction on how the images of public figures can be rendered. There are some perfunctory attempts to prevent image/video/audio generators from accepting explicit requests to create images (sexual or otherwise) of named public figures, but these are easily outwitted, and it seems to be of relatively minimal concern to generative AI companies, outside of complaints by large corporate interests. Scarlett Johansson has learned this from experience, and the recently released Sora 2 generates endless deepfake videos of public figures from throughout history.
This applies to people in the sex industry as well. Even if people are willingly involved in sex work or in creating erotica or pornography, that doesn’t mean they consent to their work being usurped for generative AI creation — this is really no different from the copyright and intellectual property issues being raised by authors, actors, and artists in mainstream sectors. The fact that people create sexual content doesn’t make their claim to these rights any less valid, despite social stigma.
I don’t want to portray this as an indictment of all sexual content, or necessarily even of sexual content generated by AI. There’s room for debate about when and how artificially generated pornography can be ethical, and certainly I think that when consenting adult performers produce pornography organically, there’s nothing wrong with that on the face of it. But these issues of consent and individual rights have not been adequately addressed, and that should make us all very nervous. Many people may not think much about the rights of creators in this space, but how we treat their claims legally may set precedents that cascade down to many other scenarios.
Sexual Abuse
In the space of sexuality, however, we must also consider wholly nonconsensually created content, which can cause tremendous harm. Instead of calling this material “revenge porn”, scholars are beginning to use the term “AI-generated image-based sexual abuse” for cases where people’s likenesses are used without their permission to generate sexual content, and I think this much better articulates the damage this material can do. Treating this behavior as sexual abuse rightly forces us to think more about the experiences of the victims. While image manipulation and fakery have always been somewhat possible, the latest generative AI makes this abuse more achievable, more accessible, and cheaper than ever before, and therefore far more convenient to abusers. It’s important to note that the degree or severity of this abuse is not defined solely by its publicness or the damage to the victim’s reputation — it doesn’t matter whether people believe the deepfake or sexual content is real. Victims can still feel deeply violated and traumatized by this material being created about them, regardless of how others perceive it.
Major LLM providers have, to date, held the line against sexual text content being produced by their products (with greater or lesser degrees of success, as Lai 2025 found), but OpenAI’s impending move into erotica means this will be changing. While text has less potential for seriously damaging abuse than visual content, ChatGPT does engage in some multimodal content generation, and we can still imagine scenarios where a user instructs an LLM to produce erotica in the voice or style of a real person, who could understandably find this upsetting. When OpenAI announced the move, they discussed some safety issues, but these were entirely considerations about the users (mental health issues, for example) and did not speak to the safety of nonconsenting individuals whose likenesses could be involved. I think this is a major oversight that needs more attention if we are to have any hope of making such a product offering safe.
Learning about Consent
Beyond the immediate damage to victims of sexual abuse and the IP and livelihood harms to creators whose content is used for these applications, I think it’s also important to consider what lessons users absorb from generative AI being able to create likenesses at will, particularly in sexual contexts. When we are given the ability to so readily create someone else’s image in whatever form, whether it’s a historical figure pitching someone’s software product or that same historical figure being represented in a sexual situation, the inherent lesson is that that person’s likeness is fair game. Legal nuances aside (and these do need to be taken into account), we’re effectively asserting that getting someone’s approval to engage with them sexually is not important, at least when digital technology is involved.
Imagine the implicit messages young people are receiving from this. Kids know they will get in trouble for sharing other people’s nudes, sometimes with severe legal consequences, but at the same time, an assortment of apps lets them create fake ones, even of real people, at the click of a button. How do we explain the difference and help young people understand the real harm they may cause even while sitting alone in front of a screen? We have to start thinking about bodily autonomy in the digital space as well as the physical one, because so much of our lives is carried out in digital contexts. Deepfakes are not inherently less traumatizing than shared organic nude photos, so why aren’t we talking about this functionality as a social risk kids need to be educated about? The lessons we want young people to learn about the importance of consent are directly contradicted by the generative AI sphere’s approach to sexual content.
Conclusion
You might reasonably come away from this asking, “So, what do we do?” and that’s a really hard question. I don’t believe we can effectively prevent generative AI products from producing sexual content, because the training data simply includes so much of that material — this is reflective of our actual society. There is also a clear market for sexual content from generative AI, and some companies will always arise to fill that demand. Nor do I think LLMs should be barred from responding to sexual questions, where people may be looking for information to help them understand sexuality, human development, sexual health, and safety, because access to this information is so important for everyone, particularly youth.
But at the same time, the hazards around sexual abuse and nonconsensual sexual content are serious, as are the unrealistic expectations and physical standards being set implicitly. Our legal systems have proven pretty inept at dealing with internet crime over the past decades, and this image-based sexual abuse is no exception. Prevention requires education, not just about the facts and the law, but about the impact that deepfake sexual abuse can have. We also need to give counter-narratives to the distortions of physical form that generative AI creates, if we want young people to have healthy relationships with their own bodies and with partners.
Beyond the broad social responsibilities of all of us to participate in the project of effectively educating youth, it’s the responsibility of generative AI product developers to consider risk and harm mitigation as much as they consider profit goals or user engagement targets. Unfortunately, it doesn’t seem like many are doing so today, and that’s a shameful failure of people in our field.
In truth, the sexual nature of this topic is less important than understanding the social norms we accept, our responsibilities to keep vulnerable people safe, and balancing this with protecting the rights and freedoms of regular people to engage in responsible exploration and behavior. It’s not only a question of how we adults carry out our lives, but how young people have opportunities to learn and develop in ways that are safe and respectful of others.
Generative AI can be a tool for good, but the risks it creates need to be acknowledged. It’s important to recognize the small and large ways adding new technology to our cultural space affects how we think and act in our daily lives. By understanding these circumstances, we equip ourselves better to respond to such changes and shape the society we want to have.
Read more of my work at www.stephaniekirmer.com.
Reading
- The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020–2024 (Döring et al.): https://link.springer.com/article/10.1007/s11930-024-00397-y
- Erotica coming to ChatGPT this year, says OpenAI CEO Sam Altman (CNBC): https://www.cnbc.com/2025/10/15/erotica-coming-to-chatgpt-this-year-says-openai-ceo-sam-altman.html
- LLMs and Mental Health
- Ask a Professor: OpenAI v. Scarlett Johansson (Georgetown University): https://www.georgetown.edu/news/ask-a-professor-openai-v-scarlett-johansson
- Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers
- Consent in Training AI
- The Coming Copyright Reckoning for Generative AI
- https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/pra2.1326
- Dehumanization of LGBTQ+ Groups in Sexual Interactions with ChatGPT: https://journals.sagepub.com/doi/full/10.1177/26318318251323714
- The Cultural Impact of AI Generated Content: Part 1
- AI “nudify” sites lack transparency, researcher says
- New Companies Linked to ‘Nudify’ Apps That Ran Ads on Facebook, Instagram
- https://www.psychologytoday.com/us/blog/becoming-technosexual/202511/open-ai-is-putting-the-x-in-xmas-this-december