We Don't Have to Live With AI-Generated Porn
It's obscenity all the way down
This post features some discussion of child pornography/child sexual abuse material. If you’d prefer not to read about that, don’t read this.
I had a new piece over at The Dispatch on Monday about the law and ethics of AI-generated child pornography/child sexual abuse material (CSAM). That piece was prompted by a tête-à-tête on X between me and the usual “just asking questions” defenders of the manufacture of artificial CSAM.
I encourage you to read the whole piece, but the thrust is that AI-CSAM currently exists in a legal gray area thanks to a 2002 Supreme Court ruling. In that case, the Court ruled that AI-CSAM was, like most pornographic images, speech; that it was neither real CSAM (which is unprotected speech under Ferber) nor always obscenity (which is also unprotected under Miller); and that consequently, it couldn’t be proscribed simply for being AI-CSAM, but only if a given instance fell into one of those two unprotected categories. I argue that this reflects our cultural inability to think about kinds of speech that are wrong in themselves, and our unwillingness to say that such speech should be proscribable. And I argue that Ashcroft v. Free Speech Coalition should therefore be overturned.
Since that piece came out, Sam Altman has announced that soon, ChatGPT and other OpenAI products will permit “erotica for verified adults.” Obviously, I do not think that OpenAI will start distributing Ashcroft-approved AI-CSAM to anyone who puts in the right query.1 But I do think this opens up a rather larger question about the problem of AI-enabled obscenity and its socially disruptive effects.
After all, my argument in the Dispatch is not actually, to use a technical term, correct. (I would prefer to say that I oversimplified for purposes of getting the point across.) To understand what I mean, you should know that under federal law, there are actually two separate criminal prohibitions that could apply to AI-CSAM while surviving judicial review. One, 18 USC 2252A, prohibits AI-CSAM images that are “morphed,” i.e., that depict real children whose images have been altered to appear lewd, or that are produced from training data containing real CSAM.
The other is 18 USC 1466A, which proscribes the production and distribution of “visual depictions” of children engaged in “sexually explicit conduct” that are also “obscene.” To the extent that there is any AI-CSAM not covered by either of these statutes (read: protected by the First Amendment), that space is defined largely by what, exactly, constitutes an “obscene” depiction of children engaging in “sexually explicit conduct.” That is to say: to the extent that AI-CSAM can’t be proscribed, it’s entirely because juries or courts are unwilling to call it obscene—the extent of obscenity is the extent of the prohibition.
This is, I never tire of observing when writing about this topic, obviously intrinsically absurd. Any depiction of children engaging in sexually explicit conduct is obscene in the conventional sense of the term—how could it not be? Yet several appeals courts have insisted that non-obscene depictions of this kind are possible, and have found instances of them. As I argued in the Dispatch piece, AI-CSAM can be optimized to fit into this niche; generating images just shy of obscenity is a task at which AI will excel.
The reason the courts think non-obscene AI-CSAM exists is that obscenity has a technical legal definition under Miller v. California. For speech to be obscene (and therefore not receive the protection of the First Amendment), it must meet the following three requirements:
the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest;
the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and
the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.
It’s conceivable that AI-CSAM may not be “patently offensive,” or that a jury may be persuaded that it has “serious … value.” That’s how you get to protected AI-CSAM.
If this definition seems extremely narrow, you are correct—it is deliberately so. The Miller standard is maximally speech-protective, both for principled reasons (speech is a right, and rights should be secured) and for contingent ones (much of the fight about obscenity from the ’50s through the ’70s was about overly censorious communities trying to proscribe obviously unobjectionable content). It is not, as I detail in that Dispatch piece and elsewhere on this Substack, consistent with what the First Amendment meant prior to the 1960s and 1970s.
And, as I have argued (again, at the Dispatch), the high standard for obscenity combined with the lax enforcement of obscenity laws (still very much on the books, thank you!) has left the state totally powerless to check the rise of hardcore pornography. As I’ve written previously here at TCF, the unregulated market in porn takes a predictable course, with more and more hardcore content being delivered in more and more direct ways. Much as in the case of incitement, we have removed the government’s long-standing ability to deal with a well-established social problem that happens to involve speech; as a result, that problem has become much more prevalent.
With incitement, of course, the problem is narrowness of definition. That’s a problem with obscenity, too, but it’s not the only one. The deeper problem is one of deference: the Miller Court tried to take away from the local community (embodied in the jury) the ability to judge for itself when something is and is not obscene, and to replace that judgment with a strict standard the jury was to apply algorithmically.
There is, of course, a lot of merit to this approach to the criminal law generally. But in the case of obscenity, it cannot work, because obscenity is not an objective thing. It is, rather, a severe departure from the community’s standards about what ought and ought not to be said—or depicted—about sex. Those norms fluctuate, and have always done so (there’s a lot of very dumb legal writing on this topic, as though observing that norms change over time is some earth-shattering insight). The proscription of obscenity acknowledges, however, that all communities have an interest in regulating sex and the representation thereof—that what happens in the bedroom does not always stay in the bedroom, particularly not in an age when depictions of someone’s bedroom are in everyone’s pocket.
Which brings me back to OpenAI and the dissemination of AI obscenity. Because that’s what Altman is promising, of course: the manufacture of obscene materials at scale. I am sure that his models will be studiously tuned to avoid the legal definition of obscenity.2 (As I noted in the Dispatch, even Pornhub’s terms of service prohibit posting obscenity!) But in the colloquial sense of obscenity, that is what OpenAI will be facilitating.
I think this will be extraordinarily socially deleterious. I think it will turbocharge the constitutionally thorny3 question of revenge porn: if distributing your ex’s dirty photos might be protected speech, morphing him or her into dirty positions definitely is. I think it will further raise the opportunity cost that pornography already imposes on dating and mating. I think it will contribute further to the breakdown of relations between the sexes that the rise of widely available hardcore pornography has obviously abetted.
And most importantly, I think we have the tools we need to stop it. We have the tools we need to stop AI-CSAM, of course: just define it all as obscene, categorically, and move on to the next question. But we have the power, too, to say that it is wrong for OpenAI to serve a smorgasbord of infinitely customizable pornography to anyone who types in the right terms. We can subject that content to our collective judgments, through the power of the law. And OpenAI—a big company that would like to get even bigger—would studiously respect that power, if we chose to exercise it.
But we don’t, so it won’t. Nothing about this is inevitable: we just have to be willing to say that we know it when we see it, and we don’t like what we see one bit.
1. Although someone eventually will.
2. This is easier said than done, imo, and I think GPT permitted to do erotic things will inevitably cross the line.
3. Dirty images of an ex are speech as much as any other porn is; if the content isn’t Miller obscene, and if no misrepresentation is involved to create defamation liability, how can its distribution be prohibited?

