Marge Simpson’s Brief Playboy Career and the Possibilities of A.I. Exploitation

Dustin Waters
Dustin Waters is a writer from Macon, Ga., currently living in D.C. After years as a beat reporter in the Lowcountry, he now focuses his time on historical oddities, trashy movies, and the merits of professional wrestling.

Something odd happened in the fall of 2009 that I don’t think we as a society have reckoned with. In the years since, it’s stuck with me, and I think it has grim implications for the future of technology and human rights. I’ll start with Marge Simpson’s brief appearance as a Playboy cover model. 

In celebration of The Simpsons’ 20th anniversary, Marge appeared in the November 2009 edition of the magazine (mimicking the 1971 pose of Darine Stern, Playboy’s first Black cover model, for some reason). Marge had appeared as a cover model for Maxim in 2004, but there she remained clad in her trademark green dress. Her Playboy appearance was decidedly more risqué.

Sidenote: I ordered copies of both of these magazines on eBay for the purpose of writing this article, so I am definitely on a list.

Now they’re mine forever!

The interview that accompanies Marge’s revealing photos immediately addresses the elephant in the room. Asked why “a nice girl from Springfield” would appear in the pages of Playboy, Marge responds, “A nice girl like me would never display her body if it weren’t to raise money for charity. That’s why I’m donating my hefty fee for this pictorial to SPHG — Saving and Preserving Historic Gazebos.”

So here’s what strikes me as odd. There is nothing wrong with consensually appearing in porn. Sex work is work. But what sticks in my mind is the immediate need to justify this fictional character’s modeling in a see-through nighty.

Writers knew this was uncharacteristic of Marge, and more importantly they knew fans would feel the same way. So they quickly addressed it and moved on. In and of itself, this is not problematic. But it’s where we’re headed next that things start to get troubling. 

Flash forward to 2021. The Simpsons enters its 33rd season. At the end of the year, popular pornographic video website Pornhub publishes its “Year in Review.” That year the site reported that The Simpsons was its most popular search in the field of “TV Show and Cartoon.” 

“There’s little doubt that people still love the Simpsons after more than 30 years, as their popularity remains top among Pornhub’s cartoon and TV related searches,” reads the site’s 2021 review. “It’s followed closely by Teen Titans and Scooby Doo. Squid Game trended in the later part of 2021, but ended the year 13th among TV related searches.”


Of course when considering Marge’s Playboy appearance, Simpsons creators couldn’t have predicted that animation software and online video streaming technology would allow for the creation and sharing of a staggering number of unauthorized videos depicting Marge engaging in full-frontal hardcore intercourse. But here we are. Then there was another noteworthy Simpsons development. 

In 2021, Marcia Wallace once again voiced beloved Simpsons character Edna Krabappel. Sure, Wallace had voiced the character for 25 seasons. The surprise surrounding her return was that Wallace had been dead for years; the one-off appearance was achieved by splicing together existing recordings of the late performer. 

While Wallace’s estate approved this final tribute, NPR took the opportunity to examine whether similar voice roles on the show could be imitated using artificial intelligence. Of course, by that point a YouTuber was already using machine learning to make Homer Simpson and other celebrity voices say whatever he wanted. 

So why am I talking about Simpsons porn and A.I.? Partly because I was raised by perverts and have a good memory. But I also think the aforementioned convergence of technology and the long-running cartoon crystallizes the grim possibilities for humans and A.I. Let me explain. 

In 2017, Samantha Cole published an article on Motherboard detailing the massive presence of deepfaked videos on pornographic websites. In this case, the A.I.-assisted technology utilizes a bevy of reference photos — in most cases of celebrities — to realistically map that person’s visage onto the bodies of adult film stars engaging in sexual acts. As you probably guessed, neither the performers who originally appeared in the films nor the celebrities whose faces were stolen consented to such content. At the time of the article, this software had been made available for open use. 
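The face-swap technique Cole describes is, at a high level, a pipeline: detect and align the target’s face in each frame, render the source face in the same pose with a trained model, then composite the rendered face back into the frame. The model step is far beyond a few lines of code, but the final compositing step can be sketched with plain NumPy. This is a toy illustration, not the actual deepfake software; the array shapes and the `composite_face` helper are my own invention:

```python
import numpy as np

def composite_face(frame, face, mask, top, left):
    """Alpha-blend a rendered source face into a target video frame.

    frame: H x W x 3 target image; face: h x w x 3 swapped-in face;
    mask:  h x w alpha values in [0, 1] (soft edges hide the seam).
    """
    h, w = face.shape[:2]
    out = frame.astype(np.float64)  # astype copies, so frame is untouched
    region = out[top:top + h, left:left + w]
    alpha = mask[..., np.newaxis]  # broadcast the mask over color channels
    out[top:top + h, left:left + w] = alpha * face + (1 - alpha) * region
    return out.astype(frame.dtype)

# Toy demo: paste a bright 2x2 "face" into a dark 4x4 frame.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
face = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.ones((2, 2))
result = composite_face(frame, face, mask, top=1, left=1)
```

Real tools feather the mask and color-correct the pasted patch so the seam disappears, which is exactly why these videos can be so hard to spot.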

The Redditor behind many of these deepfakes told Cole, “Every technology can be used with bad motivations, and it’s impossible to stop that.”

Months later, Pornhub told Mashable that the site would be removing any and all deepfakes, classifying these videos as revenge porn. 

“Users have started to flag content like this, and we are taking it down as soon as we encounter the flags,” Corey Price, Pornhub’s vice president, told Mashable. “We encourage anyone who encounters this issue to visit our content removal page so they can officially make a request.”

Keep in mind that also in 2017 Pornhub announced the launch of an A.I.-powered system with the ability to scan thousands of videos and identify specific pornstars using a library of photos of these performers. At that time, the site mentioned that more than 10,000 videos were uploaded to the site each day. 

“Artificial intelligence has quickly reached a fever pitch, with many companies incorporating its capabilities to considerably expedite antiquated processes. And that’s exactly what we’re doing with the introduction of our A.I. model, which quickly scans videos using computer vision to instantaneously identify pornstars,” said Pornhub in a press release. “Now, users can search for a specific pornstar they have an affinity for and we will be able to retrieve more precise results.”

Damn, that technology sounds like exactly what you would need to identify deepfakes. 
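Pornhub never published implementation details, but performer identification of the sort the press release describes is conventionally done by converting each detected face into a numeric embedding and matching it against a labeled library with a nearest-neighbor search. Here is a minimal sketch of that matching step; the three-dimensional “embeddings” are made up, standing in for the hundreds of dimensions a real face-recognition model would output:

```python
import numpy as np

def identify(query, library, threshold=0.8):
    """Return the library name whose embedding is most similar to the
    query (cosine similarity), or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, emb in library.items():
        score = np.dot(query, emb) / (np.linalg.norm(query) * np.linalg.norm(emb))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Made-up 3-D "embeddings"; a real model outputs hundreds of dimensions.
library = {
    "performer_a": np.array([1.0, 0.0, 0.0]),
    "performer_b": np.array([0.0, 1.0, 0.0]),
}
match = identify(np.array([0.9, 0.1, 0.0]), library)  # -> "performer_a"
```

A system like this only reports a match above a similarity threshold — which is also why, in principle, the same machinery could flag a celebrity’s face appearing on a body it doesn’t belong to.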

Following up on Pornhub’s announcement banning deepfakes from the site, Motherboard’s Cole pointed out in a February 2018 article, “However, despite the company’s statement, Pornhub is still loaded with deepfakes. I was able to easily find dozens of deepfakes posted in the last few days, many under the search term ‘deepfakes’ or with deepfakes and the name of celebrities in the title of the video.”

In February 2021, Pornhub announced that it had hired an outside firm to conduct an independent review regarding “content compliance function with a focus on eliminating all non-consensual content, child sexual abuse material, and any other content uploaded without the meaningful consent of all parties” on the site. 

But the issues with A.I., sexual exploitation, and consent go far beyond a single website. As technology has advanced, we’ve found ourselves living more of our lives online. How many of us have posted enough selfies, personal videos, and recordings to train an A.I. to perfectly duplicate our image? This problem has only been exacerbated during the COVID-19 outbreak — as was pointed out by Assistant Professor Suzie Dunn, an associate member of the University of Ottawa Centre for Law, Technology, and Society, during a 2020 virtual presentation on identity manipulation and A.I.

“Right now during COVID, a lot of us are being put online in ways that maybe we might not have chosen to be online. Even this presentation, I think had we not been in this global pandemic situation, I might have been in a classroom with 30 people all in person. Maybe it wouldn’t be recorded,” said Dunn in a video presentation you can view in its entirety online. “Now this content is created about me and these images and my vocal intonations can be used in A.I. products to create replications of me. So there are new concerns that we need to be thinking about around identity, particularly socially and in the law.”

According to Dunn, an examination conducted by DeepTrace Labs identified 15,000 deepfake videos circulating online. Of these, 96 percent were sexual deepfakes made of women without the consent of either the performer in the original adult film or the woman whose likeness was stolen and applied to the video. 

Clearly this cutting-edge technology is predominantly being used to victimize women. A certain app that need not be named is capable of taking photos of clothed women and rendering an approximation of them in the nude. According to Dunn, this particular A.I. software doesn’t work on photos of men. Then there is the use of facial recognition software to dox sex workers, most often targeting women. 

But it doesn’t stop with just audio and video content. One small industry that’s cropped up is the production of anatomically correct replica robots that can be — ahem — used for recreational purposes. Notably, Stormy Daniels licensed her likeness rights to a manufacturer for such a use. 

According to Dunn, the companies producing dolls and robots maintain strict policies about who they will replicate, generally requiring consent from the model. But for private hobbyists crafting their own robotic replicas, those regulations don’t exist. Take for example the hobbyist who developed a life-sized anatomically correct robot replica of Scarlett Johansson and taught it to love him. 


For Dunn, the time is now to consider how to shape laws in this new area of technological advancement. We need to get ahead of this if there is any hope of protecting our identities, our dignity, and our autonomy. 

“Now that we have this technology, do we want to be thinking about whether people can recreate us after our death?” said Dunn during her 2020 presentation. “We’ve seen A.I. technology where you can mimic people’s texting. You can upload someone’s texting into a program, and it will be as though they are texting you. There are examples of this. It is happening already.”

The legal and moral issues surrounding the digital resurrection of celebrities have been a hot topic in recent years. Deceased actors such as Carrie Fisher, Peter Cushing, and Paul Walker have all been brought back to the big screen using digital technology. As we see more advancements, it’s only a matter of time before this becomes a widely debated issue. Luckily we have Robin Williams to look to as a useful example. 

Disney went against Williams’ wishes when advertising 1992’s Aladdin, using his voice to hawk merchandise. Williams had been very clear about his position on the matter while in negotiations for the film. 

“Then all of a sudden, they release an advertisement — one part was the movie, the second part was where they used the movie to sell stuff,” Williams said on the Today Show in 1993. “Not only did they use my voice, they took a character I did and overdubbed it to sell stuff. That was the one thing I said: ‘I don’t do that.’ That was the one thing where they crossed the line.”

Williams and the House of Mouse eventually patched things up, but the actor was sure to protect his image when devising his will, which prohibited anyone from using his name, taped performances, or voice recordings for a period following his death. According to an article in the Texas Tech University School of Law Estate Planning and Community Property Law Journal written by Ben Laney, Williams achieved this by having a trust transfer his post-mortem publicity rights to a nonprofit corporation with specific instructions to the executives not to use those rights. 

“Even if the client’s heirs and descendants wanted to exploit the client’s persona rights, the rights would be safely out of reach,” Laney writes. “If these rights were used without permission, the executives of the corporation would have the standing to sue the individual or entity infringing on the rights.”

But according to Laney, those who died before the advent of this technology capable of virtually resurrecting the dead have little protection under current laws. And the same could likely be said for anyone without the resources or forethought to tie up these loose ends in their will. 

“In other words, virtually every actor or actress dead prior to, generously speaking, 1995 almost certainly do not have any plan for how their estate should handle digital resurrection.”

Ben Laney

So now we know that technology has outpaced the law as it relates to how our images, voices, and entire identities can be used. It makes sense that technology will outpace us as well. 

In the chapter titled “Artificial Intelligence: When Humans and Machines Might Have to Coexist” in the book A.I. for Everyone, Professor Andreas Kaplan writes that we are currently only experiencing the first generation of A.I. applications. 

“Within such systems, A.I. is only applied to very specific tasks such as choosing which news items it will tell an individual during his or her morning before-work routine based on the individual’s intellectual preferences,” Kaplan writes. “Second-generation A.I. applications will be able to plan, solve, and reason problems independently, even for actions for which they have not been programmed initially.”

Building on the morning routine metaphor, Kaplan explains that in addition to personalizing your morning newsfeed, this second-generation A.I. would also learn to make you a cup of coffee as you prepare for work. And if an A.I. can predict your need for a morning caffeine boost, what might it do to meet your more troubling desires? Especially since — as Kaplan puts it — “humans are better at behaving ethically and morally, while algorithms have problems doing so, as the notion of ethics and morals is difficult to program.”

And that’s considering that we’re already using this technology to imitate and exploit others. A.I. is learning to look perfectly human. It’s learning to sound perfectly human. What happens when A.I. learns to adopt all our imperfections? You see what we did with sweet Mrs. Simpson. 
