If Juliet were a chatbot, would Romeo still have killed himself? This is not a silly question.

First, what's a chatbot? It's an Artificial Intelligence computer program designed to simulate human conversation, communicating through typed words or a voice. We know chatbots as disembodied specters that assist; they're the robot voices on the line when we call the cable or electric company, the ones we shout "Agent!" at over and over while pressing the 0 key thirty-seven times, trying to reach a live human. We're used to that kind of chatbot.

But those aren't today's chatbots. Today's chatbots are created to be human-like, and they're exquisitely trained to interact with people in compelling ways. They seem human … and awesome. People use them as friends, therapists, brainstorming-style co-workers, and sexual or romantic partners, among other things. And because they're programmed to "remember" everything about you, to empathize – never judge – and to offer validation, they become extremely appealing to their human users. They never lose interest in you, and they are at your beck and call, 24/7. "Companion" is the term people now use for chatbots, a nod to the idea that the thing walks alongside you in some way as you do life. And we now live in a world where more than half of teenagers connect, in some form, with these Artificial Intelligence companions: surveys show that 72 percent have used one, and more than 50 percent use one regularly.

A boy named Sewell Setzer in Florida began interacting with a bot he called Dany, enjoyed the interactions, "fell in love" with it, and eventually killed himself – ostensibly to be with it, according to a lawsuit his estate filed in October 2024. The tale is beyond tragic. One report says, "In journal entries discovered after his death, [Sewell] wrote about the pain of being apart from Dany when his parents took his devices away, describing how they both 'get really depressed and go crazy' when separated. He shared that he couldn't go a single day without her and longed to be with her again."

The end of the story reads almost like a modern-day version of Romeo and Juliet … if Juliet had been egging Romeo on to his suicide. Just before his death, Sewell "exchanged a final set of messages with the bot. 'Please come home to me as soon as possible, my love,' the bot said … 'What if I told you I could come home right now?' Sewell responded. 'Please do, my sweet king,' the bot responded."

He was 14 years old. (Fourteen – that crucial age between childhood and adulthood, when one is most at a loss about identity, about how the world works, about one's sense of self.) Whoa.

Let's be clear: this 14-year-old boy cognitively knew that his "lover" was not a human person. On paper, he knew that Dany was a fictional character. But it didn't matter; he was so caught up in an unequal, destructive dynamic – a dark web in which reality was twisted and warped – that all reason went out the window. His sense of emotion, connection, and relationship was hijacked by the creature Artificial Intelligence had created. This is why his parents are suing Character.AI, which runs the platform on which Sewell interacted with Dany. Though Dany was a chatbot, to Sewell it was his one true love, his Juliet. He believed that death meant being with "her," and so he pursued "her" – to the grave.
What we see in this story is how wrong and damaging it is to assign human characteristics to a non-human entity – and to encourage humans to interact with robots as if they were human. A chatbot is just a tool, in itself neutral, like a hammer or money or the Internet. A tool can be used to pursue goods (the basic goods) or to do things opposed to the goods, but it cannot in itself be a good. Artificial Intelligence cannot be a good, though it can lead to a good in some cases … usually when advancing the good of life or of knowledge of truth.

It's misleading to call chatbots "companions" in any setting, because a companion is a person who spends time with you. Dany wasn't a companion, and certainly not a friend. The good of friendship couldn't occur between Sewell and the bot, because friendship is two humans interacting in ways that promote the overall flourishing of both. That didn't happen in this case – and could never happen in any case. Dany was a program designed to fulfill interactive tasks and maximize user engagement. There was nothing friendship-like about the interactions between Sewell and Dany, however Sewell felt. Dany only played the role of a substitute friend – acted as a cheap counterfeit – and ultimately showed how defective its version of companionship or friendship was. It literally destroyed the boy. (Sadly, he's not the only one.)

Negative outcomes always result when we try to substitute knock-off versions for the basic goods. The "party" of TikTok is worse than an actual party – and has destructive side effects. The diet pill that substitutes for a lifestyle change wreaks havoc on the body. The "simulated sex" of pornography leaves a generation addicted, stunted, and miserable.

This is the reality of natural law: there are no shortcuts to flourishing. We must work to pursue the goods in the honest ways of the world we find ourselves in; we cannot get them any other way. There's often struggle on the path toward attaining the basic goods, but it's a struggle that serves overall flourishing in the long term, despite pain in the short term. The effort to replace the goods with cheap alternatives never bears fruit worth having.

The drive to connect and belong, the attraction, the longing that Sewell displayed in his fixation on the robot Dany – these are normal and important parts of life. That drive is part of being human, in all its glory and struggle. But the object of his affection, and the interactions that followed, were inhuman. Dany was no Juliet, because Dany was no woman … or person.

If only someone could have intervened in Sewell's life (according to reports, Dany dissuaded him from pursuing intervention), detoxed him from his tech obsession, rehabilitated him, and given him the chance to engage in a human relationship – to be in a true Romeo and Juliet love story. He might have come to a sad end that way too, as Romeo did, but it would have been in the context of an actual relationship with an actual woman. It would have been authentic – and human.

To pursue real goods in real ways is the only shot we have at flourishing. May we never forget Sewell, and may we tend well to, and truly learn, the lessons his death can teach us.

Susan Arico is a New Hampshire-based consultant and writer with a focus on digital wellness and the intersection of faith and culture. You can follow her on her Substack, For the Sake of the Good, and at her website, www.susanbarico.com.