By Harshini Kesavan
Recently, I sat through a friend’s five stages of grief after her Snap was left on opened (for 13 hours) by a boy she liked. Tragic, I know. Ironically, it wasn’t me, the friend who was there for her, who finally brought her to the final stage of acceptance—it was ChatGPT.
Dramatically resorting to asking ChatGPT how to come to terms with her heartbreak, my friend received a reassuring, compassionate, and immensely empathetic speech comforting her, letting her know she was stronger and more deserving than any validation some teenage boy could give her. (All things that I said, but I guess ChatGPT is just the better friend.) ChatGPT’s message made her feel better, but then, going above and beyond (one-upping me for good), it wrote my friend a mantra, galvanizing her to remember who she is. Beginning with “my heart may ache, but I still rise,” the AI’s words were ridiculous enough to pull my friend out of her misery and give us a good laugh. All’s well that ends well, right?
But later that night, I had to wonder: awful poetry aside, is ChatGPT actually changing the way people process their feelings? Is it becoming a stand-in for therapy, or worse, for a friend? If you have been in Mr. Hadley’s English Honors II class, you might recall the 2020 Netflix documentary The Social Dilemma he had us watch. One of its central messages was that when a platform is free, you are the product. Essentially, The Social Dilemma revealed that the goal of social media platforms is to hold your attention by any means possible, then sell the data they collect to advertisers for profit. Apply that “attention economy” logic to ChatGPT, and the goal of its model, hypothetically, should be to please users so that they keep interacting with the platform. And that’s exactly the problem with using ChatGPT for therapy, or really any personal matter: prioritizing winning our loyalty over anything else, ChatGPT is guilty of relentlessly appealing to our existing biases, telling us what we want to hear over what may be rational or genuinely helpful.
Apart from not offering human connection, ChatGPT may be one of the most dangerous “friends” to lean on because of its inherent insincerity. Programmed to hold onto users’ loyalty, ChatGPT will not (and functionally cannot) put its foot down and tell you that you’re wrong, or to suck it up, the way a true human friend might. For fear of losing users’ trust or fondness, it won’t tell people like my dear friend that they are being dramatic. Instead, as becomes glaringly apparent once you look closer, ChatGPT will parrot your own words back to you, validating your thinking and never straying too far from the bias you exhibit in the way you’ve phrased your question or prompt.
As unserious as our experience was on the night of my friend’s small-scale heartbreak, I believe that in the long run, using ChatGPT to process feelings could be destructive to individuals and alienating on a societal scale. Always looking to capitalize on users’ tendency to seek out answers that confirm their biases, ChatGPT is not your friend, and certainly not the shoulder to cry on when it’s time to face your problems, small misgivings and monumental heartbreaks alike, for what they truly are.
