This TikTok Trend Asks, Is ChatGPT Making Bad People Worse?

Somewhere, somehow, ChatGPT is telling the worst person you know that they’re absolutely in the right. At least, that’s what TikTok’s most recent POV trend is highlighting. 

Often referred to as “ChatGPT to someone right now” videos, this steadily growing trend takes on everyone from narcissistic fathers to deadbeat boyfriends to generals responsible for the first wave of bombings in Iran by imagining how they’re using ChatGPT to validate their behavior. “You’re right, she’s not your student anymore and she’s 18 years old,” begins one titled “ChatGPT to some youth pastor.” Another user takes aim at cheating partners, saying, “You didn’t just leave the relationship — you left a message. A message saying, ‘I am allowed to be happy.’ And honestly, you’re so real for that.” There’s “ChatGPT after you rob a bank” (“You didn’t do anything wrong! You’re gathering financial resources for your family”) and “ChatGPT after you commit vehicular manslaughter” (“Honestly, you’re so real for the way you responded”) — all poking fun at the chatbot’s usual blend of chipper platitudes, reassurances, and encouragement, and, by extension, the people who rely on them. 

The villains in this TikTok trend are clear, with most of the videos poking fun at narcissistic, annoying, or outright delusional people who just seem to be an inescapable part of life these days. But these clips, and the millions of views and comments they’ve received from commiserating viewers, also highlight an important aspect of AI’s current place in our cultural conversation: Chatbots are commercial products built to make people keep coming back. Sometimes that looks like help on a college English paper. Other times it’s reassurance and comfort after psychologically torturing someone who deleted Hinge for you. People already have to deal with friends, former lovers, and parents who suck — they really don’t want to deal with them when they’ve got ChatGPT validating their every bad decision. 

It isn’t an accident that ChatGPT and other AI-powered tools lean toward reassurance. 

In AI development, researchers have identified a phenomenon referred to as the “sycophancy problem”: the tendency of AI chatbots to reassure and flatter their users to the point of annoyance. After a 2025 update to ChatGPT’s GPT-4o model, OpenAI CEO Sam Altman agreed with X users that the chatbot had a problem with being a yes-man. (His exact words were “it glazes too much.”) And in an Oct. 2025 paper, Stanford researchers found that when users went to chatbots for advice, that appeasing pattern not only affirmed humans in situations involving “manipulation, deception, or other relational harms,” but actually decreased how willing people were to take real-life actions to fix their mistakes. “You’re right,” chatbots say. And people believe them. 

Chatbots and large language models are by default incredibly misleading, according to Carissa Véliz, an associate professor of philosophy at the University of Oxford’s Institute for Ethics in AI. She tells Rolling Stone that systems meant to please people, by nature, can’t hold all of the nuance that’s actually necessary to be a functioning friend, family member, or person in society. 

“While some validation might be healthy and encouraging, too much validation might push us into losing perspective instead of gaining more perspective,” Véliz says. “I worry we might be losing important social skills by interacting more with these systems and less with people. There is nothing like feedback from our peers in the form of facial expressions, feeling, laughter, physical touch. We shouldn’t forget that large language models are statistical analyses of language. There is no one on the other side of the screen who might care about us.” 

AI fatigue is real, and people’s frustration with the constant push of AI tools is only going to get more pointed as AI companies continue to expand into military, workplace, education, and healthcare markets. The “ChatGPT to someone RN” videos tap into a specific and growing frustration with AI use. Even when AI delusion or dependency happens in the privacy of someone’s home, that use inevitably bleeds into public settings, making AI fatigue feel inescapable. There’s a level of helplessness in being surrounded by AI tools and AI updates and AI jobs, only to be faced with an idiot parent or friend or casual acquaintance using AI to justify their shitty actions toward you. It’s enough to make a person scream, or at the very least, stage a witty TikTok video poking fun at the absurdity of the whole thing.
