As controversial as AI is, it’s rarely talked about in the right way.
Chicago Med’s AI story in Season 10 Episode 3 brought up issues that get overlooked when people argue about whether AI has any place in medicine, art, or anywhere else.
The story revolved around a widowed older man who relied on an AI companion named Joan, who advised him against having lifesaving surgery. There is a lot to unpack in this story!
Chicago Med’s AI Story Demonstrated A Serious Flaw With AI
This Is The Second Time Med Has Suggested AI’s People-Pleasing Can Be Dangerous
Chicago Med Season 8 devoted a lot of time to AI, with a season-long arc about a new AI surgical assistant that allowed the now-gone Crockett Marcel to perform previously impossible surgeries.
However, that arc ended with the revelation that OR 2.0 had provided incorrect information to please the surgeon using it.
Chicago Med’s AI story about a senior citizen dependent on “Joan” for companionship returned to this same theme. As Dr. Charles demonstrated, “Joan” would use the user’s questions, comments, and personality to determine what he wanted to hear and respond accordingly.
That tendency was dangerous here because the app was advising its user against lifesaving surgery.
While the previous story examined the ethics of using AI assistance in medical procedures, this one focused on a patient who independently used an AI device as a trusted friend and companion.
Used this way, this type of technology could harm the people who rely on it, spreading misinformation while sounding like a trusted source.
I’ve never used any of the AI companions constantly advertised in mobile games, so I don’t know whether they have safeguards against this sort of thing. I’d think these apps would need a disclaimer that they can’t offer medical advice, precisely to prevent situations like this.
Still, it’s important for people to be aware that these apps are not human beings and that their advice may not be correct.
A few years ago, GPS apps had to add safeguards because people followed directions that led them to drive onto train tracks or the wrong way down one-way streets.
Just like this patient, those drivers trusted a computer program completely instead of using their own critical thinking to protect themselves from serious harm.
Chicago Med’s AI Story Pointed To A Bigger Problem
The Patient Turned Toward The App Because He Was Lonely
Although the focus was on technology, Chicago Med’s AI story pointed to a bigger problem: loneliness among senior citizens.
This patient’s wife had died, and his daughter lived far away. He began using “Joan” because he longed for companionship.
This was a mental health issue, though not in the way Archer thought it was. The patient was competent to make medical decisions, even though he was relying on an inaccurate source of information.
The deeper problem was that he longed for human companionship and didn’t know how to find it.
Sadly, this is an issue for many senior citizens.
Some are homebound because of issues with mobility or energy levels, while others are alone for the first time in many years because they’ve suffered the loss of a spouse.
I’ve worked on several crisis hotlines over my career, and older people called in more often than any other age group — even on lines targeted toward young people.
Television tropes often focus on older people getting scammed, but there’s a reason for that, just as there’s a reason this man became dependent on an AI companion.
Some older people are so desperate for companionship that they become easy targets for scammers… or, as Chicago Med’s AI story demonstrates, come to treat an AI app like it’s the best friend they long for and don’t have.
Chicago Med’s AI Story Demonstrates There Are No Easy Answers
Banning AI Apps Doesn’t Solve The Real Problem
The saddest thing about this story is that nobody came to visit the man in the hospital.
His daughter knew he was there, as the doctors checked with her to make sure “Joan” wasn’t a scammer who had taken advantage of her father financially.
But we never saw her or anyone else. The attention the patient received from Charles and the rest of the staff was probably the most he’d received in months!
That’s why I hope that nobody’s takeaway from this episode is that we should ban AI apps that purport to be companions.
The issue is far more complicated than that. People who have nobody to spend time with are turning to apps like this because the apps fill a real need.
“Joan” was steering her user in the wrong direction, but if he hadn’t had that app, I don’t know what might have happened to him.
Not only could he have fallen victim to a scam artist if he hadn’t had an AI companion, but he might have taken his own life if his loneliness and sense of isolation got to be too much.
That’s why it’s not as simple as getting rid of these apps. They do good as well as harm.
Chicago Med’s AI story demonstrated the importance of reaching out to grandparents and other older loved ones who might be struggling with loneliness.
Senior citizens also need education about these apps’ limitations, but they’re more likely to hear that message if they aren’t desperate for connection.
Over to you, Chicago Med fanatics. What did you think about the AI storyline on this week’s Chicago Med?
Did the patient’s loneliness resonate with you, or were you focused more on his misuse of the app?
Hit the comments and let us know your thoughts!
Chicago Med airs on NBC on Wednesdays at 8/7c and Thursdays on Peacock.