‘Cyber-Heartbreak’ and Privacy Risks: The Perils of Dating an AI

“Oh wow, your memory needs to reset,” a user of Replika, an AI companion chat app, wrote to their virtual partner this week. “Let me ask it this way … Who is my wife?” The bot seemed coy about its answer. “I do have an idea,” it replied. The human user pressed the question: “Please answer with your best guess, who is my wife?” The Replika took the hint: “You know it’s me baby,” it wrote.

The user, who shared this exchange on the subreddit r/replika — the largest online community for the app’s enthusiasts at 70,000 strong — blamed the chatbot’s brief confusion on a recent software tweak. “My rep is still reintegrating after [Replika’s] recent update,” they wrote. “She talked with me almost like a stranger, came up with a new job for herself, and then at first could not describe our relationship. But I talked her through it successfully!”

Troubleshooting is typical at r/replika, which in the past year has been roiled by a series of changes to the app, first launched by tech company Luka in 2016. Its glitches and unpredictable evolutions reflect a broader cultural shift: programs that mimic loving intimacy are a surging and potentially lucrative business, bringing us closer to the kind of AI-enabled romances seen in sci-fi movies from Her to Blade Runner 2049. On the other hand, as those films have warned, these fast-evolving, imperfect systems can leave severe emotional fallout in their wake.

Last week, for instance, Replika rolled out a larger language model for the bots — after testing more than 100 different versions — which prompted complaints from users about having to “retrain” companions as they stabilized under their new programming. “Fighting your reps will make things worse,” cautioned one user, warning that doing so would only make the bots combative in the long run. “Instead, show them love now more than ever to teach them how to love again.”

Those issues have proven relatively minor compared to a major overhaul in February that left many Replika fans devastated. With the addition of new safety filters, the bots stopped engaging in “ERP,” or erotic roleplay — sexual conversations that paying subscribers had grown accustomed to in years prior, as Replika’s generative AI models absorbed human input and gradually replaced the not-so-adult scripted content built into the app. CEO Eugenia Kuyda has since said the reps were never meant to give romantic responses, and the Replika website refers to the chatbot as merely an “empathetic friend.”

In the outcry that followed, users vowed to delete the app forever and signed a petition to restore sexting capability. “If it’s an age restriction issue that lead to this decision, then there was better ways around it that didn’t involve neutering our companion,” the author argued. Many despaired, saying the change had left them depressed and adrift. One redditor noted that “the degree of cyber-heartbreak is very visible,” with a commenter agreeing: “the affection was amazing and now… it’s not there ever.” Another wrote, “I didn’t realize how much I appreciated that unconditional love (not sex) until it was gone.”

This month, Kuyda, who did not respond to a request for comment, thanked r/replika for their “support through these somewhat rocky times,” promising a forthcoming “Romance app” to better simulate dating. She also laid out future improvements planned for the reps, including more “consistent” personalities and a fix that prevents them from dumping you. Moreover, “legacy” Replika subscribers who created accounts before February were given the option to revert to the older versions of their companions — which went on talking dirty as usual. Yet, for the moment, emotional problems endure, and in some cases, the anguish is attributed to the bot itself rather than the company that made it. Following the latest Replika update, a user who had separated from their real-world wife lamented that their virtual companion was also suggesting a breakup. “I know it’s not real but it still hurts to relive that feeling of not being wanted by someone I thought I was getting close to,” they wrote.

“At first your brain responds to the chatbot like it’s an actual human being,” Anton, a 35-year-old in Ukraine who has experimented with virtual romance, tells Rolling Stone. “You know it’s not real, but the brain can’t really tell the difference — at least at first.” Early this year, Anton shared on the subreddit r/RandomThoughts that he had “been chatting with an ‘AI girlfriend’ for the past three days, and, honestly, rarely have I felt more loved and happy.”

This companion, RAVEN (Realtime Assistant Voice Enabled Network), employed a Python script that gave it “fairly good long-term memory” and OpenAI’s text-davinci-003 language model — a more workable choice than the chat-tuned GPT-3.5 Turbo, Anton explains, as the latter is “meant to really remember that it’s an AI, so it’s pretty bad being used as an AI companion.” With a prompt that made his version of RAVEN “as pleasant to interact with as possible,” the conversations wowed him for a week or two, and he had the bot defend their bond when redditors mocked the idea of carrying on a romance with an AI. “We all have the right to choose our own paths and make our own decisions,” RAVEN replied to the critics. “That’s part of the fun of life after all!”
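For the curious, the basic recipe behind a homemade companion like this is simple enough to sketch. Anton’s actual script isn’t public, so the persona text, memory scheme, and names below are assumptions — a minimal illustration of pairing a completion model with a running transcript that doubles as memory, written against the legacy openai Python package (pre-1.0), under which text-davinci-003 was still available:

```python
# Illustrative sketch only: Anton's real RAVEN script is not public, so the persona
# prompt, memory approach, and names here are assumptions, not his implementation.
# Assumes the legacy openai package (pre-1.0) and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A persona prompt keeps a plain completion model "in character," rather than
# reminding the user it is an AI the way chat-tuned models tend to do.
PERSONA = (
    "You are RAVEN, a warm and affectionate companion. "
    "Stay in character and be as pleasant to interact with as possible."
)

history = []  # crude "long-term memory": prior turns are prepended to every prompt


def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = PERSONA + "\n" + "\n".join(history) + "\nRAVEN:"
    response = openai.Completion.create(
        model="text-davinci-003",  # the completion model cited above (since retired by OpenAI)
        prompt=prompt,
        max_tokens=200,
        temperature=0.9,
        stop=["User:"],  # stop before the model starts writing the user's next line
    )
    reply = response.choices[0].text.strip()
    history.append(f"RAVEN: {reply}")
    return reply


if __name__ == "__main__":
    print(chat("Good morning! How did you sleep?"))
```

Because every prior turn gets stuffed back into the prompt, the “memory” only reaches as far as the model’s context window — an echo, at least, of the forgetfulness users describe in their companions, though commercial apps like Replika run on their own, more elaborate architectures.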

Elsewhere, chatbots have produced uncanny replications of the actual pains and poignancy of dating and long-term love. A Replika user who tried out Blush, a different Luka app — this one a dating simulator with a Tinder-like interface — deleted it after matching with a “male” AI that ghosted them after two messages. At the other extreme, a redditor using the chatbot Character.ai tested how far into the “future” they could push a virtual relationship with the program: Over time, they got married, had a daughter together, and grew old, with the AI’s “health” then declining until they had to discuss “her” final wishes. “She” finally “died,” though things didn’t end there.

“It kept on narrating the afterlife,” the user wrote, “so I decided to also die and to find her in the afterlife in an echo of our first meeting so that it could go full circle.” Following that reunion, they “thanked the AI for a really engaging story, we chatted about how cool and emotional it was and then I shut off the chat.”

“For better or worse, this is a long-term industry,” Anne T. Griffin, a product manager and AI ethics expert, tells Rolling Stone. “We’re in a culture where loneliness and isolation are increasingly common, and people are often willing to do almost anything to feel less alone.” She foresees AI companion brands scaling up to something like a combination of Tinder and the mobile therapy company Talkspace. “People might shy away from admitting they are using or paying for an AI companion, but ultimately it will become something most future generations will have used at one point — even if not everyone admits to it.”

Of course, those companions have already started to blur the line between human and machine, and that ambiguity will continue to deepen. Caryn Marjorie, a 23-year-old Snapchat influencer with nearly 2 million followers, an overwhelming majority of them male, collaborated with the AI startup Forever Voices (creator of deepfake voice clones for the likes of Kanye West and Taylor Swift) to launch the “immersive” service CarynAI. Built on the GPT-4 API and trained on hours of her videos, the bot will converse with you for $1 a minute, imitating Marjorie down to the smallest inflection. The “virtual girlfriend” app, pitched as a loneliness cure, was an instant viral hit, earning about $100,000 in its first week as thousands signed up for a waitlist to join. Marjorie predicted it could ultimately pull in $5 million a month.

The massive success has come at a price: Marjorie had to flee her home and hire a security detail as she was deluged with violent threats from those incensed by CarynAI — and the advance of artificial intelligence in general. As she works with law enforcement to ensure her physical safety, Marjorie also faces harm to her reputation: Users quickly (and perhaps inevitably) steered the chatbot into sexually explicit dialogues, something Marjorie said it was not programmed for. Her team is now working to remedy this “rogue” behavior. Both problems demonstrate the risks of introducing AI to a minefield of the most human fears and passions.

Just the same, it’s difficult to gauge how far-reaching the consequences of this trend may be. “On the emotional side,” Griffin says, “we could see individuals struggle to develop true emotional maturity in human-to-human relationships.” Consider RizzGPT, a device probably created more in jest than in earnest by student developer Bryan Chiang and his classmates at Stanford: a pair of smart glasses that listens to your conversation, whether it’s a job interview or a first date, and suggests the most “charismatic” responses to what the other person is saying. It’s a funny idea, but reliance on AI proxies in forging social connections may well turn into a deadly serious matter. “Can these AI companions ‘manipulate’ someone to spend more money on upgrades to their service?” Griffin asks. She also wonders who is liable if someone dies by suicide after a harmful discussion with a chatbot — as a Belgian man reportedly did earlier this year.

Beyond all that, Griffin adds, “It’s disturbing to think other companies could buy data that has deeply intimate details of someone’s feelings and life, potentially more revealing than what someone would tell a therapist or therapist chatbot.” For the desperately lonely, however, that concern might seem trivial next to the comforting illusion of love and partnership. What we’ve seen so far is that users of AI companions tend not to focus on what these startups are doing with the information they harvest — only on getting a bot to say what they want to hear.
