Last May, actress Scarlett Johansson released a blistering statement about the release of a new voice for OpenAI's ChatGPT 4.0 bot, called "Sky." To many — including her "closest friends," Johansson said — the voice was indistinguishable from her own. That would have been unsettling enough, but it so happened that the tech giant had approached her months before about voicing "Sky" herself, hoping to leverage her celebrity in part to ease the friction between Hollywood and the AI industry. She had declined, she explained in her statement.
So, Johansson said, she was “shocked, angered, and in disbelief” upon hearing the demo, which was released even as OpenAI CEO Sam Altman was asking her to reconsider an official partnership. Altman even appeared to encourage the identification of Johansson as Sky, posting the word “her” on X (formerly Twitter) ahead of the launch, a seeming reference to the 2013 sci-fi romance Her, in which Johansson voices an intelligent chat system. Armed with legal counsel, Johansson explained, she pressed the company to reveal the exact steps it took to create Sky; in response, she said, they “reluctantly” agreed to take the feature down.
Welcome to a new normal for prominent public figures: not only do you have to worry about random scammers posing as you with help from AI-generated voices or images (Brad Pitt just expressed his dismay over the story of a French woman losing more than $850,000 to an impostor who set up an Instagram account featuring AI-generated depictions of the actor in the hospital), but a multibillion-dollar Silicon Valley firm might try to "borrow" your valuable likeness to further its corporate agenda. With their bots already training on private data and copyrighted material, why would they stop at using famous faces?
It’s in this context that a celeb might be relieved to hear about Loti, a “likeness protection technology” company that raised a $7 million seed-funding round last fall as it continues to grow. And while co-founder and CEO Luke Arrigoni can’t name specific clients — who are represented by heavies including William Morris Endeavor, Creative Artists Agency, and United Talent Agency — he’s comfortable saying that he has consulted on a lot of high-profile incidents where an actor or musician suspects their unique attributes have been improperly duplicated in one form or another.
If there appears to be a case of unlicensed imagery, video, or audio modeled on a living figure, Loti is likely to hear from that person's management team. "We get called, almost always, in those situations where you hear about a celebrity getting either deepfaked or used, specifically, in a large AI product," Arrigoni tells Rolling Stone. The work on Loti's end may involve a technical analysis of the content in order to help lawyers understand the probability that a deepfake was based on their client. "For instance, what you might tell someone is, this voice has this percentage confidence match," Arrigoni explains. "If you can sing, like, an a cappella 10-or-15-second segment, I can find when people AI-generate that voice into a different song." Along with such evidence of potential misappropriation, he can prep legal teams for the arguments they are likely to hear from counsel for AI firms defending their products and content. But he is quick to point out that it's his job to supply them with relevant information, "not to tell them what to do."
Arrigoni's relevant background isn't in the courtroom but in data science (a self-described "math guy," he despises the trendier buzzwords of the AI scene), and he previously spent a decade working on data questions for big brands. Loti came about first as a sort of nonprofit operation in 2022, when he and his wife, co-founder Rebekah Arrigoni, were watching the second season of HBO's Euphoria, which involved a revenge-porn plotline. At the time, he was working with a client whose technology could perform facial recognition matches with only part of a person's face. The couple had the idea to use such tech to flag instances when someone's likeness appears online without their permission, then send copyright notices to have them removed. "We were like, 'Well, why don't we take this tech and then point it toward the internet, we can find where people don't belong, and we'll build some script to issue takedowns,'" Arrigoni recalls. "It was something we were both passionate about."
As Arrigoni's tech team built out this AI tool to crawl the web and Rebekah began to develop a consultancy side for it, the 2023 Hollywood strikes began, with AI a flashpoint in the labor dispute between actors and writers and the studios' management. "I called friends in entertainment and said, 'Hey, would you like this thing that I built to help in this one-off, specific situation, to find and defend against AI impersonations?'" Arrigoni says. They were indeed interested, and Loti "massively expanded the scope of what we scanned," he says. "It wasn't just a few problem areas of the internet, it was the whole internet, and we added voice to it as well. We added deepfake detection."
What the Arrigonis now run is something of a leader in a wild new frontier of digital security and privacy, one that advertises its ability to algorithmically comb through “100 million images and videos per day looking for misuse of your content or likeness,” matching content to a client’s face and voice and instantly firing off copyright takedown notices for any violations. Loti boasts that its automated takedown engine has a 95 percent success rate in 17 hours, eliminating most offending material within a calendar day — meaning that unlike with the Johansson-soundalike OpenAI “Sky” voice, this stuff is typically gone before the wider public is ever exposed to it. Arrigoni says they have effectively automated a kind of work that until recently was undertaken through a far less efficient and patchwork process: fanbases would notice an unauthorized usage of their favorite celebrity’s image and send it to that individual’s management, with attorneys and agents then negotiating with platforms to remove the infringing content.
That simply doesn’t cut it in today’s accelerated environment, Arrigoni says. “You can hire your law firm to do 15 takedowns today, and 14 new things will get uploaded,” he explains. Loti, by contrast, is equipped to keep up with the deluge. “I can issue a thousand takedowns today on all 985 things that were uploaded yesterday, and a few things are uploaded this morning,” he says. Ideally, the fans never even have the chance to encounter the unlicensed content. Meanwhile, famous clients can request specific filters to catch the material most harmful to their reputation, such as explicit deepfake videos.
Of course, there is a little awkwardness in seeking to convince entertainers that AI is the solution to the AI woes that have dogged the creative landscape in recent years. Arrigoni has to convince them that Loti is simply fighting fire with fire. "It's so cringy to be like, 'I'm gonna have an AI company,'" he jokes, adding that he is often "apologizing" for the tech itself. "A lot of those AI companies, they tend to have a greater sense of self than what they are — making the world a better place, that kind of thing. We're not trying to pretend to be any of that."
Arrigoni tells potential clients, “It is technically AI, what we do, but really what we’re trying to prevent is anyone using AI to destroy your income, your creativity, your brand, your reputation.” In the end, he says, many are impressed by how much Loti can catch in its net, though it’s not just themselves they want to protect. “One of the biggest draws, actually, to our product,” Arrigoni says, “is they don’t necessarily benefit by paying us to do this, but they are tired of their fans being scammed.” (Again, condolences to the woman who thought she was paying for Brad Pitt’s kidney treatment.)
Beyond big-name actors, Loti works with high-ranking military officials, politicians, CEOs, posthumous estates, and a number of Nashville musicians, meaning that new challenges and legal precedents will crop up in a broad range of fields. What if DreamWorks partners with an AI startup and creates an animated film character that uncannily resembles a celebrity unaffiliated with the project? Will a few of the ordinary Florida citizens who reportedly inspired scenes in the upcoming Grand Theft Auto VI band together to sue Rockstar Games? And are states moving fast enough to enact legislation such as Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act, signed into law last year in a groundbreaking effort to keep artists' voices from being improperly cloned and deepfaked?
Whatever may come, Arrigoni wants Loti to continue to ensure that pop-up "fakes" are no longer "economically viable" as an internet hustle: "Make it difficult, because it's so easy right now — and that's the part we want to change," he says. With the AI arms race proceeding at full tilt and other competitors bound to enter the game, he can hardly expect a dull moment ahead. But at least one question is pretty well settled: your chatbot assistant won't sound like Scarlett Johansson.