AI Experts Reveal How to Spot Generated Content


For tens of millions of people on TikTok, the realization that AI might be getting better didn’t come from a press release or an article. It came from a video of a dozen bunnies jumping on a trampoline.

The clip, first posted by a then-unknown TikTok account, @rachelthecatlovers, showed a herd of bunnies descending on a suburban trampoline late at night, captioned “Just checked the home security cam and… I think we’ve got guest performers out back!” Surveillance-style videos already make up a major lane of content on the app, but what struck the 230 million people who viewed it was that the video was convincing… almost. So many people were shocked that this AI video had fooled them, even for a moment, that it set off a wave of alarm across the app.

The internet has come a long way since the most widely recognized test for AI was producing a convincing video of Will Smith eating a bowl of spaghetti, something the earliest models simply couldn’t do. Now, with AI-generated or AI-modified photos and videos flooding social media, knowing how to spot altered or manipulated images can be key to critical media consumption.

Rolling Stone spoke to several tech and AI experts about the best ways to spot AI content — at least for now. 

Think for literally more than one second

Manipulated or completely AI-generated images run the gamut from political misinformation to almost undetectable video edits. But according to Princeton University computer science professor Zhuang Liu, one of the easiest ways to detect AI images is simply to ask whether what you’re seeing is actually possible.

“If it’s not plausible in the real world, then it’s obviously AI generated,” he tells Rolling Stone. “For example, a horse on the moon or a chair made of avocado. So these are obviously AI generated. That’s the easiest case.” 

The next step is to check the source where you found the image. This doesn’t always work for viral content, which often comes from previously unknown accounts, but seeing a video on a meme page can be a clue that it’s not real. Checking your sources, including searching for the video on legitimate sites or running a reverse image search, can help if you’re trying to verify a photo, especially one of a political nature.
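Reverse image search engines typically compare compact perceptual fingerprints of images rather than raw pixels. As a rough illustration of the idea (not any particular engine’s method), the sketch below implements a toy “average hash”: it records which pixels are brighter than the image’s mean brightness, then compares two fingerprints by counting the bits where they differ. The 8x8 grayscale pixel grid is an assumption standing in for real image decoding, which would require an imaging library.

```python
# Toy "average hash" (aHash), the idea behind many near-duplicate
# image lookups: downscale to a tiny grayscale grid, then set one bit
# per pixel depending on whether it is brighter than the mean.
# Input is assumed to be a small grid of grayscale values 0-255.

def average_hash(pixels: list[list[int]]) -> int:
    """Pack a pixel grid into an integer fingerprint, one bit per pixel."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

In practice, a re-uploaded or lightly edited copy of a known photo produces a hash only a few bits away from the original, which is how search engines can trace a viral image back to its source even after cropping or recompression.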

Put your art-critic hat on

Accurately identifying AI slop can be easy. But when an image or video seems plausible, that’s when you really have to use other clues to spot AI. V.S. Subrahmanian, director of the Northwestern University Security and AI Lab, tells Rolling Stone that determining whether an image is AI-generated starts with breaking a photo down into components. While the end result might seem believable, there are often clues that objects in a photo aren’t obeying the rules of physics. Shadows can be a hint that a photo was made by AI, as can videos where the lighting seems physically impossible. Another big hint is to look closely at transitions in the photo, like where people’s bodies end and trees or background images begin.

“We’re looking for things that are hard for a deep fake to get quite right,” he says. “Say I’m looking at a person’s ear and there’s a cluttered background behind it. AI doesn’t always realize that an ear has a sharp boundary. It has a clear end. So when it generates fakes, there might be blurring there.” 

New York University computer science professor Saining Xie adds that this kind of critical scrutiny can be applied to videos as well. “Look for really odd details. Check for unnatural writing,” he says. “If there’s a mirror [or] water, sometimes there’s a distorted reflection, a mismatched shadow. Pause at different frames and look for glitches, distorted faces, and backgrounds.”

Think about manipulation, not just AI generation

While fully AI-generated content can be a problem, many people don’t consider that some images may be manipulated rather than created whole-cloth, which can make fakes look all the more real. One of the best examples is political messaging and misinformation, which often uses real video clips but replaces the audio, or keeps the verified audio while slightly changing what people are doing on-screen. These micro-adjustments can be harder to spot, which is why experts say you should look for videos from multiple angles and, most importantly, be skeptical.

“Maintain a critical-thinking mindset,” Liu says. “Verify whether the source is trustworthy and think, ‘What could be the intent of the entity who is sharing this? Is it to gain followers on social media, or is it to promote some products?’ Be clear of the intent.”

“We’re actively in a post-truth era. And we need to change our mindsets that seeing is believing,” Xie adds. “For the average internet user, the default should be skepticism.” 

Understand the bigger problem

As tech companies continue to invest billions in AI advancements, a future in which it becomes incredibly difficult to distinguish AI-generated images from real photographs and videos looks increasingly likely. On Aug. 27, Google released a major upgrade to its Gemini AI photo editor, which the company has advertised as having sophisticated rendering ability.

“Identifying [AI] is getting harder and harder,” Xie says. “If you asked me yesterday, I’d give you a different answer. But now, the Google model has advanced to a new level. So many of these viral inspection tools might not be valid anymore.”

This is where public perception ends and corporate responsibility begins. All of the experts who spoke to Rolling Stone say that the companies behind these massively successful models have a responsibility to develop watermarking techniques that explicitly state when images were made with their models. 

“This type of authentication has to be done from this kind of image editing, the generation-provider side as well,” Xie says. “Many image generation providers don’t have this service. But I think going forward, people will care more about responsibility and safety, and [companies] will add more safeguards. I’m quite optimistic about that.” 

Liu notes that while the average consumer has been worried about identifying AI images, many companies have developed AI models that can accurately identify when an image has been generated or manipulated — but many aren’t available to the public. 

Subrahmanian agrees that tech companies have a responsibility to identify and label their AI creations. But he notes that even with changes across the board, they wouldn’t apply to people who use their own or newly developed models. “I think the number of tech firms that are putting out algorithms to create deep fakes are actually starting to put in watermarks,” Subrahmanian says. “But [malignant] actors can pick that kind of stuff up, and that’s much harder to regulate.”
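A lightweight form of the labeling the experts describe already exists: some generators write their settings into an image’s metadata. Stable Diffusion web interfaces, for instance, are known to store the generation prompt in a PNG text chunk keyed “parameters.” The sketch below is a minimal, standard-library-only reader of PNG tEXt/iTXt chunk keywords; the list of suspect keys is an illustrative assumption rather than an authoritative catalog, and since metadata is trivially stripped on re-upload, its absence proves nothing.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

# Illustrative keys only: "parameters" is written by Stable Diffusion
# web UIs; the others are plausible guesses, not a definitive list.
SUSPECT_KEYS = {"parameters", "prompt", "ai_generated"}

def png_text_keys(data: bytes) -> list[str]:
    """Return the keyword of every tEXt/iTXt chunk in a PNG byte string."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    keys, pos = [], len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype in (b"tEXt", b"iTXt"):
            # The chunk keyword is everything before the first NUL byte.
            keys.append(body.split(b"\x00", 1)[0].decode("latin-1"))
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
        if ctype == b"IEND":
            break
    return keys

def looks_generated(data: bytes) -> bool:
    """True if any text-chunk keyword matches a known generator key."""
    return any(k.lower() in SUSPECT_KEYS for k in png_text_keys(data))
```

Robust provenance labeling, such as the C2PA “Content Credentials” standard some companies are adopting, goes further by cryptographically signing an edit history, but the basic idea is the same: the label travels with the file, not the pixels.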


There’s no good answer for how to keep the floodgates of the internet shut against the wave of AI images. There will be another plague of bunnies on trampolines that sends apps into a panic, or a video of a political figure that convinces people on the fringes of something completely implausible. But as these developments continue, the strongest defense the average person has against AI is their own critical thinking.

“At the end of the day, a lot of the stuff that you’re seeing has been created by strangers, and you need to treat it with the same skepticism that you would treat an overt request for money from some unknown person,” Subrahmanian says. “Common sense is a vastly underrated resource.” 

