Are we reaching "peak video" for trust? The OSINT/social engineering nightmare of 2026

I've been digging into OSINT work for the past few weeks, and honestly, it's starting to feel pretty unsettling. The old ways we used to verify things visually just aren't working like they used to.

Remember a year ago? You could usually spot a deepfake by looking for lighting that didn't match up, ears that looked a bit off, or that telltale unnatural blinking. But the newest AI models have pretty much fixed those obvious giveaways. Now we're seeing fake videos and audio that can actually pass the real-time checks in regular business video apps.

From a security and threat intelligence standpoint, there are three big changes that really concern me.

First, there's what's known as the "Liar's Dividend." This is the strangest part. It's not just that fakes are getting better. It's that real, damaging footage is now being written off as "just AI" by the people it exposes. We're ending up in this weird place where nothing feels solid or trustworthy anymore.

Second, there's contextual impersonation. Attackers aren't just copying someone's face anymore. They're gathering private photos and videos from social media to train AI models on specific habits and even office backgrounds. Imagine getting a Teams call from your "CEO" who appears to be sitting in their actual home office. Real or reconstructed, that kind of contextual detail makes business email compromise scams way more likely to succeed.

Third, there's OSINT pollution. Trying to verify a photo through reverse image search is becoming a real challenge. When so much of what you see online is high-quality AI-generated content, trying to figure out if a photo is real by looking at buildings or landmarks takes forever and often leads you down the wrong path.
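One technique that still works locally: if you already hold a trusted reference photo, you can check whether a suspect image is a near-duplicate of it without touching a polluted search index at all. Here's a minimal sketch in Python, assuming the third-party Pillow and ImageHash packages; the file paths and the distance threshold are illustrative, not tuned values.

```python
# Minimal sketch: compare a suspect image against a trusted reference using
# perceptual hashing. This does NOT detect AI generation; it only tells you
# whether two images are visually near-identical (robust to resizing,
# re-encoding, and minor crops).
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

def is_near_duplicate(suspect_path: str, reference_path: str,
                      max_distance: int = 8) -> bool:
    """Small Hamming distance between pHashes => likely the same source image."""
    suspect = imagehash.phash(Image.open(suspect_path))
    reference = imagehash.phash(Image.open(reference_path))
    return (suspect - reference) <= max_distance  # '-' gives Hamming distance

# Hypothetical usage:
# if is_near_duplicate("incoming_tip.jpg", "verified_original.jpg"):
#     print("Matches a reference we've already verified.")
```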

I've been exploring ways to at least slow down the automated side of this problem: the scrapers that collect all the data to train these models in the first place. I've been testing a tool called AI Blocker recently. It's one of the few things I've found that actually tries to identify and block traffic from these specialized scraping bots before they can gather a website's images and videos for training.
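I don't know how AI Blocker works internally, but the basic idea, turning away requests from known AI-training crawlers, is simple enough to sketch. Below is a minimal Python WSGI middleware that rejects requests by User-Agent. The bot list is illustrative and incomplete, and a determined scraper can spoof its User-Agent, so treat this as a first-pass filter rather than real protection.

```python
# Minimal sketch (not AI Blocker's actual logic): return 403 for requests
# whose User-Agent matches a known AI-training crawler. Illustrative list;
# real deployments also lean on robots.txt rules and published IP ranges.
KNOWN_AI_CRAWLERS = (
    "GPTBot",         # OpenAI
    "CCBot",          # Common Crawl, widely used for training corpora
    "ClaudeBot",      # Anthropic
    "Bytespider",     # ByteDance
    "PerplexityBot",  # Perplexity
)

class BlockAICrawlers:
    """WSGI middleware: deny matching user agents before they reach the app."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(bot.lower() in ua for bot in KNOWN_AI_CRAWLERS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Automated scraping for AI training is not permitted.\n"]
        return self.app(environ, start_response)

# Hypothetical usage with any WSGI app (Flask, Django, etc.):
# app.wsgi_app = BlockAICrawlers(app.wsgi_app)
```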

But here's the thing: that only protects what we control ourselves.
