For the past year, we have grown accustomed to seeing “Made with AI” tags under digital art and hyper-realistic portraits. However, as synthetic media becomes indistinguishable from reality, the head of Instagram, Adam Mosseri, suggests we might be looking at the problem backward. Instead of just hunting down fakes, he argues, the future of social media might involve labeling the content that is actually human.

Why Instagram might start labeling human content instead of AI fakes

Mosseri recently shared a candid perspective on the challenges of the AI era. He noted that while platforms are currently working hard to identify and label AI-generated content, this strategy has an expiration date. As AI tools improve, they will eventually get so good at imitating reality that even the most advanced detection algorithms will struggle to keep up.

His proposed solution? A shift toward “fingerprinting” real media. The current scenario is a never-ending game of cat and mouse against powerful bots, but Mosseri believes it would be more practical to verify authenticity at the source. The approach would involve a “chain of custody” in which camera manufacturers cryptographically sign images the moment they are captured. In this scenario, your phone or DSLR would essentially issue a “digital birth certificate” for every photo you take, and that metadata would prove the image originated from a lens, not a prompt.
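To make the idea concrete, here is a minimal sketch of that sign-at-capture, verify-later flow. It is purely illustrative: real provenance schemes (such as the C2PA standard) use asymmetric signatures and certificate chains so that platforms can verify without holding the camera's secret; the HMAC below is a stand-in so the example runs with only the Python standard library, and all names (`sign_capture`, `verify_capture`, `DEVICE_KEY`) are hypothetical.

```python
import hashlib
import hmac

# Hypothetical device secret. In a real scheme this would be a private key
# embedded in the camera hardware, with only the public key published.
DEVICE_KEY = b"camera-secret-key"

def sign_capture(image_bytes: bytes) -> dict:
    """Issue a 'digital birth certificate' at the moment of capture."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_capture(image_bytes: bytes, cert: dict) -> bool:
    """Platform-side check: does the image still match its certificate?"""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == cert["sha256"] and hmac.compare_digest(expected, cert["signature"])

photo = b"raw sensor data"
cert = sign_capture(photo)
print(verify_capture(photo, cert))            # the untouched image verifies
print(verify_capture(photo + b"edit", cert))  # any alteration breaks the chain
```

The key design point the sketch captures is that verification fails on any byte-level change, which is why a signature attached at capture can anchor a chain of custody: each later edit would need its own signed record, or the trail goes cold.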

A plan to save “real” media from the flood of AI slop

The potential shift reflects a broader change in how we perceive digital credibility. If the internet becomes flooded with “synthetic everything,” the value of human-generated content increases. However, so does the difficulty of proving its origin. Mosseri suggests that the era of the “polished” and perfect Instagram feed is essentially over because AI can now replicate that aesthetic with ease.

Interestingly, he hints that the new currency of trust might be “rawness.” In a world of flawless AI filters, showing up in unflattering or unpolished ways could become a primary signal of being human. If a photo looks a bit too messy or spontaneous for an algorithm to dream up, it gains a level of credibility that a polished studio shot might lack.

Currently, the technical details of universal cryptographic signing remain a work in progress. However, we may soon live in a digital landscape where being “real” isn’t assumed—it has to be verified. As we navigate 2026, the question won’t just be “Is this AI?” but rather, “Can you prove this is human?”