A picture of smoke rising from a building near the Pentagon in Washington surfaced on social media in May 2023. The picture was convincing enough: the building in the background looked architecturally plausible, the colors seemed right, and the framing resembled news photography. It spread swiftly. Before users traced the image to its true source, a generative AI system, U.S. stock markets briefly dipped. There was no explosion, no damage, nothing genuine. The whole episode lasted a few hours. By the standards of what is coming, it was a minor incident. But it demonstrated a point that scholars and policymakers have been repeating ever since: synthetic media does not have to be sophisticated to have real-world consequences. It only has to be fast.
Three years later, it is hard to fully grasp how much has changed. For the first time in the history of the internet, most newly uploaded images are not captured with a camera; they are generated. A growing share of the visual content in social media feeds and search results, including memes, ads, product photos, virtual influencers, avatars, and brand campaigns, was created by an AI model rather than a photographer, graphic designer, or videographer. In roughly four years, what began as an experimental technology accessible only to researchers with expensive hardware has become the basic visual infrastructure of digital communication. The internet has not merely made synthetic media possible; it has started to depend on it in significant ways.
| Key Information | Details |
|---|---|
| Concept | Synthetic Media — content partially or entirely generated by AI or machine learning |
| Includes | Deepfakes, AI-generated images, video, audio, virtual influencers, synthetic news anchors |
| Key Incident | May 2023: Fake AI-generated Pentagon explosion image briefly moved U.S. stock markets |
| Scale (2026) | Majority of new images appearing online are now generated, not photographed |
| Key Provenance Standard | C2PA (Coalition for Content Provenance and Authenticity), supported by Adobe, Microsoft, OpenAI |
| Regulation (EU) | EU AI Act requires synthetic content to be machine-readably marked as artificial |
| Regulation (US) | NIST AI Risk Management Framework; 2023 executive order directing watermarking guidance for AI-generated content |
| UNESCO Warning | “Synthetic reality threshold”: the point where humans can no longer reliably distinguish authentic media from fabricated media |
| Prebunking Research | Exposure to weakened examples of manipulation measurably improves resistance, demonstrated in large-scale randomized trials |
| Case Study | Taiwan — combined rapid fact-checking, platform cooperation, and civic education against disinformation |
| Key Risk | False content travels faster and farther than corrections; detection lags behind generation |
| Reference Website | UNESCO, “Deepfakes and the Crisis of Knowing” |
The ramifications for human information processing are still being worked out in real time. UNESCO warns of a “synthetic reality threshold”: the point at which people can no longer reliably distinguish real media from fake without technological assistance. In certain domains, there is a credible case that the threshold has already been crossed. Deepfake video has improved to the point that even careful, attentive viewers are frequently fooled, and audio cloning tools can produce convincing voice replicas from a few seconds of recorded speech. The hypothetical scenario described in the UK Information Commissioner’s Office tech horizons report, in which a finance officer nearly completes a fraudulent wire transfer because an AI-generated video call convincingly imitates the company’s finance director, is no longer fiction in any meaningful sense. Law enforcement is already reporting variations of it.
What complicates this particular moment is the asymmetry between creation and detection. Generating convincing synthetic media is now fast, cheap, and available to ordinary people; detecting it reliably and at scale is an arms race that defenders are steadily losing. Automated detection systems look for inconsistencies, such as subtle mismatches between lip movement and audio, or blending artifacts along the boundary between manipulated and original regions. But as generative models improve, those inconsistencies become harder to find, and generation capability stays ahead of moderation systems. Correction is further undermined by the dynamics of distribution: false or emotionally charged content spreads faster than debunking, particularly in politically divisive settings. By the time a detection system flags something as synthetic, the original item has usually completed most of its journey through the information ecosystem.
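As a concrete illustration of the lip-sync check mentioned above, here is a minimal sketch in Python. It assumes two per-frame signals have already been extracted upstream (a speech loudness envelope from the audio track and a mouth-openness measure from facial landmarks; both extractors are hypothetical here) and simply correlates them. Production detectors use learned audio-visual embeddings rather than raw correlation; this only shows the shape of the idea.

```python
# Toy illustration of one detection idea: scoring audio-visual
# consistency. Real systems use learned embeddings (SyncNet-style
# models); this sketch just cross-correlates a speech loudness
# envelope with a mouth-openness signal. Both input series are
# assumed to come from upstream feature extractors.
import numpy as np

def av_consistency_score(audio_envelope: np.ndarray,
                         mouth_openness: np.ndarray) -> float:
    """Pearson correlation between per-frame loudness and mouth opening.

    Genuine talking-head footage tends to score high; dubbed or
    lip-synced fakes tend to score lower. Thresholds are illustrative.
    """
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-8)
    v = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-8)
    return float(np.mean(a * v))

# Demo with synthetic data: a "real" clip where the mouth tracks the
# audio, and a "fake" clip where the two signals are unrelated.
rng = np.random.default_rng(0)
audio = np.abs(np.sin(np.linspace(0, 20, 300))) + 0.1 * rng.normal(size=300)
real_mouth = audio + 0.2 * rng.normal(size=300)  # tracks the audio
fake_mouth = rng.random(300)                     # independent of it

print(f"real clip score: {av_consistency_score(audio, real_mouth):.2f}")  # high
print(f"fake clip score: {av_consistency_score(audio, fake_mouth):.2f}")  # near 0
```

The weakness the paragraph describes is visible even in this toy: as generators learn to synchronize mouths to audio, the score gap between real and fake clips shrinks toward nothing.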
As this develops, there is a growing sense that the detection-first strategy, building better tools to spot fake content after it appears, has reached its structural limit. Researchers are increasingly promoting provenance as the more promising approach. Adobe, Microsoft, OpenAI, and other organizations back the Coalition for Content Provenance and Authenticity (C2PA), a technical standard that embeds cryptographic metadata into digital content at the moment of creation, producing a verifiable chain of origin that travels with the file. The EU AI Act now requires providers of AI systems that generate synthetic content to mark their outputs as artificial in a machine-readable format. Watermarking initiatives exist in parallel, but there are doubts about how reliable they will prove in practice: current watermarking tools remain susceptible to removal or manipulation and can degrade media quality.
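To make the provenance idea concrete, the sketch below shows its cryptographic core in Python using the `cryptography` package: a claim about the asset is hashed, signed at creation, and verified later. This is not the actual C2PA manifest format (which uses JUMBF containers, X.509 certificate chains, and hard bindings to the asset); the field names and the model name here are illustrative assumptions.

```python
# Simplified illustration of the provenance idea behind C2PA: bind a
# signed claim to the asset's exact bytes at creation, so any later
# consumer can verify both integrity and origin. NOT the real C2PA
# serialization; only the cryptographic core.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_claim(asset: bytes, generator: str, key: Ed25519PrivateKey) -> dict:
    """Create a signed provenance claim for an asset at generation time."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "generator": generator,   # e.g. the model that produced the asset
        "is_synthetic": True,     # machine-readable "this is AI" label
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_claim(asset: bytes, manifest: dict, public_key) -> bool:
    """Check the signature and that the claim matches these exact bytes."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False              # the claim itself was forged or altered
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(asset).hexdigest()

# Demo: sign at creation, verify later; any edit breaks verification.
key = Ed25519PrivateKey.generate()
image = b"...generated image bytes..."
manifest = sign_claim(image, "example-image-model-v1", key)  # hypothetical name
print(verify_claim(image, manifest, key.public_key()))         # True
print(verify_claim(image + b"x", manifest, key.public_key()))  # False
```

The design choice worth noticing is that the signature binds the claim to the exact bytes of the asset, so any edit invalidates it. The corresponding weakness is that a manifest can simply be stripped from a file, which is why provenance is usually discussed as a complement to watermarking rather than a replacement.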
The most surprising finding from recent research concerns psychology rather than technology. Prebunking, exposing people to weakened examples of manipulation techniques before they encounter the real thing, acts as a kind of psychological inoculation, and it has shown measurable effectiveness in large-scale randomized trials across ideological groups. The most comprehensive real-world case study is Taiwan, which built significant resistance to coordinated disinformation campaigns during recent elections by combining rapid fact-checking with public education, platform cooperation, and civic engagement. Analysts note that Taiwan’s success depended not only on technology but on a populace that was already digitally literate and primed to expect manipulation. For countries that have made no such preparations, the lesson is unsettling.
The tension between the real and substantial creative and economic value of synthetic media and the equally real risks it poses to public trust, democratic discourse, and individual privacy remains unresolved. Eliminating synthetic media is neither feasible nor desirable. But a workable balance between scale and verification, between generation and accountability, is still a work in progress. The internet entered this era without a map. Regulators, technologists, educators, and ordinary users are all being asked to draw one in real time, while the territory keeps shifting.

