by Tim Leogrande, BSIT, MSCP, Ed.S.

🗓 MAR 12 2026 • 5 MIN 12 SEC READ


Social media influencers with access to generative AI are earning revenue from an unprecedented flow of misinformation about the US-Israel war with Iran. A recent BBC investigation identified several AI-generated videos and fake satellite images being used to make false and deceptive claims about the conflict; together, these posts have received hundreds of millions of views.

<aside> 💡

What once required expensive film production equipment and techniques can now be done in minutes with AI tools: the barrier to producing realistic-looking war footage has essentially evaporated.

</aside>

On February 28, the United States and Israel began operations in Iran. Iran retaliated with drone and missile attacks on Israel, several nearby Gulf states, and US military installations in the region. To find the most recent information and make sense of a highly fluid situation, many people have turned to social media for updates.

This week, the platform X announced that producers who upload AI-generated videos of armed conflict without a label will be temporarily banned from its monetization program, which pays qualifying users whose content receives large numbers of views, likes, shares, and comments. The move signals that social media companies are starting to recognize AI-generated misinformation as a problem.

A typical example of an AI-generated video appears to depict missiles striking the Israeli city of Tel Aviv, with explosions audible in the background. A red "AI-generated" label appears on two screenshots of the fake video.

This video has appeared in more than 300 posts, which have been shared tens of thousands of times across social media. Several X users turned to the platform's AI chatbot Grok to verify the footage; the bot falsely declared the AI-generated video authentic.

Another phony video, viewed by tens of millions of people, appears to show Dubai's Burj Khalifa skyscraper in flames as a crowd of people rushes toward it. This AI-generated video went viral at a time when residents and visitors were deeply worried about drone and missile strikes on the city.

Fake videos like these erode trust in the information people see online and make it harder to find and document real war footage. On the first day of the war, the BBC verified several authentic videos of Iranian drone and missile attacks on the US Navy's Fifth Fleet headquarters in Bahrain. The next day, the state-affiliated publication The Tehran Times posted on X a fake image purporting to show significant damage to the base. The fake appears to be based on real, publicly accessible satellite imagery of a US naval facility in Bahrain from February 2025. According to Google's SynthID watermark detection, the photo (below) was created or altered with a Google AI tool.

Three cars parked outside are in the exact same place in both the actual satellite imagery (left) and the AI picture (right), despite the photographs allegedly having been taken a year apart.


A growing number of well-known AI systems can produce such imagery, including Google's video generator Veo and other Google AI tools, OpenAI's Sora model, the Chinese AI software Seedance, and Grok, which is integrated into X. The range of tools now available for creating highly realistic AI imagery is unprecedented.

This fake image (right) of a huge explosion at a US base in Iraq has been manipulated using AI based on a real image (left) showing a much smaller cloud of smoke.


Many of the accounts spreading these AI-generated videos appear to be trying to "game monetization" by posting content engineered for high engagement, which earns payment through the app's Creator Revenue Sharing program. X does not disclose how many accounts are enrolled or how much they can earn, but estimates suggest the platform pays between eight and twelve dollars per million verified user impressions. To qualify, creators must accumulate five million organic impressions within three months and hold an X Premium subscription. Once in, however, viral AI-generated content is essentially a license to print money.
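To make the scale of the incentive concrete, the figures above can be turned into a back-of-envelope estimate. This is a minimal sketch assuming the reported rates ($8 to $12 per million verified impressions) and the reported eligibility thresholds (five million organic impressions in three months, plus a premium subscription); the function names and the 50-million-impression scenario are illustrative assumptions, not anything X publishes.

```python
# Back-of-envelope model of X Creator Revenue Sharing payouts, using the
# estimated rates and eligibility thresholds reported in the article.
# All names here are illustrative; X does not publish an official formula.

RATE_LOW = 8.0    # estimated dollars per million verified impressions (low)
RATE_HIGH = 12.0  # estimated dollars per million verified impressions (high)

ELIGIBILITY_IMPRESSIONS = 5_000_000  # organic impressions over 3 months


def estimated_payout(impressions: int) -> tuple[float, float]:
    """Return a (low, high) dollar estimate for a given impression count."""
    millions = impressions / 1_000_000
    return (millions * RATE_LOW, millions * RATE_HIGH)


def is_eligible(organic_impressions: int, has_premium: bool) -> bool:
    """Rough eligibility check using the thresholds reported above."""
    return has_premium and organic_impressions >= ELIGIBILITY_IMPRESSIONS


if __name__ == "__main__":
    # Hypothetical scenario: one viral AI video whose reposts total
    # 50 million impressions.
    low, high = estimated_payout(50_000_000)
    print(f"50M impressions -> ${low:,.0f} to ${high:,.0f}")  # $400 to $600
    print("Eligible:", is_eligible(6_000_000, has_premium=True))
```

Even at these modest per-impression rates, a handful of viral fakes per month can add up to meaningful income in many countries, which helps explain why engagement-bait AI content keeps appearing.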

The key issue is that engagement-driven monetization and accurate information are fundamentally at odds. No platform has fully resolved this tension, and perhaps none ever will, even though many social media companies say they are overhauling their moderation and detection systems to keep pace with the scale and speed at which AI-generated content proliferates.

Ultimately, the surge of AI-generated war footage highlights a troubling shift in how information spreads during global crises. When realistic videos and images can be produced in minutes, the line between authentic reporting and fabricated spectacle becomes increasingly difficult to discern.

<aside> 💡

In fast-moving conflicts, where people rely on social media for immediate updates, misinformation can spread far faster than verification efforts. The result is an information environment where confusion, fear, and speculation can easily overwhelm reliable reporting.

</aside>