by Tim Leogrande, BSIT, MSCP, Ed.S.

12 MARCH 2026 • 6 MIN 31 SEC READ



Social media influencers with access to generative AI are earning revenue from an unprecedented flood of misinformation about the US-Israeli war with Iran. A recent BBC investigation documented several instances of AI-generated videos and fake satellite imagery being used to make false and misleading claims about the conflict, and these posts have received hundreds of millions of views.

<aside> 💡

What once required expensive film production equipment and techniques can now be done in minutes with AI tools; the barrier to producing realistic-looking war footage has essentially evaporated.

</aside>

On February 28, the United States and Israel began operations in Iran. Iran retaliated with drone and missile attacks on Israel, several nearby Gulf states, and US military installations in the region. To find and share the latest information, and to make sense of a highly fluid situation, many people have turned to social media for updates.

This week, the platform X announced that producers who upload AI-generated videos of armed conflict without a label will be temporarily banned from its monetization program, which pays qualifying users whose content draws large numbers of views, likes, shares, and comments. The move signals that social media companies are beginning to recognize that AI-generated misinformation is a problem.

A typical example is an AI-generated video that appears to depict missiles striking the Israeli city of Tel Aviv while explosions sound in the background. A red "AI-generated" label appears on two screenshots of the fake video.

IMG_4376.webp

More than 300 posts have included this video, and those posts have been shared tens of thousands of times on social media. Several X users turned to the platform's AI chatbot Grok to verify the video's authenticity, but the bot falsely claimed that the AI-generated footage was real.

Tens of millions of people have watched another fake video that appears to show Dubai's Burj Khalifa skyscraper in flames with a crowd of people dashing toward it. At a time when residents and visitors were deeply worried about drone and missile strikes on the city, this AI-generated video quickly went viral.

IMG_4377.webp

Fake videos like these erode people's trust in the information they see online and make it harder to find and document real war footage. On the first day of the war, the BBC verified several authentic videos of Iranian drone and missile attacks on the US Navy's Fifth Fleet headquarters in Bahrain. The next day, a fake image purporting to show significant damage to the base was posted on X by the state-affiliated publication The Tehran Times. The fake appears to be based on real, publicly accessible satellite imagery of a US naval facility in Bahrain from February 2025. According to Google's SynthID watermark detection, the photo (below) was created or altered with a Google AI tool.

Three automobiles parked outside are in the exact same place in both the actual satellite imagery (left) and the AI picture (right), despite the photographs apparently having been taken a year apart.