A Deepfake of Israel's Former Defense Minister Was Caught Mid-Broadcast. The Next One Won't Be.
An Israeli TV anchor caught the Gallant deepfake mid-broadcast. 'This is cooked,' she said. The video showed Israel's former defense minister speaking Hebrew with a Persian accent. It was the first deepfake caught on live television during a war. It will not be the last.

Channel 14, an Israeli news network, aired a video of former Defense Minister Yoav Gallant during its evening newscast. The video appeared to show Gallant making statements about the Iran campaign. The anchor paused. "This is cooked," she said on air. Gallant spoke Hebrew with a Persian accent. The lip-sync was slightly off. The background was AI-generated.
The video had entered the news production pipeline through a social media account that mimicked the format of an IDF spokesperson account. It was flagged by the anchor's instinct, not by any automated detection system. An internal investigation was launched; no arrests have been made.
This is the "liar's dividend" in action. The existence of deepfakes means that real video evidence can now be dismissed as fabricated, and fabricated video can momentarily pass as real. Both sides benefit from this ambiguity. Both sides exploit it.
The scale of AI-generated disinformation in the first three weeks of the war:

- EDMO documented 592 fact-checks of AI-generated or manipulated content.
- The New York Times found 110+ AI-generated pieces in the first two weeks.
- Cyabra documented 145 million+ views from a single pro-Iran campaign.
- BBC Verify found the three most-viewed AI videos exceeded 100 million views combined.
- A Basij-linked network of 289 accounts on X generated 18+ million views.
- Iranian AI content on TikTok exceeded 100 million views.
- X premium accounts spread 77% of the identified disinformation; only 2 of 34 flagged posts received Community Notes.
Why is this the first AI disinformation war?
Previous wars had propaganda. This war has industrial-scale synthetic media. The difference is production cost and speed. A propaganda poster took days to design and distribute. A deepfake video takes minutes to generate and reaches millions in hours. The information black hole inside Iran (internet connectivity at 1-4% of normal levels) means Iranians cannot verify what's real. The information flood outside Iran means everyone else drowns in content they can't verify either.
The deepfake threat is bilateral. Iran produces AI content targeting Israeli and American audiences (the Gallant video, fake CENTCOM statements, fabricated casualty footage). Israel and US-aligned actors produce AI content targeting Iranian audiences (fake IRGC surrender announcements, fabricated protest footage, manipulated satellite imagery showing damage that doesn't exist). Both sides use the same tools (open-source image generators, voice cloning, video synthesis).
The anchor who caught the Gallant deepfake was a human. The detection was instinct, not algorithm. In a media environment where AI detection tools consistently lag behind AI generation tools, the last defense against disinformation is the training and judgment of individual journalists. Automate that away and you automate away the only filter that works.
FAQ
Can deepfake detection tools keep up?
No. Detection tools lag: text detectors (GPTZero, Originality.ai) and video tools (Microsoft Video Authenticator) consistently run 6-12 months behind the generation tools they target. They work on yesterday's deepfakes, not today's. The only reliable detection is metadata analysis (provenance, upload source, distribution pattern) combined with human editorial judgment. The Gallant deepfake was caught by a person, not a program. The sketch below shows what the metadata step looks like in practice.
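A minimal sketch of metadata triage, assuming ffprobe (part of FFmpeg) is installed and on the PATH. The red-flag heuristics here are illustrative, not a detector; they are the kind of provenance checks a verification desk runs before any pixel-level analysis.

```python
import json
import subprocess

def probe(path: str) -> dict:
    """Dump container and stream metadata as JSON via ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def provenance_flags(meta: dict) -> list[str]:
    """Heuristic red flags for a verification desk; illustrative only."""
    flags = []
    tags = meta.get("format", {}).get("tags", {})
    if "creation_time" not in tags:
        flags.append("no creation_time: container was stripped or re-muxed")
    encoder = tags.get("encoder", "")
    if encoder.startswith("Lavf"):
        # FFmpeg's muxer writes "Lavf..."; a re-encode discards device metadata.
        flags.append(f"re-encoded with FFmpeg ({encoder}): original device metadata lost")
    # Direct phone recordings usually carry a camera make tag (e.g. QuickTime's).
    if not any(k in tags for k in ("com.apple.quicktime.make", "make")):
        flags.append("no camera make/model tag: not a direct device recording")
    return flags

if __name__ == "__main__":
    import sys
    for flag in provenance_flags(probe(sys.argv[1])):
        print("FLAG:", flag)
```

None of this proves a video is synthetic; it establishes whether the file's history is consistent with its claimed origin, which is the question the Gallant clip would have failed.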
Could a deepfake start a war?
The Gallant video was caught before it could influence decision-making. But a deepfake of a head of state ordering an attack, broadcast through a hacked media channel during a crisis, could trigger a military response before verification is possible. The decision cycle for missile defense (minutes) is shorter than the verification cycle for video authenticity (hours). This asymmetry is the deepfake's real weapon.
Why does X have the worst disinformation problem?
X's premium accounts (paid subscribers) receive algorithmic amplification. This means that any account willing to pay $8/month gets distribution advantages regardless of content quality. Iranian and Russian bot networks exploited this by purchasing premium accounts. Combined with the near-elimination of content moderation staff, X became the primary distribution channel for state-sponsored disinformation.
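A toy model makes the distortion concrete. Everything in the sketch below is hypothetical; X's actual ranking system is not public. The point is structural: a flat paid-tier multiplier boosts reach independently of any quality signal, so a low-quality paid post can outrank a high-quality unpaid one.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # organic signal, 0..1
    quality: float     # hypothetical trust/quality score, 0..1
    premium: bool      # author pays for the subscription tier

# Illustrative value, not a documented figure.
PREMIUM_BOOST = 4.0

def feed_score(p: Post) -> float:
    """Hypothetical feed score: quality modulates the base, but the
    paid-tier multiplier applies regardless of content quality."""
    base = p.engagement * (0.5 + 0.5 * p.quality)
    return base * (PREMIUM_BOOST if p.premium else 1.0)

# A low-quality paid post outranks a high-quality unpaid one.
botnet_post = Post(engagement=0.6, quality=0.1, premium=True)
newsroom_post = Post(engagement=0.6, quality=0.9, premium=False)
assert feed_score(botnet_post) > feed_score(newsroom_post)
```

Because the multiplier applies before any moderation signal, buying the tier is the cheapest amplification a state-backed network can purchase.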