Sabine Sterk: When Artificial Intelligence Turns War Into Fiction
From Gaza to global conflict zones, AI-generated images are distorting public perception, manufacturing false memories, and threatening the integrity of historical truth.
In war, truth has always been contested. Today, however, it is not only contested; it is also synthetically manufactured.
The war in Gaza has revealed a new and dangerous front in global conflict: the use of AI-generated images presented as historical reality. These images do not merely misinform; they reshape memory, distort judgment, and threaten the integrity of history itself. Israel, long subjected to narrative warfare, now finds itself facing a technological escalation that affects not only this conflict, but the very way modern societies understand war.
This is not about technology in the abstract. It is about truth, accountability, and moral responsibility.
The Authority of Images and the Collapse of Trust
For over a century, photographs have carried evidentiary weight. A photograph implied a photographer, a moment, a place, and a witness. Even manipulated images had origins that could be investigated.
Artificial intelligence breaks that chain entirely.
An AI-generated image has no camera, no photographer, no timestamp, and no physical reality behind it, yet it looks indistinguishable from real war photography. When such images circulate on social media or appear in activist campaigns without clear disclosure, they acquire false authority. Viewers do not ask whether the image is real; they assume it is.
Emotion precedes verification.
Gaza and the Rise of Synthetic Outrage
The war in Gaza has become one of the most visually saturated conflicts in modern history. Alongside legitimate journalism and authentic documentation, there is now a flood of AI-generated visuals depicting destruction, suffering, and alleged atrocities.
Some of these images are shared out of ignorance. Others are shared deliberately to provoke outrage and assign blame before facts can be established. In this environment, Israel is judged not by military conduct, international law, or verified evidence, but by synthetic scenes designed to elicit moral certainty.
This is not to deny civilian suffering. Civilians suffer tragically in every war. But when suffering is digitally fabricated or exaggerated, it ceases to be testimony and becomes instrumentalized emotion.
From Misinformation to Manufactured Memory
The most dangerous consequence of AI-generated war imagery is not immediate misinformation, but long-term historical distortion.
Repeated exposure to fabricated images creates false collective memory. People begin to “remember” scenes that never occurred, events that never happened as depicted. Over time, these images become part of the mental archive through which future generations understand the conflict.
History is no longer written; it is rendered.
Once memory is shaped by fiction, correcting it becomes nearly impossible.
A Global Threat, Not an Israeli Exception
Although Israel is a prime target of narrative warfare, this danger extends far beyond the Middle East.
AI-generated historical visuals are already appearing in:
- The war in Ukraine
- Conflicts in Syria, Yemen, and Sudan
- Alleged massacres in Africa and Asia
- Protest movements and civil unrest in Western democracies
In each case, AI imagery simplifies complex realities into emotionally charged morality tales. Context disappears. Responsibility is assigned visually rather than factually. History becomes aesthetic ideology.
This is not progress. It is a regression.
Bias Embedded in Code
Artificial intelligence is not neutral. It is trained on existing data: data shaped by political bias, cultural framing, and selective narratives.
When AI generates “historical” images, it often reinforces stereotypes and ideological assumptions. For Israel, a nation already subjected to persistent double standards and visual demonization, this is especially dangerous. Old tropes are reborn with digital polish.
Technology does not eliminate bias. It automates and scales it.
The Collapse of Historical Standards
Serious history depends on standards: primary sources, provenance, corroboration, and context.
AI-generated images offer none of these.
They have no evidentiary value, no accountability, and no verifiable origin. Yet when treated as documentation, they erode trust not only in images, but in all historical evidence. Ironically, this benefits extremists and denialists most: those who thrive on the claim that nothing can be proven.
When everything can be fake, truth itself becomes optional.
Ethics and the Exploitation of Suffering
There is also a moral line that must not be crossed.
Generating hyper-realistic images of dead children, grieving families, or devastated civilians risks turning real human suffering into a digital spectacle. Trauma should not be simulated for engagement. Victims should not be reduced to prompts.
War is not content. Suffering is not a tool.
Why This Matters Profoundly for Israel
Israel is not only defending itself militarily. It is defending its legitimacy, its history, and its moral standing as a democratic state operating under the laws of armed conflict.
When artificial images replace evidence, international discourse collapses into emotional reaction rather than legal or ethical analysis. A democracy cannot be judged by fiction.
If this standard is accepted for Israel, it will soon be applied everywhere.
A Necessary Line Forward
AI-generated images must never be presented as historical evidence.
If used at all, they must be:
- Clearly labeled as AI-generated
- Used only for abstract illustration
- Never depicting real individuals or alleged crimes
- Always accompanied by verified sources
Anything less is not education. It is deception.
History belongs to evidence, not algorithms.
If artificial intelligence is allowed to overwrite memory, we will lose more than Israel’s story. We will lose the possibility of truth in war anywhere, for anyone.
And without truth, justice becomes impossible.
