Prof. Sam Lehman-Wilzig

Prof. Sam Lehman-Wilzig: The Israel-Iran War: Military (Artificial) Intelligence

The current Iran-Israel war is historic – perhaps revolutionary – not because of an air war leading (hopefully) to the collapse of a brutal, dictatorial regime, or the successful initial decapitation of most of Iran’s leadership, or even a result of the extraordinary coordination between the IDF and the American military. True, these are all impressive accomplishments, but not quite “historic.”

What then? The real watershed here is the widespread use of artificial intelligence by the attacking powers, as The Wall Street Journal reported: “America is thought to have employed Claude, an artificial-intelligence model, to process intelligence, select targets and carry out military simulations” (https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-strikes-in-middle-east-use-anthropic-hours-after-trump-ban-ozNO0iClZpfpL7K7ElJ2).

This is somewhat ironic, because the company that developed Claude – Anthropic – is in the midst of a fierce legal battle with the U.S. government over the latter’s refusal to permit limits on Claude’s military use. In short, the American military currently uses Claude as the best AI program available for military purposes, but the U.S. Department of War wants complete independence in how it is employed. Anthropic opposes this, demanding that its contract with the U.S. stipulate that Claude will not be used for any illegal purposes (e.g., surveillance of U.S. citizens). For its part, the government argues that such a contractual clause is unnecessary: the government is in any case bound to obey American law regarding military matters. Other companies (e.g., OpenAI) have fewer moral compunctions about ensuring the proper use of their military AI (https://www.wired.com/story/ai-model-military-use-smack-technologies).

While this overall issue might seem overly legalistic and technical, underlying it is the far more worrisome “slippery slope” problem: using autonomous AI in war. To put it bluntly: wouldn’t this ultimately lead to giving AI the capability to make life-or-death decisions in warfare without human supervision?

This paramount question applies morally to every country. However, it also represents a major geo-strategic problem for the future, especially for advanced militaries such as the IDF and the U.S. armed forces.

Let me explain. The moral issue is clear to all: does humanity really want to enable non-humans to make decisions regarding the killing of other humans? True, even today most armies have semi-autonomous weaponry; but at the start of the process – the decision to fire or not – there is always a human being. One can easily see the problem with removing the “semi” and enabling the “autonomous” to be fully free of human input (other than the initial programming). For one, war is incredibly “messy” – it would be quite easy for an AI to make serious errors by misidentifying non-combatant civilians as enemy fighters, or by misjudging the amount of firepower needed to limit civilian casualties (as seems to have happened when an American missile mistakenly hit an Iranian girls’ school, killing dozens). One doesn’t need an overheated imagination to think of other “mistakes” an AI could make in the heat (and confusion) of battle.

For the world’s most advanced armies – the U.S., NATO, China, and Israel – the problem extends well beyond this. It has always taken years of training and heavy investment of resources to produce and arm professional soldiers. Israel is a classic example: true, it has some of the world’s most sophisticated armaments, but its real military advantage lies in the IDF’s highly trained (wo)manpower. The country has always managed to stay at least two steps ahead of its enemies largely because of its military prowess – not only skilled fighters but also creative intelligence.

AI threatens to undermine such a critical advantage. Why? Because it is far easier for a militarily minor country to develop a useful AI program (or adopt/buy one from an AI power that doesn’t share Anthropic’s moral compunctions) than to invest hugely in upgrading its warmaking manpower. In other words, whereas AI today mostly widens the gap between economically and educationally advanced countries and their militarily backward enemies, that will most probably change as AI programs come down in price and become military commodities, as widespread as rifles and hand grenades.

We have witnessed a similar process of “equalization” in the Russia-Ukraine War. What significantly reduced Russia’s advantage there in army size and military armaments is the newest tech on the block: Ukraine’s drones. They have effectively leveled the killing field, negating Russia’s overwhelming numerical and resource advantages. AI promises to be an even greater “equalizer” once such programs can be purchased off the shelf.

Many countries are aware of the moral issue and have begun, in coordinated fashion, to consider how to control military AI. For example, in 2023 the U.S. issued a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by several nations, aiming to create international norms for the responsible development and deployment of military AI – the emphasis being on meaningful human control. Simultaneously, several global powers initiated summit meetings to discuss best AI practices. Unfortunately, China has been unwilling to accept any formal constraints on developing military AI – a refusal that virtually guarantees an AI arms race.

The bottom line: at present, the world is watching in fascination (some in stupefaction) as Israel and the U.S. batter Iran, aided significantly by their advanced AI. Nevertheless, these two militarily powerful allies would do well to heed Shakespeare’s famous warning, lest they find themselves “hoist with his own petard” (Hamlet, Act 3, Scene 4).

Copyright © 2023 IsraelSeen.com
