The Jerusalem Post

Is the IDF’s AI revolution a technology or ethics issue? - analysis

 
Soldiers of the IDF's Shahar Unit. (photo credit: IDF SPOKESPERSON'S UNIT)

Bloomberg published a report about AI systems used by the IDF which multiply how many targets the IDF can study and strike.

Bloomberg published a report on Sunday discussing the IDF’s “Fire Factory” artificial intelligence platform used in conjunction with another AI platform to vastly multiply how many targets the IDF can study and strike simultaneously.

The Jerusalem Post published an extended report on the same target bank and AI systems in April, following a visit to the office of the actual Fire Factory commander, IDF Col. “S.”

The Post report delved deep into how the AI target bank and platform have altered the balance of any battle the IDF might fight with Hezbollah or Hamas: the IDF can now double the number of targets in the target bank over the course of a conflict, instead of running out of targets.

In addition, the Post report traced the evolution of the IDF's use of AI over the years, especially since 2019 and 2021.


What was new in the Bloomberg report, in which IDF Col. “Uri” of the Information Technology Division was interviewed, was its more detailed exploration of ethics.

IDF recruits at the Military Intelligence language school (credit: IDF SPOKESPERSON'S UNIT)

Bloomberg’s report said, “While both systems are overseen by human operators who vet and approve individual targets and air raid plans, according to an IDF official, the technology is still not subject to any international or state-level regulation.”

“Proponents argue that the advanced algorithms may surpass human capabilities and could help the military minimize casualties, while critics warn of the potentially deadly consequences of relying on increasingly autonomous systems,” said the report.

The Post did engage top IDF and other relevant government officials on ethics issues, with the officials strongly arguing exactly what Bloomberg quotes them as saying above: humans still make the decisions and the overwhelming ethical impact of the target bank and AI platform is to be more precise and to reduce mistakes so as to harm fewer civilians.




However, the Bloomberg report goes a level deeper, presuming an operation in which the AI target bank actually makes an error that kills civilians.

Who then can be probed or prosecuted?

The truth is there is no answer to this question.


Prosecutors could go after Col. S as the commander responsible for the target bank, but his background is more that of an intelligence manager, and it would be hard to hold him accountable for technological failures beyond his expertise.

In contrast, Col. Uri and other technology units might bear greater responsibility for the technological aspects of the platform that directly or indirectly led to the error.

But these technologists are more involved at the manufacturing stage and may have nothing whatsoever to do with the actual targeting decision, so how can they be held accountable?  

Asked to explain how he could police an AI platform that moves too fast and in too complex a manner for a human brain to follow, Uri stated, “Sometimes when you introduce more complex AI components, neural networks and the like, understanding what ‘went through its head,’ figuratively speaking, is pretty complicated.”

“And then sometimes I’m willing to say I’m satisfied with traceability, not explainability. That is, I want to understand what is critical for me to understand about the process and monitor it, even if I don’t understand what every ‘neuron’ is doing,” Uri told Bloomberg.

All of what Col. Uri said comes down to the fact that it would be very difficult to argue in a criminal setting that he fully understood what decision the AI made and why. He might be able to trace the key data points the AI went through before it reached its decision, but that would likely fall far short of being a basis for prosecution.

And the actor who would be least prosecutable would be the drone pilot who pulls the trigger, because he is merely carrying out a decision that he knows was approved by both the AI and an IDF intelligence officer.

If there is no way to prosecute anyone, neither the trigger puller, nor his commander, nor the technological architect, then isn’t that a convenient construction of a paradigm that makes accountability impossible?

Once again, the IDF and other government officials told the Post that mistakes have radically dropped due to the precision of the AI.

Earlier this month, the IDF carried out more than 20 drone airstrikes in Jenin, killing only 12 Palestinians, all of them combatants, and not a single civilian.

The August 2022 and May 2023 conflicts with Gaza also saw minuscule numbers of civilian deaths compared to prior conflicts, and in most cases where civilians were killed, IDF intelligence and legal officers knew the risks and approved the strikes on the basis that the military advantage outweighed the harm to a limited number of civilians (such as family members of top terrorists).

These were not AI errors.

Yet, at some point, there will likely be an AI-led error that kills civilians, even if unlikely nightmare "Terminator" movie-style scenarios of AIs turning anti-human never transpire.

If the IDF wants to avoid a spike in war crimes charges, it should already start working out a format for holding someone accountable in such a case, one that like-minded countries such as the US and EU members would be willing to recognize.
