
Balancing ethics and efficiency: What is AI’s place in modern wars?

The Pentagon-Anthropic dispute strikes at the heart of the moral minefield that is AI weapons systems.

When, in early March, the Pentagon designated AI firm Anthropic a “supply-chain risk”, the dispute was less a procurement issue than a confrontation over the morality of modern warfare. The US defence secretary, Pete Hegseth, and others castigated Anthropic after its CEO, Dario Amodei, refused to relax guardrails that prevent fully autonomous lethal uses, even as its Claude software was reportedly being used to strike targets in Iran. Today, militaries use four main types of AI system: guidance that allows a drone to reach its target when communication is lost; automatic recognition of vehicles; navigation without reliance on GPS; and software that plans routes and co-ordinates multiple units. The goal is for these tools to support commanders rather than replace human judgement.

The fight between the Pentagon and Anthropic matters because it exposes where human judgement sits in the loop of AI-driven operations. Analysts use the shorthand “human-in/on/out-of-the-loop” to describe whether people decide, advise or are effectively bypassed by AI. That tension between principled guardrails and battlefield expedience is now a central policy problem for governments and militaries the world over.

Ukraine is using anti-drone guns to neutralise Russian aerial threats (Image: Viktor Fridshon/Global Images Ukraine via Getty Images)

Major powers are pouring money into systems that promise to compress the “kill chain”: fusing satellite imagery, signals and sensor feeds into near-instant target recommendations. But investment is not the same as operational maturity. So far, 21st-century conflicts have chiefly used AI to amplify intelligence, surveillance and reconnaissance, not to field fully independent killer systems.

In Ukraine, AI systems have been used to speed up signal processing, classify equipment from drone feeds and turn those classifications into target recommendations. Platforms such as Project Maven, Palantir’s Gotham and AIP, as well as Ukrainian ones such as Delta and Kropyva, have enabled real-time battlefield awareness and rapid target identification. This has been driven by operational necessity: Ukrainian forces have strong incentives to push towards greater autonomy at the platform level because once communications are jammed or severed, remotely controlled systems become ineffective.

In the Israel-Gaza conflict, meanwhile, the IDF has integrated AI-enabled systems, such as The Gospel and Lavender, into its targeting processes. These platforms sift through intelligence to generate and prioritise strike targets at scale. While they have sped up combat operations, especially in densely populated areas, critics warn that this acceleration can lower the bar for using lethal force and increase the risk of civilian harm. The faster pace can also compress verification processes, meaning that errors might translate into strikes more quickly. The issue is not just whether humans remain in the loop but how meaningful that control is under conditions of great speed and scale.

This moment is a real test for multilateral regulation. Agreeing on limits on AI-powered military targeting will be difficult at a time of increased strategic rivalry. Some states, such as the US and China, appear more willing to push autonomy further into operational use. Others, including many European states, remain cautious for legal, ethical and political reasons. What states choose to do will shape norms, procurement decisions and what counts as acceptable risk on the battlefields of tomorrow.
