The U.S.–Israel campaign against Iran has rapidly become a high-tempo, multi-domain war in which speed is treated as decisive, but speed is exactly where AI without guardrails becomes most dangerous. Experts warn that AI "shortens the kill chain" by compressing the path from target discovery to strike execution. The result is "decision compression": human commanders and military lawyers may have only minutes or seconds to evaluate machine-generated options rather than genuinely scrutinise them. Reporting has also indicated that the Pentagon used Anthropic's AI services during the strike campaign, even as key details about how those tools were applied remained ambiguous. That gap matters to the industry because opaque AI use makes errors harder to detect and accountability easier to evade. In parallel, the conflict is already generating macro spillovers, including oil shipping disruption, flight-route chaos, and rising transport costs. Algorithm-accelerated operations, in other words, do not stay "tactical" for long; they become a systemic risk amplifier for global commerce.
Away from the blast radius, AI is also altering the conflict through cyber operations and synthetic media, and here "no guardrails" looks like a contagion mechanism. As coordinated cyber and space effects degrade communications and sensor networks, the incentive grows to automate reconnaissance, phishing, disruption, and influence operations, because the cheapest actions can be repeated at massive scale. U.S.-linked cyber activity was reported alongside the initial strikes, and threat analysts describe escalating phishing campaigns and hacktivist activity spilling over into regional governments, logistics hubs, and civilian infrastructure: exactly the targets that can trigger panic buying, supply interruptions, or politically convenient crackdowns. At the same time, the information environment is being flooded with AI-generated and repurposed "battlefield proof". Viral fake clips and altered images circulate on X, often boosted by the monetisation dynamics around blue-check accounts; BBC Verify has documented AI fakes reaching mainstream audiences, and even chatbots sometimes vouch for fabricated content, eroding trust at the moment the public and investors most need reliable signals.
The ultimate danger is that once major powers normalise AI-mediated targeting, "embedded" cyber operations, and scalable synthetic-media campaigns under wartime pressure, those practices become exportable precedents. They can be invoked elsewhere to justify weaker oversight, broader surveillance, and more permissive rules of engagement, with rivals and smaller states arguing that they cannot afford restraint if adversaries are automating aggression. Legal and humanitarian warnings converge on the same point: autonomy and unreliable AI systems create escalation risk and make compliance with the laws of war harder. That is why the ICRC emphasises the need to preserve meaningful human control and calls for new legally binding rules, while legal scholars warn that corporate "guardrails" can be treated as negotiable frictions rather than hard constraints when states demand unrestricted lawful military use. In market terms, this is how an Iran-centred conflict becomes a template that spreads not just through oil and shipping shocks but through the global diffusion of AI tactics: automated influence, cyber disruption, and decision-compression doctrines that can destabilise elections, accelerate arms procurement, and reprice geopolitical risk across unrelated regions.
Sources: The Guardian, Reuters, WIRED, BBC
Photos: Unsplash
Written by: Ariff Azraei Bin Mohammed Kamal