Israeli forces display power of AI, but it’s a double-edged sword
24.04.2024 / 07:08 | Updated: 24.04.2024 / 07:17
Mr. Michael Raska, a Czech expatriate, is an Assistant Professor in the Military Transformations Programme at the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore. He recently published an interesting article in The Straits Times on the role of AI in military systems, illustrated by the example of the Israel Defence Forces.
Integrating AI in defence is a complex issue that countries like Singapore must contend with. Israel’s recent operations show why.
On April 13, the Israel Defence Forces (IDF), in collaboration with the US, British, French and Jordanian air forces, successfully intercepted over 300 incoming drones and missiles launched from Iran. Credit for this should go to Israel’s multilayered air defence systems, which use artificial intelligence (AI) algorithms to track and intercept missiles in real time.
These AI algorithms enable split-second decision-making and target allocation for Israel’s air defences – the Arrow, David’s Sling and Iron Dome systems.
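To make the kind of computation involved concrete, here is a minimal sketch in Python of how a layered air defence might rank incoming threats and allocate interceptors. Everything in it – the threat attributes, the scoring weights and the greedy assignment – is an invented simplification for illustration, not a description of the actual algorithms behind Arrow, David’s Sling or Iron Dome.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        track_id: str
        distance_km: float          # current distance from the defended area
        speed_km_s: float           # closing speed
        predicted_lethality: float  # model-estimated damage if it lands, 0..1

    def time_to_impact_s(t: Threat) -> float:
        return t.distance_km / t.speed_km_s

    def priority(t: Threat) -> float:
        # Invented heuristic: the more lethal and the more imminent, the higher.
        return t.predicted_lethality / max(time_to_impact_s(t), 0.1)

    def allocate(threats, interceptors_available: int) -> dict:
        # Greedy allocation: engage the highest-priority threats first,
        # one interceptor each, until the magazine runs out.
        assignments = {}
        for t in sorted(threats, key=priority, reverse=True):
            if interceptors_available == 0:
                break
            assignments[t.track_id] = "engage"
            interceptors_available -= 1
        return assignments

    salvo = [
        Threat("T1", distance_km=90.0, speed_km_s=0.8, predicted_lethality=0.9),
        Threat("T2", distance_km=40.0, speed_km_s=0.3, predicted_lethality=0.2),
        Threat("T3", distance_km=25.0, speed_km_s=1.0, predicted_lethality=0.7),
    ]
    print(allocate(salvo, interceptors_available=2))  # T3 and T1 are engaged

In a real system the priority model itself would be learned from data rather than hand-written, which is precisely where the concerns about bias and reliability discussed below come in.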
But AI can also be used when a nation goes on the offensive – and this has raised some serious concerns. For example, media reports by Israeli publications +972 Magazine and Local Call have shed light on Project Lavender, an AI-powered database allegedly used by the IDF to identify bombing targets in Gaza with minimal human oversight.
According to the report, Project Lavender uses AI to process vast amounts of data and generate potential targets for air strikes, including individuals affiliated with Hamas and Islamic Jihad. The report claims that Lavender’s targets played a significant role in Israel’s military operations in Gaza, particularly during the initial weeks of the conflict, which resulted in massive casualties among Palestinians.
The IDF has denied the claims made in the media report, stating that Lavender is not a system designed to target civilians but rather a database used to cross-reference intelligence sources and gather information on military operatives of terrorist organisations.
But the controversy underscores the challenges of using AI in military operations, including concerns about transparency, accountability, and ethical decision-making in conflict situations.
The shift towards AI
While the specifics of Project Lavender remain shrouded in secrecy, sources suggest that the IDF’s strategies are increasingly being driven by AI.
Reputable Israeli military publications such as The Dado Centre Journal and Ma’arachot have cited key documents to outline the IDF’s strategic vision. This involves leveraging AI to usher in autonomous and smart transformations within the IDF, fundamentally reshaping the character and conduct of warfare.
From the IDF’s perspective, AI technology is not just a valuable intelligence tool but also a crucial force multiplier, especially in response to the evolving technological and strategic capabilities of its adversaries.
This shift in mindset repositions Israel from a paradigm of “asymmetric warfare” against perceived “inferior forces” to confronting “well-organised, well-trained, well-equipped rocket-based terror armies”.
In practical terms, AI empowers the IDF to assimilate intelligence from diverse warfare domains and disseminate it efficiently across various combat units. This enables unmanned systems to be deployed for highly precise and potent military strikes.
The AI revolution has also seen the IDF adjust its operational methods, leading to organisational restructuring.
The IDF’s Momentum Plan, unveiled in 2020, for instance, established the Digital Transformation Administration – a centralised “operational internet” platform to streamline communication and connectivity across the IDF.
Simultaneously, the Intelligence Directorate serves as a pivotal hub for AI integration, spearheading initiatives such as the reported Project Lavender, aimed at target identification and allocation.
Experimental units like 888 “Refaim” and the larger 99 Division “Bazak” are actively testing AI-driven combat technologies, including unmanned aerial vehicles, to augment the IDF’s combat prowess.
Dangers of relying on AI
However, the proliferation of AI-powered capabilities also raises critical concerns.
These include ethical dilemmas, potential biases and the intricate challenges of assessing the legality and accountability of AI-driven military operations.
Advanced militaries such as the IDF must grapple with the contending legal and ethical implications of using AI in warfare. For instance, AI algorithms learn from data that may contain biases or inaccuracies, potentially leading to unreliable decisions in military scenarios.
Additionally, intricate AI models raise questions of explainability, such as: “Why did our system provide this recommendation or take that action?”
Achieving the right balance between human oversight and AI-enabled autonomy is key to preventing unintended consequences and retaining human control over military decision-making. Equally important are dependable algorithms capable of adapting to environmental changes and learning from unforeseen events. Errors made by AI systems can have severe ramifications on the battlefield.
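As a toy illustration of both points – explainability and human oversight – the following sketch uses a simple linear score whose per-feature contributions can be read off directly, and routes borderline cases to a human operator instead of acting autonomously. All feature names, weights and thresholds here are invented for the example and do not reflect any real system.

    # Illustrative only: invented features, weights and thresholds.
    WEIGHTS = {
        "radar_signature": 0.5,
        "trajectory_anomaly": 0.3,
        "signal_emissions": 0.2,
    }

    def score_with_explanation(features):
        # A linear score keeps each feature's contribution inspectable,
        # so "why did the system recommend this?" has a direct answer.
        contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
        return sum(contributions.values()), contributions

    def decide(features, engage_threshold=0.8, review_threshold=0.5):
        score, why = score_with_explanation(features)
        if score >= engage_threshold:
            decision = "recommend engagement"
        elif score >= review_threshold:
            decision = "refer to human operator"  # human-in-the-loop gate
        else:
            decision = "no action"
        return decision, score, why

    track = {"radar_signature": 0.9, "trajectory_anomaly": 0.4,
             "signal_emissions": 0.6}
    decision, score, why = decide(track)
    print(decision, round(score, 2), why)
    # -> refer to human operator 0.69 {'radar_signature': 0.45, ...}

The point of the toy is structural: the simpler and more inspectable the decision path, the easier it is to audit, and the lower the confidence, the more the system should defer to a human. Complex deep-learning models rarely offer such direct answers, which is what makes the explainability question hard in practice.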
Furthermore, on a strategic level, the advancement and deployment of sophisticated military AI could spark a race for lethal autonomous weapons technologies, heightening the potential for conflict escalation by favouring machine-driven decisions over human judgment.
Despite these concerns, the weaponisation of algorithmic warfare is poised to advance rapidly due to ongoing breakthroughs in science and technology.
Meanwhile, the international community is in the nascent stages of developing viable governance mechanisms for military AI, such as the Responsible AI in the Military Domain (REAIM) process launched in 2023.
Israel may also feel that adhering to international AI norms could limit its capabilities. That is why, instead of following international AI governance initiatives, the Israeli military focuses on developing internal ethical guidelines to safeguard AI systems.
Singapore’s defence and AI
The long-term strategic implications of the AI revolution in future conflicts require a re-evaluation of defence policy planning and management, including the direction of weapons development and research and development efforts.
The implications of AI advancements and challenges in warfare also extend beyond traditional powerhouses like Israel, encompassing countries with strategic interests and technological ambitions, such as Singapore.
As a small nation with a strong focus on innovation and technology, Singapore is keenly aware of the transformative potential of AI in defence and security.
In December 2023, Singapore updated its National Artificial Intelligence Strategy (NAIS 2.0), signalling its ambition to become a global leader in the conscientious and innovative utilisation of AI. The strategy serves as a comprehensive roadmap for the entire government.
The core philosophy of AI governance embedded within NAIS 2.0 revolves around several key principles: ethical and responsible AI, transparency, collaboration and inclusivity, and human-centric AI.
These principles shape the way AI is integrated into Singapore’s defence and military innovation efforts.
Indeed, Singapore Mindef’s preliminary guiding principles for AI governance in defence innovation and military use prioritise responsible, safe, reliable and robust AI development.
By harnessing AI systems, cloud technologies and data science, the Singapore Armed Forces (SAF) aims to automate tasks, enhance decision-making processes and optimise capabilities. This approach could see the SAF become more effective in a more volatile and uncertain regional security environment.
Going beyond technology
Singapore’s integration of AI in its defence strategy extends beyond technological advancement; it also underscores the importance of defence diplomacy as a core anchor of its national security approach, emphasising both deterrence and resilience.
Central to these efforts is a growing emphasis on regulating AI development and use, particularly regarding lethal autonomous weapons systems (LAWS), with the aim of establishing responsible and ethical norms for AI in warfare and promoting regional peace and stability.
Mindef’s preliminary guiding principles, first introduced in 2021, anchor this effort.
These principles also serve as a foundation for advancing responsible AI defence diplomacy, promoting ethical and accountable AI use in military applications.
In this context, Singapore has actively pursued collaborative partnerships with various states, particularly major powers possessing advanced AI technologies such as the US, France and Australia.
In 2023, Singapore also endorsed both the REAIM initiative and the US-led “Political Declaration on Responsible Military Use of AI and Autonomy”, underscoring its commitment to a multilateral and norms-based approach to AI governance in the military domain.
However, within the rapidly evolving defence AI landscape, Singapore faces intricate challenges and dilemmas.
One of the foremost concerns is striking a delicate balance between technological advancements and ethical considerations in AI-driven military competition in East Asia.
As countries in the region invest heavily in AI technologies for defence purposes, the risk of an escalating arms race and the potential for AI-driven conflicts will rise.
Singapore’s defence establishment must remain agile and proactive in leveraging AI for defence purposes while mitigating the risks associated with AI-driven military competition.
Collaboration with like-minded countries and adherence to international norms and standards will be crucial in shaping a responsible and sustainable future for AI in defence in the region.