AI is swiftly altering the global security landscape by changing the way nations plan their defence, amounting to a Revolution in Military Affairs (RMA), a term coined by Andrew Marshall of the United States.
Such advancements alter the conduct of warfare and necessitate the adoption of novel doctrines and strategies, refined through wargaming.
AI has become a strategic asset for military operations: it improves decision-making and enables advanced autonomous systems, providing an advantage in both conventional and hybrid warfare.
The race for AI dominance is transforming traditional military alliances and forming new fault lines in international politics.
In recent conflicts such as the Russia-Ukraine war, AI-driven intelligence and autonomous systems have transformed the battlefield.
Furthermore, Israel uses ‘The Gospel’, an AI-powered targeting system, to identify buildings used by the enemy, and the Iron Dome air defence system to detect and intercept incoming missiles.
Azerbaijan used AI-powered drones, such as the Bayraktar TB2, for reconnaissance and strikes, and deployed AI-powered artillery against the Armenian army.
Concurrently, these technological advancements raise critical concerns about heightened risks and the potential for autonomous systems, equipped with self-learning capabilities that even their creators did not anticipate, to operate beyond human control.
This challenges existing frameworks of accountability under international law, including Article 36 of Additional Protocol I (1977) to the Geneva Conventions, Articles 2(4) and 51 of the UN Charter, the Martens Clause, and the Convention on Certain Conventional Weapons (CCW), which together govern the review of new weapons, the use of force, accountability, and escalation risks.
To mitigate these challenges, Pakistan is integrating AI into its national defence strategy, recognizing its power and potential as well as the threat it poses to security.
The establishment of the National Centre for Artificial Intelligence (NCAI) underscores Pakistan’s commitment to investing in AI research and development.
However, Pakistan currently faces numerous challenges in both conventional and unconventional domains, including a lack of modern research platforms, limited capital for AI infrastructure, and a shortfall in skilled human capital, all of which pose significant obstacles to the effective integration of AI.
The use of artificial intelligence in conflicts raises serious concerns about international humanitarian law (IHL).
Autonomous weapon systems call into question the principles of distinction (Article 48 of the 1977 Additional Protocol I (AP I)), proportionality (Article 51(5)(b) of AP I), and accountability (Article 86(1) of AP I).
Under IHL, parties to a conflict must distinguish between combatants and civilians and ensure that incidental harm to civilians is not excessive in relation to the anticipated military advantage.
The Geneva Conventions, as well as Additional Protocols I and II (1977), reinforce civilian protection in international and non-international conflicts.
The Martens Clause affirms that, even where treaty law is silent, the principles of humanity and the dictates of public conscience continue to apply, while the Convention on Certain Conventional Weapons (CCW, 1980) seeks to restrict or prohibit weapons that cause unnecessary harm, including through ongoing discussions on lethal autonomous weapons systems (LAWS).
Furthermore, Pakistan, like many other countries, is a signatory to multiple international agreements; adherence to these legal principles and the judicious use of AI in military applications must therefore be upheld.
This does not extinguish the right of self-defence under Article 51 of the UN Charter, nor the customary right of self-defence recognized in the Caroline case (1837).
The United Nations Secretary-General, António Guterres, has called for the prohibition of lethal autonomous weapons systems, labelling them as “politically unacceptable and morally repugnant.” Pakistan’s stance on this issue emphasizes the importance of multilateral dialogue and legally enforceable regulations to govern AI’s use in warfare.
As a member of the United Nations, Pakistan has backed initiatives to establish an international treaty prohibiting fully autonomous weapons, in line with broader concerns about the humanitarian consequences associated with AI-driven warfare, demonstrating its commitment to being an ethically responsible nation.
South Asia’s security environment is volatile, with two major nuclear powers facing each other.
India has made significant investments not only in AI technology but also in AI-driven defence systems, such as the S-400.
India’s Defence Research and Development Organization (DRDO) has been working on AI-powered projects, including autonomous unmanned aerial vehicles (UAVs).
India has also collaborated with Israel to develop compact UAVs for engaging aircraft, showcased Artificial Intelligence in Defence (AIDef) at the AIDef symposium and exhibition in 2022, and established the Defence AI Council (DAIC). These developments are fuelling an AI-driven arms race in the region that may cause instability.
India’s militarisation of AI in South Asia increases the risk of conflict escalation and undermines the security of the region.
Thus, bilateral and multilateral frameworks for AI governance are crucial for preventing these risks.
Pakistan needs to adopt a comprehensive approach to the challenges AI poses to national security, one that balances technological advancement with legal and ethical concerns.
Therefore, Pakistan should invest more in AI research and development and strengthen its legal framework in line with international law.
Furthermore, Pakistan should promote regional dialogue with neighbouring countries to prevent an AI arms race and ensure regional stability and security.
Pakistan should also develop advanced AI-enabled cyber capabilities to protect critical infrastructure and national security assets from AI-driven threats, along with drone technology for border security and defence operations, as Ukraine and India have done.
Given the rise of misinformation and AI-driven propaganda manipulating public opinion during war or unrest, Pakistan should strengthen its digital media defence strategies to counter AI-generated misinformation campaigns.
—The writer is an International Law expert with rich experience in negotiation, mediation and Alternative Dispute Resolution.