When Regulating AI, ASEAN Should Remain Committed to ...
Gatra Priyandita outlines strategies for ASEAN countries to responsibly adopt the use of artificial intelligence in the military and law enforcement.
From Israel’s “The Gospel” target-hunting system to the increasingly sophisticated mining of battlefield data in Ukraine, artificial intelligence (AI) is already deeply incorporated into defence and warfighting.
The international community has responded with a deluge of summits to hash out bilateral, minilateral, and global agreements to curb AI’s potentially destructive effects, including in areas like lethal autonomous weapons systems and surveillance. As the international community debates how best to impose normative and legal constraints on military applications of AI, ASEAN member states must not only call for action internationally but also seriously consider how to ensure that their own growing use of AI does not contravene the norms of responsible use of information and communication technologies (ICTs) that they work hard to advocate.
AI’s Military and Law Enforcement Applications
In February 2024, the ASEAN Digital Ministers’ Meeting launched the ASEAN AI Guidelines, establishing principles that include fairness, privacy, transparency, security, and safety in AI use and development in Southeast Asia. These guidelines were introduced in response to the region’s rapid adoption of AI. Kearney estimates that AI adoption amongst Southeast Asian firms could add 10-18 per cent to the region’s GDP by 2030, equivalent to nearly US$1 trillion. The promise of AI for governments, industry, and society lies in its potential to bolster innovation and economic growth and to address longstanding problems of inefficiency and security.
At the same time, AI holds considerable promise for the military and law enforcement, from monitoring the maritime domain to improving border screening. For instance, the Singaporean military has already incorporated AI into its defence posture and capabilities. In a 2023 biennial exercise in the United States, Singapore showcased some of its new AI-driven equipment, including autonomous ground vehicles and drones. Singapore is also working to develop autonomous submarines and maritime systems (including some projects with Australia) and has incorporated AI systems to improve soldier training, defence logistics, and intelligence collection. While far behind Singapore in developing AI capabilities, Indonesia is also looking to integrate AI into its military capabilities, acquiring 13 AI-enabled long-range military radars from Thales and 12 surveillance and reconnaissance drones from Turkish Aerospace.
Elsewhere, the Royal Thai Police is exploring the use of AI to strengthen its fight against transnational crime, especially by incorporating AI into the Immigration Bureau’s biometric identification system. The Bangkok Metropolitan Administration and the Royal Thai Police are also deploying AI cameras to detect and identify pickpockets in Bangkok. The Philippine police, too, are looking at AI’s potential utility – the chief of the national police recently announced that he was considering a five-year development plan on using AI for smart policing strategies.
The region’s interest in “smart cities” is driving potential growth in the use of AI by law enforcement. This is especially true since smart city projects often deploy and integrate surveillance technologies, like sensors and biometric data collection systems, to improve public security.
ASEAN leaders meet for the 4th ASEAN Digital Ministers Meeting in Singapore on 2 February 2024. (Photo by ASEAN Secretariat)

For the military and law enforcement, the power of AI lies in its ability to enhance efficiency, increase operational capabilities, and limit risks to human personnel. For instance, deploying autonomous undersea drones for reconnaissance missions is far less risky than deploying conventional submarines with human submariners, especially during wartime, when conflict could lead to human casualties.
Yet, these powerful technologies also carry significant risks, such as potential biases in surveillance systems and mass surveillance capabilities. In Southeast Asia, there is already evidence of AI bias, such as flawed speech and image recognition, as well as biased credit risk assessments. These algorithmic biases often lead to unfair outcomes, such as those that disproportionately affect minority ethnic groups due to biased data. These technologies, if left unchecked, could exacerbate existing inequalities and even lead to new forms of oppression.
Furthermore, during wartime, AI’s effects can be even more devastating, as dependence on these technologies reduces human involvement in data collection, analysis, and decision-making. Not only does this raise questions about accountability, but it can also increase the likelihood of mistakes, as the conclusions and assessments produced by AI are not always accurate.
Thinking About Responsibility
The notion that emerging and digital technologies can be weaponised by malign forces is not new. In response to the growing weaponisation of cyberspace, the international community, through the UN, agreed on a framework for responsible state behaviour in cyberspace. This framework recognises that international law applies to state activities in cyberspace and affirms a set of 11 norms, confidence-building measures and a collective commitment to capacity-building.
ASEAN stands out as the only regional organisation to subscribe in principle to the 11 UN norms, driven by concerns that the misuse of ICTs by state actors could undermine digital transformation efforts in Southeast Asia. Amidst the debate on AI, Southeast Asian governments are advocating legal and normative constraints on the military applications of AI. However, beyond calling on the international community to commit to responsible behaviour, Southeast Asian governments must seriously reflect on how they would ensure that AI is not misused domestically.
First, Southeast Asian governments must consider how AI can be used responsibly within their own security systems. Beyond endorsing broad principles like privacy, officials must establish clear guidelines on the use of AI by the military and law enforcement. Government officials must work with the security apparatus to ensure that such use is consistent with national obligations under international law. Specific guidelines should delineate how human agency must be incorporated and assured in decision-making, particularly where the use of force is concerned.
Second, the use of AI must be accompanied by active oversight from parliamentarians, civil society, and the expert community. Politicians must become more actively engaged in issues concerning AI, particularly those concerning privacy and data protection. The expert community should collaborate closely with the government to develop and articulate indigenous principles of AI. These principles would guide the ethical and effective use of AI technologies by security agencies, ensuring that AI is used in ways that align with national values and public interests while maintaining high standards of accountability and transparency.
Third, each security apparatus must be vigilant about AI systems being used against it. The rise of large language models (LLMs) marks a new era in cyber warfare, enabling states and hackers to breach networks more effectively. Regulations must address AI-enabled attacks, especially as military and law enforcement agencies are prime targets. Southeast Asian governments and private entities are already among the most spied-on in the world: in 2020, 15 per cent of all private entities targeted in state-sponsored cyber operations were in Southeast Asia. Given the region’s strategic significance, the threat will only grow. By reflecting on how AI can be misused against national security, states can improve their capacity to respond meaningfully in cyberspace. For instance, the Armed Forces of the Philippines has banned the use of digital applications harnessing AI to generate personal portraits. Such efforts by governments to address the risks of AI-enabled threats will become more important in the years ahead.
Finally, regional cooperation is essential to identify and share strategies for preventing the weaponisation of cyberspace and AI. This means initiating governmental discussions on the use of AI in the military and law enforcement domains. Such platforms could be used to share best practices on oversight mechanisms and serve as a confidence-building measure, encouraging member states to be transparent about how AI is being used in the security domain. Fundamentally, such initiatives should also involve experts and civil society to ensure that perspectives beyond that of governments are heard.
Initiating a conversation about responsible AI use in the military and law enforcement will be challenging, especially in a region where sovereignty is paramount. However, for ASEAN member states to show that their commitments to responsible state behaviour in cyberspace are genuine, they must demonstrate responsibility in their use of AI domestically.
Editor’s Note: ASEANFocus+ articles are timely critical insight pieces published by the ASEAN Studies Centre.
Gatra Priyandita is a senior analyst at the Australian Strategic Policy Institute.