Looking into the future: Artificial intelligence and its impact on peace and security in Africa

Date | 12 June 2024

Tomorrow (13 June), the African Union (AU) Peace and Security Council (PSC) is expected to convene its 1214th virtual session on ‘Artificial intelligence and its impact on peace and security in Africa’.

Following the opening remarks of Ambassador Rebecca Otengo, Permanent Representative of the Republic of Uganda and Chairperson of the PSC for June, Bankole Adeoye, Commissioner for Political Affairs, Peace and Security (PAPS), is expected to deliver a statement during the session. Dr. Amani Abou-Zeid, the AU Commissioner for Infrastructure and Energy, will also deliver a statement. Presentations will then be made by Ambassador Abdel Latif Ahmed, Co-Chair of NeTT4Peace; Bernardo Mariano Joaquim Junior, Chief Information and Communications Technology Officer and Assistant Secretary-General for the United Nations Office of Information and Communications Technology (UNOICT); Samson Itodo of YIAGA Africa; and Dr. Kennedy Javuru of the Greater London Authority.

Artificial Intelligence (AI), the use of computer systems to carry out tasks that ordinarily require human cognition, planning, or reasoning, is increasingly shaping various areas of the lives of individuals and societies. It is reported that African consumers, educational institutions, governments, and companies are rapidly adopting AI to aid content creation, improve the delivery of public services, and streamline business processes. While AI systems are diverse, some of the most notable recent advances involve machine learning – a type of AI system that derives its own instructions from the data on which it is ‘trained’ and uses those instructions to carry out a given task or generate solutions to a particular situation.
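
For readers less familiar with what ‘training’ means in practice, the minimal sketch below, written in Python with the widely used scikit-learn library and entirely invented illustrative data, shows the point made above: the model derives its own decision rules from the labelled examples it is given rather than following rules written out by a programmer. The feature names and risk labels are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: a classifier 'learns' its own decision rules from
# labelled examples rather than being explicitly programmed, which is the
# defining feature of machine learning referred to above.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [reported_incidents_per_week, displacement_rate],
# labelled 0 = 'low risk' or 1 = 'elevated risk'. These numbers are made up.
X_train = [[1, 0.01], [2, 0.02], [8, 0.20], [9, 0.25], [3, 0.03], [10, 0.30]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)        # the 'training' step: rules are derived from the data

# The model applies the rules it derived to a new, unseen case.
print(model.predict([[7, 0.18]]))  # e.g. [1] -> elevated risk
```

The same principle, applied at far larger scale and with far more data, underlies the kinds of AI applications discussed in the remainder of this analysis.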

In the realm of peace and security as well, AI can enable more effective conflict analysis and early warning. It can also support peace-making and mediation, including by addressing information asymmetry. AI-driven technology can further enable state institutions to enhance their capacity for enforcing law and order and fighting criminality, thereby contributing to the security of citizens. Indeed, AI-driven surveillance and policing platforms are deployed to track organized criminal networks and to prevent or respond to the activities of terrorist or insurgent groups. AI also helps generate real-time information on the activities of warring parties in conflict settings, thereby enabling the monitoring of compliance with the rules of war and the development of conflict management and resolution strategies tailored to the dynamics of specific conflicts. Apart from supporting the monitoring of ceasefires, such technology also contributes to identifying safe routes for civilians to flee to safer areas and to facilitating humanitarian access and delivery.

Despite these and other related positive aspects, AI also carries risks, some of them of particular concern for Africa. Because AI is a general-purpose technology, it is susceptible to being used for harmful ends as well. As a result, there are increasing concerns, particularly associated with generative AI, linked to disinformation, cybersecurity threats, hate speech targeting women and minorities, and the fomenting or incitement of violence in times of crisis and conflict. For example, it is reported that deepfakes involving AI-driven voice and image technologies were used to impersonate political figures and propagate false information during the elections in Nigeria and in the ongoing civil war in Sudan. AI technologies could also potentially be used to enhance cyber-attack capabilities and to design bioweapons and other weapons of mass destruction.

Additionally, on account of bias in the design of AI systems and in the data used for ‘training’ them, the use of AI in education, health, and similar sectors carries downsides that are detrimental to certain segments of society. Beyond design and data bias, some AI systems ‘learn’ during their application based on inputs from the environment in which they operate, making the outcomes of such systems unpredictable. In the absence of the transparent and regulated adoption and use of AI technology, AI-enabled data collection and surveillance can also be used by governments to suppress dissent and violate the privacy of citizens, and by other actors for various illicit purposes including identity theft and extortion. For Africa, the fact that much of the design and development of AI is controlled by tech companies domiciled mostly in the US, Europe and China also raises critical questions about how its adoption and use can be tailored to the continent’s needs and, importantly, about its effective governance.

Perhaps the most worrying aspect of the application of AI is the proliferation of AI applications for military use. The use of AI for military purposes carries serious ethical, international humanitarian law and security implications. According to the ICRC, ‘AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially concerning: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.’ While there is no data to suggest the widespread use of AI applications for military purposes, particularly lethal autonomous weapons systems, the drones increasingly used for combat and reconnaissance could evolve into weapons systems with AI-driven capabilities for autonomous action.

Although the use of AI-driven technology is on the rise in Africa and, as described above, carries both opportunities and perils, there is a lack of comprehensive data and analysis on the extent to which AI applications are used in the realm of peace, security, and politics broadly. There is also a need to establish comprehensively the specific governance and regulatory issues that the use of AI raises in the African context, most notably in the realm of peace and security. It would therefore be of interest for the PSC to establish the state of affairs regarding the use of AI-driven technology in Africa, particularly in the realm of peace and security, so that its policy engagement is evidence driven.

Another issue of relevance for tomorrow’s session is how the PSC can build on, and ensure follow-up to, its previous engagement on this subject matter. The last time the PSC convened on a subject related to AI was during its 1097th session on ‘Emerging Technologies and New Media: Impact on Democratic Governance, Peace and Security in Africa’, held in 2022. A key decision that emanated from that session, which may feature in the session on AI, is the request to the AU Commission to ‘undertake a comprehensive study on Emerging Technology and New Media: Impact on Democratic Governance, Peace and Security in Africa, and present policy options available for harnessing the advantages and for effectively addressing the security threats associated with technologies and new media in Africa’.

In light of the various policy initiatives relating to AI, including within the AU itself, the other issue that would be of interest for the PSC during tomorrow’s session concerns the need for minimum common bases and for ensuring policy coherence. The Executive Council, during its 44th Ordinary Session, endorsed the Conceptual Framework of the Continental Artificial Intelligence Strategy produced by the Specialized Technical Committee (STC) on Communication and Information and Communication Technologies. The Executive Council also requested the AU Commission to expedite the development of a continental strategy on artificial intelligence. Since that request, the Commission has established a working group consisting of member states and Pan-African organisations working on AI and began online multi-stakeholder consultations in April 2024. At the same time, acting on a 2016 request of the STC on Education, Science and Technology, the AU’s development agency, AUDA-NEPAD, has recently published a white paper on the Regulation and Responsible Adoption of AI in Africa, along with a draft roadmap on AI.

In light of the risks associated with AI, there are growing policy debates and processes both on the continent and internationally. These policy engagements take place both at the level of individual states and multilaterally. As the use of AI by private entities increases, there is a growing need to prioritize and protect data and individuals’ privacy, while ensuring accessibility. Reflecting the need for guidelines and legal frameworks that promote transparency, accountability, and compliance with human rights in the adoption and use of AI-driven technologies, at least seven African countries (Benin, Egypt, Ghana, Mauritius, Rwanda, Senegal, and Tunisia) have developed national AI programs. While the adoption of such regimes at the national level is important, the nature of the governance and regulatory challenges posed by AI-driven technologies is beyond the capacity of individual states. For the actions of such states to be effective, it is also crucial to adopt multilateral frameworks, a good example being the AI Act adopted by the European Union (EU). Apart from the establishment of the UN Secretary-General’s advisory body on AI, the UN General Assembly adopted Resolution A/78/L.49, the first-ever resolution on AI, in March 2024. The resolution recognizes the positive impacts of AI on economic, social, and environmental aspects of development, particularly in achieving the Sustainable Development Goals (SDGs). At the same time, it acknowledges the potential adverse consequences of the misuse of AI and emphasizes the need to adhere to international rules and regulations. Of the six African states that co-sponsored the resolution, four (Sierra Leone, Morocco, Equatorial Guinea, and Djibouti) are members of the AU PSC and may thus bring key insights to tomorrow’s session.

At the continental level, as noted above, there are various ongoing initiatives at the level of the AU. In pursuing the development of an Africa-wide AI governance system, apart from ensuring intra-AU policy coherence, it is of particular significance that such a governance system builds on and strengthens relevant existing AU policy instruments, including human and peoples’ rights norms and the data protection and cyber security instruments. It would be of interest for the PSC during tomorrow’s session to discuss how to leverage the recent commitments made by member states on the protection of data and the management of cyber security through the entry into force of the AU Malabo Convention in June 2023.

As the AU develops, on the basis of the foregoing policy processes, guardrails for the safe and responsible adoption and use of AI consistent with the maintenance of peace and security in Africa, it is of major significance that the PSC establishes a way of monitoring the impact of AI on peace and security on the continent. It is worth noting that the UN Security Council held its first formal meeting on AI only recently, on 18 July 2023.

The expected outcome of the session is a communique. It is expected that the PSC will reiterate the need for a strategic approach undergirded by the implementation of relevant UN and AU norms on the responsible adoption and use of AI, as well as the UN norms on responsible behaviour in cyberspace, as a foundation for the security and sustainability of the digital space in Africa. The PSC may also call on member states to expedite the ratification of the Malabo Convention, which provides the foundation for developing a continental AI governance regime. Drawing on the Malabo Convention, the PSC may call for the effective regulation of the collection and use of data in the adoption and application of AI in Africa and request the AU Commission to provide relevant guidance on data protection and transparency in the context of the adoption and use of AI on the continent. Considering the various policy processes relating to AI within the AU, the PSC may urge the need to ensure policy coherence and to align these different initiatives into a comprehensive continental policy process that provides a common approach to the governance of AI across different sectors. Taking the conclusions of its 1097th session forward, the PSC may underscore the need to establish comprehensively the state of adoption and use of AI in Africa, particularly in the realm of peace and security. To this end, the PSC may task the AU Commission with establishing a team of African experts on AI and its use in peace and security to collect and present comprehensive data and analysis on the extent of the use of AI applications in the realm of peace, security and politics in Africa, and to advise on how to address the peace and security implications of AI on the continent. As part of its role in monitoring and responding to threats to peace and security, the PSC may also request the AU Commission to develop, as part of the AU Continental Early Warning System (CEWS), a system for tracking and reporting on risks and threats to peace and security associated with the adoption and use of AI in Africa by both state and non-state actors, enabling the PSC to respond in a timely and effective manner.