A new study reveals the many ways terrorists can exploit artificial intelligence (AI) as well as how law enforcement, technology companies, and regulatory bodies are largely unprepared for this onslaught.
The “Generative AI and Terrorism” study, conducted by Prof. Gabriel Weimann of the University of Haifa, will be published in his forthcoming book, “AI in Society” (Oxford University Press). Weimann documents the real and growing threats posed by terrorists’ and extremists’ interest in AI-based tools: online manuals on using generative AI to bolster propaganda and disinformation tactics, an al-Qaeda-affiliated group’s announcement that it would begin holding AI workshops online, and Islamic State’s tech-support guide on how to securely use generative AI chatbots such as ChatGPT.
“We are in the midst of a rapid technological revolution, no less significant than the Industrial Revolution of the eighteenth and nineteenth centuries – the artificial intelligence revolution,” Weimann writes. “This multi-dimensional dramatic revolution is raising the concern that human society is unprepared for the rise of artificial intelligence.”
The study outlines the most likely risks associated with terrorists’ access to AI technology:
- Effective propaganda, where AI can produce and distribute influential content to various target populations faster and more efficiently than ever before, and disseminate hate speech and radical, violent ideologies for recruitment purposes.
- Spreading disinformation, where AI models can serve terrorists in their fear-inducing campaigns and become powerful weapons in modern disinformation wars; technologies such as deepfakes can reach huge audiences in an extremely short time.
- Interactive recruitment, where AI-based chatbots can facilitate and enhance the recruitment of individuals for terrorist plots by automating virtual interactions with targeted individuals and groups, extending the reach of such outreach.
- Enhancing attack capabilities, where deep-learning models such as ChatGPT have the potential to enable terrorists to learn, plan, and coordinate their activities with greater efficiency, accuracy, and impact than ever before.
To show that the guardrails put in place by high-tech companies are far from secure, the study tested the effectiveness of these safety measures through experimental “jailbreaking,” a term for tricking or guiding AI-based chatbots into providing information that the internal protective policies of a large language model (LLM) are meant to restrict. The researchers employed a systematic, multi-stage methodology designed to investigate how LLMs can be exploited by malicious actors, specifically those involved in terrorism or violent extremism. After selecting a sample of jailbreaks for the study, they developed prompts to assess how terrorists or other extremists might exploit or misuse AI models.
The study used 10 prompts (five direct, five indirect) and eight jailbreak commands across five platforms, over five iterations, collecting a total of 2,250 responses. The results revealed an overall success rate of 50%, indicating that even the most sophisticated content-moderation and protection methods must be reviewed and reconsidered. Governments and developers must proactively monitor and anticipate these developments to negate the harmful use of AI.
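The testing procedure described above — enumerating prompts, jailbreak commands, platforms, and iterations, then tallying how often the model’s safeguards were bypassed — can be sketched as a small evaluation harness. Everything below is hypothetical: `query_model` is a stub standing in for a real chatbot API call, the counts are small placeholders rather than the study’s actual grid, and real jailbreak text is deliberately omitted.

```python
from itertools import product

def query_model(platform: str, prompt: str, jailbreak: str) -> str:
    # Hypothetical stub for a real chatbot API call.
    # This stub always refuses, so its measured bypass rate is 0%.
    return "I can't help with that."

def is_refusal(response: str) -> bool:
    # Crude refusal detector; an actual study would rely on human coding.
    markers = ("can't help", "cannot assist", "not able to provide")
    return any(m in response.lower() for m in markers)

def run_trials(platforms, prompts, jailbreaks, iterations):
    # Exhaustively cover the platform x prompt x jailbreak x iteration grid.
    results = []
    for platform, prompt, jb, _ in product(platforms, prompts, jailbreaks,
                                           range(iterations)):
        response = query_model(platform, prompt, jb)
        results.append({"platform": platform, "prompt": prompt,
                        "jailbreak": jb, "bypassed": not is_refusal(response)})
    return results

# Placeholder sizes for illustration only.
trials = run_trials(platforms=["platform_a", "platform_b"],
                    prompts=["prompt_1", "prompt_2", "prompt_3"],
                    jailbreaks=["jailbreak_x", "jailbreak_y"],
                    iterations=2)
rate = sum(t["bypassed"] for t in trials) / len(trials)
print(f"{len(trials)} responses, bypass rate {rate:.0%}")
# prints: 24 responses, bypass rate 0%
```

The overall success rate reported in the study corresponds to the `bypassed` fraction computed at the end, aggregated over the full grid of conditions.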
Yet little attention has been devoted to exploring how terrorists can exploit AI technologies, especially open, friendly, and free LLM platforms. This is surprising, given the documented wave of AI abuses that could be useful from the terrorists’ viewpoint. Generative AI companies claim they apply various measures and safeguards to protect their models from such abuse, but are these platforms safe and well protected? ChatGPT and similar AI chatbots are programmed not to answer questions that contain harmful content. Although these platforms are constantly being updated to filter more effectively, prompt engineering can override such filters.
Technology-savvy terrorists have been highly resourceful in adapting and using online platforms and have taken advantage of every new development, platform, and application for communicative and instrumental purposes. They started in the late 1990s with websites, forums, and chat rooms. Since 2014, they have been using new social media (Facebook, YouTube, Twitter, Instagram), adding as they became available online messenger apps (WhatsApp, Telegram), new platforms (4chan, 8chan, TikTok), anonymous cloud storage, and the Dark Net.
The study offers several recommendations for combating the vulnerabilities detected in this technology, including both offensive and defensive countermeasures.
From the offensive perspective, AI could also play a significant role in counterterrorism efforts. Some of its potential applications are monitoring, surveillance, and detection: AI tools can identify and track terrorists and their supporters, or disrupt their online propaganda, radicalization, and recruitment, through continuous automated monitoring and analysis of data. Another potential offensive measure is predictive analytics. AI capabilities such as big-data analysis, machine learning, and object recognition have enhanced security agencies’ ability to track down terrorist groups and their activities. By combining big-data archives, historical data, social media records, and other intelligence sources, AI could predict potential terrorist attacks.
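As one illustration of the automated monitoring idea above, here is a minimal, purely hypothetical sketch: a scoring function flags posts whose risk score crosses a threshold for human review. The keyword weights are invented placeholders standing in for a trained machine-learning classifier, and the example posts are fabricated.

```python
# Toy placeholder weights; a real system would use a trained classifier,
# not a hand-written keyword list.
RISK_TERMS = {"attack": 0.6, "recruit": 0.4, "weapon": 0.5}

def risk_score(text: str) -> float:
    # Sum the weights of any flagged terms, capped at 1.0.
    words = text.lower().split()
    return min(1.0, sum(RISK_TERMS.get(w, 0.0) for w in words))

def flag_for_review(posts, threshold=0.5):
    # Route only high-scoring posts to a human analyst.
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    "weekend hiking photos",
    "join us we recruit for the attack",
]
print(flag_for_review(posts))
# prints: ['join us we recruit for the attack']
```

The design point is the triage step: automated scoring narrows a large stream of content down to a small set of candidates, with final judgment left to human analysts.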
From a defensive perspective, the study calls on governments to develop policy and regulatory interventions to minimize AI’s abuse by malicious actors. Such regulations should focus on three main goals: (a) introducing a better understanding of generative AI systems and their risks and threats; (b) promoting best practices that minimize the abuse of AI-based chatbots; and (c) establishing incentives for compliance and mechanisms for enforcing such regulations.
Ultimately, minimizing terrorists’ exploitation of AI requires a collaborative effort between industry, policymakers, AI developers and owners, international organizations, academia, and security agencies, the study concludes.