The rapid development of technological capabilities, particularly in robotics, automation, and artificial intelligence, presents complex new challenges for law enforcement agencies, and these will only intensify in the coming years. Criminal elements can already hire experts to program and operate autonomous systems for illegal activities, forcing a rethinking of traditional methods of prevention, investigation, and enforcement.
The examples are numerous and concerning: robots capable of cracking safes by precisely studying their locking mechanisms, AI systems that can clone executives’ voices to authorize financial transfers, and even tiny robots that can infiltrate secure facilities through ventilation shafts and disable security systems from within. One particularly worrying scenario involves networks of distributed robots, communicating over encrypted channels, executing complex crimes. Imagine a swarm of pre-programmed robots coordinating a break-in at a commercial logistics warehouse, each performing a specific task: one neutralizes the security systems, another handles the door mechanisms, and a third operates the logistics systems. The feasibility of such robot-to-robot coordination was demonstrated in the “Robot Hijacking” incident in Shanghai this August, reported two weeks ago, in which a small AI-equipped robot struck up a “conversation” with the robots in a showroom and “persuaded” them to leave the room together.
A key challenge is identifying the “perpetrator” when a robot or AI system commits a criminal act. Complex operational and legal questions arise: should the system’s programmer, manufacturer, operator, or owner face charges? Defining criminal liability in such cases requires legal thinking adapted to the new technological reality, along with legislative adjustments.
Unlike traditional criminals, these systems can operate simultaneously across multiple locations, using encrypted, secure communication that is difficult to detect and decode. Moreover, their ability to learn and adapt allows them to modify their behavior patterns and evade detection. Collecting digital evidence poses another significant challenge: sophisticated AI systems can erase digital traces, falsify data, and complicate the reconstruction of incidents. There are already reported cases of AI systems creating perfect alibis for criminals by forging security footage or recordings and using digital spoofing techniques to mask the source of the criminal activity.
One particularly dangerous capability is AI systems’ capacity for sophisticated psychological manipulation. They can analyze human behavior patterns, identify vulnerabilities, and exploit them for fraud or extortion. Furthermore, deepfakes and similar technologies enable the creation of convincing fake content that is difficult to distinguish from reality. There are already known cases of forged digital communications, both written and voice messages, containing instructions for large financial transfers that later proved fraudulent.
Even scenarios currently considered simple, such as stopping a vehicle to check its license or identify its passengers and cargo, become complex when autonomous vehicles are programmed for specific missions with no driver aboard. How do you communicate with such a vehicle? How do you stop it safely for inspection, or neutralize criminal activity?
To address these challenges, law enforcement agencies must begin now to map the expected range of threats and develop new capabilities: specialized units focused on robotic and AI-based crime, investigators trained in advanced technologies, dedicated forensic tools for analyzing advanced digital evidence, and more.
Closer cooperation is also needed between law enforcement agencies and cybersecurity experts, AI researchers, and robotics manufacturers.
Since criminal organizations will inevitably repurpose commercial technological tools for criminal ends, regulation must be updated to obligate technology manufacturers to prevent the misuse of their products. Developing effective strategies against these new threats will be possible only through broad integration of technological, legal, and operational knowledge, alongside international cooperation among governments, enforcement agencies, and industry players.
(Major General (Ret.) Boaz Gilad is a former senior official in the Israel Security Agency and the Israel National Police. He is currently the CEO of S.T. – Impact and a researcher at the Institute for Personal Security and Community Resilience at Western Galilee Academic College)