Is generative AI a threat or the solution for cities facing cyberattacks?

Generative AI puts new tools in the hands of cybercriminals, who are increasingly targeting cities amid high geopolitical tensions. But it can also be turned against such attacks, from identifying anomalous behavior in networks to filtering out malicious emails.

Cyberattacks have been on the rise for the past several years, with cities right in the firing line. These attacks may directly target city halls, essential services such as hospitals or even a city’s energy supply, and today they hang like a sword of Damocles over local councils and the companies that manage public infrastructure.

Structural factors are increasing urban vulnerability. In particular, as smart cities grow in popularity, urban infrastructure is becoming ever more digitalized and therefore exposed to attack:

“Cities are becoming more tech-driven and in turn more vulnerable. Their infrastructure is dependent on several layers of software, so attacking a hospital or a transport or energy system is very tempting for hackers – be they terrorists or criminals motivated by greed,” says Antoine Picon, professor at the Techniques, Territories and Societies Laboratory, whose research focuses on the links between digital technology and cities.

Circumstantial factors also increase the level of threat. The rise in geopolitical tensions between nations (the war in Ukraine, growing tension between the US and China, etc.) is leading to an increase in cyberattacks on infrastructure. Countries are using hackers to wage war indirectly, and turning a blind eye to cybercriminal activities carried out on their soil for profit. Ransomware attacks targeting cities are particularly on the rise, paralyzing public services and impacting power grids.


Is generative AI a weapon in the hands of cybercriminals?

A cybercriminal’s task has been made a whole lot easier by the dramatic advances in generative AI, a powerful technology capable of creating content on its own, which came to the general public’s attention with the launch of ChatGPT. Indeed, 98% of cyberattacks rely on social engineering and 91% arrive via email. In the past, these attacks could be easily uncovered because the malicious emails were riddled with spelling and grammar mistakes, as the attackers were often writing in a language that was not their own.

However, generative AI now enables attackers to write faultlessly and fluently at the click of a button. This has, for example, led to a massive increase in cyberattacks in Japan, as many hackers are suddenly able to write convincing phishing emails in Japanese. During the first half of 2023, 2,322 unauthorized online money transfers were recorded – that’s 16 times more than in the same period the previous year.

Generative AI can also be used to partially automate the writing of code. “Programs like ChatGPT are very good at creating software. It’s rarely perfect, but usually all it takes is a small amount of work for coders to take this written code and turn it into a working program. It’s much less arduous than coding everything yourself from scratch. It also allows experienced developers to program much faster. As such, we can expect to see an increase in the frequency and effectiveness of cyberattacks,” says Zachary Chase Lipton, a researcher at Carnegie Mellon University specializing in artificial intelligence.

Finally, deepfake attacks (fake audio and video content) are also making scam attempts more convincing. Earlier this year, the British engineering consultancy Arup lost £20 million after an employee fell victim to a cybercriminal who used a deepfake video to pose as a company executive and prompt a fraudulent money transfer. While the victim here was a business, cities are not immune: hostile powers have recently used AI to launch cyberattacks against cities and infrastructure in the US, the UK and the EU.


How cities can use AI against their enemies

However, breakthroughs in artificial intelligence do not only pose an increased risk to local authorities – they also give them weapons to sharpen their defenses against cyberattacks.

Thanks to its ability to process vast amounts of data in record time and to establish correlations invisible to the human eye, AI makes it easier to spot network anomalies that could indicate an intrusion. If a user suddenly starts copying and downloading a large number of files, for example, this could be a sign of a ransomware attack. This is the approach taken by companies like the UK-based Darktrace, which uses AI to learn what constitutes a “normal” day on a business network, so as to detect and act immediately on any anomaly. The city of Las Vegas notably uses the company’s services to protect itself against hackers.
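In its simplest form, this baseline-and-deviation idea can be sketched in a few lines of code. The example below is a toy illustration of the general principle, not Darktrace’s actual method: it learns a user’s “normal” activity from historical counts and flags anything that strays too far from it (the data, function name and threshold are all invented for the example).

```python
import statistics

def flag_anomaly(history, current, threshold=3.0):
    """Flag `current` as anomalous if it deviates from the historical
    baseline by more than `threshold` standard deviations."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No observed variation: anything different from the baseline is anomalous
        return current != mean
    return abs(current - mean) / stdev > threshold

# Hourly file-download counts observed for one user during a "normal" week
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]

flag_anomaly(baseline, 5)    # typical activity → False
flag_anomaly(baseline, 400)  # sudden mass download, possible ransomware staging → True
```

Real products learn far richer baselines (per-device traffic patterns, login times, peer-group behavior), but the underlying logic is the same: model normality, then alert on deviation.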

Since the overwhelming majority of cyberattacks arrive by email, artificial intelligence can also be used to filter out malicious messages, be they phishing scams based on social engineering or emails carrying a fraudulent link or attachment. When trained on a large database, AI can learn to recognize these emails and even remove them before they reach a user’s inbox, eliminating the risk of anyone clicking on them. This is a service offered by the US company Proofpoint, which specializes in securing mailboxes using AI. The company performs semantic analysis to automatically detect the intent behind a malicious actor’s message, independently of the words or language used. The city of Melville in Australia uses this service to safeguard its civil servants’ email inboxes.
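To see how a filter can “learn to recognize” malicious emails from a training database, here is a deliberately minimal bag-of-words Naive Bayes classifier. This is a textbook sketch, not Proofpoint’s semantic-analysis technology, and all the example messages are invented; production filters use far larger datasets and much richer features.

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy bag-of-words Naive Bayes spam filter."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.msg_counts.values())
        vocab = len(self.word_counts["spam"]) + len(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            score = math.log(self.msg_counts[label] / total)  # log prior
            n = sum(self.word_counts[label].values())
            for w in words:
                # Laplace-smoothed log likelihood of each word under this label
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = NaiveBayesFilter()
f.train("Urgent verify your account password now", "spam")
f.train("Click this link to claim your prize now", "spam")
f.train("Meeting agenda for the council session", "ham")
f.train("Please review the attached budget report", "ham")

f.classify("verify your password now")        # → "spam"
f.classify("agenda for the council meeting")  # → "ham"
```

The key point the article makes still holds: because classification happens server-side before delivery, a message flagged this way never reaches the inbox at all.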


The EU-backed Impetus project, which aims to protect smart cities against cyberattacks, also harnesses the power of AI. “We have created eight concrete tools along with a guide on how to use them to improve urban security, ensuring that the tools themselves comply with all standards, including the GDPR. One of these tools uses AI to analyze all network flows to detect attacks, as well as to identify attempts to destabilize society via social media,” says Nesrine Kaaniche, researcher at Télécom SudParis, who worked on the project.

However, to avoid the black-box effect, it is important that AI is explainable so that cities can truly use it to their advantage. This is something that Impetus also focuses on. “For each tool, we are able to trace how it acted and why, so that humans can always comprehend everything and act accordingly.”

Thanks to these different tools, cities have the resources to better resist repeated attacks by hackers – provided that they also train their staff in the basic principles of cybersecurity.
