Happy New Year!

The Munich Security Conference and Binding Hook are running an AI-cybersecurity essay competition. Unfortunately I missed the deadline (2nd Jan 2025), but I decided to finish off my thoughts and post them here instead.

How will Artificial Intelligence change cybersecurity, and what are the implications for Europe? Discuss potential strategies that policymakers can adopt to navigate these changes.

How will Artificial Intelligence change cybersecurity?

Exponential advances in computational scale and the emergence of powerful large language models (LLMs) are precipitating a paradigm shift in cybersecurity. Whilst pre-LLM AI was confined to narrow supervised learning tasks such as malware detection, today’s AI models have capabilities rapidly approaching the level of skilled humans in several key areas. AI is set to transform the cybersecurity landscape, presenting both opportunities for more effective defence and evolving risks to Europe’s security. The net impact of these changes–positive or negative–will rest heavily on the policies put in place to manage them.

Most cyber attacks are based upon well-known techniques exploiting widely recognised weaknesses [1]. These are all documented in great detail online and are therefore available for AI models to learn from and reason about. Publicly available LLMs (e.g., ChatGPT) already show the potential for automated social engineering attacks, offensive cyber operations, software vulnerability detection, and exploit generation [2]. ENISA reports that AI is now being used by hostile state-backed groups for vulnerability research, phishing assistance, and target reconnaissance [3]. By removing the bottleneck of scarce skilled human attackers, AI is making cybercrime cheaper and more accessible than ever before. We must anticipate that our people, systems, and infrastructure will be exploited at significantly increasing pace and scale if we do not move to defend ourselves.

Beyond known techniques and mainstream LLMs, AI is already capable of discovering novel real-world software vulnerabilities that have evaded conventional bug-finding approaches [4, 5, 6]. LLMs built specifically for cybercrime can be bought online [7], and convincing deepfake video has already been used to steal millions from a European business [8]. Deep reinforcement learning (DRL) based methods will also benefit from computational scale and continue to become more impactful; they already outperform state-of-the-art conventional methods in leaking information from real-world microprocessors [9].

We are witnessing only the very beginning of AI's impact on cybersecurity, and already the cost is severe. As AI continues to advance, we should brace for even more radical changes. AI will accelerate the pace at which adversaries can discover and exploit new vulnerabilities before they are patched; in the worst case, fully autonomous cyber attackers could discover novel vulnerabilities and exploit them at scale. Such a system might operate without human oversight, indiscriminately destroying assets or extorting organisations on behalf of its creator.

What are the implications for Europe?

If we do nothing, attackers will continue to wreak havoc on our digital systems with increasing scale and speed, adding to losses already exceeding trillions of dollars, pounds, or euros per year. Critical national infrastructure (CNI) is increasingly at risk and already suffering exploitation by financially or politically motivated attackers [10, 11]. It has become hard to justify the claim that any given system or component is secure; instead, there are widespread calls for resilience in the face of eventual compromise [12]. Deepfakes of ever-improving realism threaten to undermine democratic processes and degrade public trust in information.

Potential strategies that policymakers can adopt to navigate these changes

There isn’t a quick or one-size-fits-all solution to cybersecurity, and risks from AI are no exception, but we can act to minimise the risks and maximise the opportunities. I think the following items go some way towards this goal.

Get the incentives right

At least as important as technical solutions, better incentivising good cyber hygiene (e.g., ensuring the latest vendor patches are always applied and enforcing multi-factor authentication on all accounts) will have a large and relatively quick impact. NCSC's Cyber Essentials [13], launched in 2014, is a step in the right direction, but we need to go further, especially to combat the expanding risks from AI. A 2024 report found that only 12% of businesses were aware of Cyber Essentials [14]. Getting the incentives right is highly challenging, for example balancing higher costs to businesses against a competitive market (i.e., ensuring small businesses are not excluded by regulation that prohibits entry in practice). There is a nice guide to cyber security economics by Tyler Moore here. I think requiring multi-factor authentication on all accounts (e.g., by regulation, with financial penalties for non-compliance), particularly on public sector and CNI systems, would be a good starting point.
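To make concrete how low the technical bar is here, below is a minimal sketch (in Python) of auditing an exported account list for MFA coverage. The CSV filename and the "username"/"mfa_enabled" column names are hypothetical; a real check would depend on whatever export the organisation's identity provider produces.

```python
import csv

def find_accounts_without_mfa(export_path: str) -> list[str]:
    """Return usernames from an identity-provider export that lack MFA.

    Assumes a CSV export with (hypothetical) columns 'username' and
    'mfa_enabled'; real column names depend on the provider.
    """
    missing = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("mfa_enabled", "").strip().lower() != "true":
                missing.append(row["username"])
    return missing

if __name__ == "__main__":
    # Hypothetical export file; in practice this would come from the
    # organisation's identity provider or directory service.
    non_compliant = find_accounts_without_mfa("accounts_export.csv")
    print(f"{len(non_compliant)} account(s) without MFA enforced")
    for user in non_compliant:
        print(" -", user)
```

The hard part, of course, is not the check itself but the incentive to run it regularly and act on the result.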

Better measurement

I don't think we do an adequate job of measuring either the failure risks of our systems or the effectiveness of our cybersecurity measures. Our goals should be SMART (specific, measurable, achievable, relevant, and time-bound); how else can we know whether we're making progress? A recent report on cyber-physical resilience strategy from the U.S. President's executive office calls for exactly this, with a particular focus on cascading risks and the need for bounded failure measures [12]. I think a good first step would be to require organisations (perhaps irrespective of size, but especially CNI) to maintain an inventory of every system they are responsible for. Perhaps this should be independently audited every year, similarly to financial accounts, but at the very least there should be penalties if assets missing from the inventory are later involved in a breach. The next step would be for each of these assets to have a risk register and a corresponding mitigation strategy. This sounds like a large overhead, which I am wary of, but perhaps the mapping of assets and risk identification could be automated to some extent by AI.
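As a rough illustration of what a minimal machine-readable inventory might look like, here is a sketch (in Python) pairing each asset with a risk register and mitigation, plus a check for assets that turn up in an incident but are missing from the inventory. All field names and example entries are hypothetical, not a proposed standard.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str   # e.g., "unpatched VPN appliance exposed to the internet"
    likelihood: str    # e.g., "low" / "medium" / "high"
    impact: str        # e.g., "service outage", "data breach"
    mitigation: str    # the corresponding mitigation strategy

@dataclass
class Asset:
    asset_id: str
    owner: str                          # team or person accountable for the asset
    risks: list[Risk] = field(default_factory=list)

# Hypothetical inventory; in practice this might be generated, or at least
# kept up to date, with AI-assisted asset discovery and risk identification.
inventory = {
    "vpn-gw-01": Asset("vpn-gw-01", "network-team", [
        Risk("unpatched firmware", "medium", "remote compromise",
             "apply vendor patches within 14 days of release"),
    ]),
    "hr-db-01": Asset("hr-db-01", "hr-it", [
        Risk("single-factor admin access", "high", "data breach",
             "enforce multi-factor authentication on all admin accounts"),
    ]),
}

def unregistered_assets(assets_seen_in_incident: list[str]) -> list[str]:
    """Assets involved in a breach but absent from the inventory --
    the case where penalties might apply under the proposal above."""
    return [a for a in assets_seen_in_incident if a not in inventory]

print(unregistered_assets(["vpn-gw-01", "legacy-ftp-03"]))  # ['legacy-ftp-03']
```

Even something this simple forces the useful questions: who owns each asset, what can go wrong with it, and what the mitigation actually is.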

Research and Training

Of course, I'm biased... but we need to fund researchers and businesses putting AI "to good use" for improving cyber security. Defenders are often left lagging behind, waiting for weaknesses to become known before rushing to try and fix things. I think we should have started yesterday on building algorithms and tools which use state-of-the-art AI to secure real-world networks and systems. There are some initiatives aiming for exactly this, including the UK's ARCD (autonomous resilient cyber defence) programme, DARPA's CASTLE (cyber agents for security testing and learning environments), and DARPA's AIxCC challenge. However, there remains large scope for improvement: I'd like to see more challenges and competitions that use real-world software and hardware. AIxCC gets this right, but elsewhere there has been a proliferation of pseudo-cyber environments with little study of how they might eventually relate to real-world security improvements. In addition, ARCD is coming to the end of its funding period this year; I think it would be a shame for UK/European efforts to stall whilst CASTLE and others press ahead.