AI has great capacity both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits of the correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.
Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance
ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks.1 Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.
Using AI to bolster cybersecurity
In Womble Bond Dickinson's 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities, ranging from data analytics to customer service assistance and product recommendations. However, beyond these day-to-day tasks, AI's 'ability to detect and respond to cyber threats' makes it a powerful tool for defending against cyber-attacks when utilised correctly. In one report, ENISA recommends a multi-layered framework that guides readers through the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. This step-by-step approach to good practice aims to ensure the trustworthiness of cybersecurity systems.
Utilising machine-learning algorithms, AI can detect both known and unknown threats in real time, continuously learning and scanning for potential risks. Cybersecurity software which does not utilise AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analysing the behaviour of malware, AI can pinpoint specific anomalies that standard cybersecurity programmes may overlook. The deep-learning based programme NeuFuzz, for example, is considered a highly favourable platform for vulnerability searches compared with standard machine-learning approaches, demonstrating the rapidly evolving nature of AI itself and of the products on offer.
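To make this concrete, below is a minimal, purely illustrative sketch in Python of the anomaly-detection idea described above, using the open-source scikit-learn library. The behavioural features and values are hypothetical assumptions of ours, not taken from ENISA's reports or any particular product; a real deployment would train on far richer telemetry.

```python
# Minimal sketch of behaviour-based anomaly detection (illustrative only).
# Assumes scikit-learn and numpy are installed; the features below
# (bytes sent, connections per minute, failed logins) are hypothetical
# examples of the behavioural signals a real system might monitor.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" activity: [bytes_sent_mb, connections_per_min, failed_logins]
normal_activity = rng.normal(loc=[5.0, 20.0, 1.0], scale=[1.0, 5.0, 0.5], size=(1000, 3))

# Train on normal behaviour only -- no malware signatures are involved.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# New observations: one ordinary, one unusual (e.g. an exfiltration-like pattern).
new_events = np.array([
    [5.2, 22.0, 1.0],     # looks like business as usual
    [80.0, 300.0, 40.0],  # large transfer, many connections, many failed logins
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(event, "->", status)
```

The point the sketch illustrates is that the model is trained only on examples of normal behaviour, so it can flag novel, previously unseen attack patterns that a purely signature-based tool would miss.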
A key recommendation is that AI systems should be used as an additional layer on top of existing ICT and security systems and practices. Businesses must be aware of their continuing responsibility to have effective risk management in place, with AI assisting alongside for further mitigation. The reports do not set new standards or legislative parameters but instead emphasise the need for targeted guidelines, best practices and foundations which support cybersecurity and, in turn, the trustworthiness of AI as a tool.
Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resilience, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for the prevention, detection and mitigation of threats such as spam, intrusions and malware. Traditional models do exist, but as ENISA highlights they are usually designed to target or 'address specific types of attack' which 'makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.' The report highlights that businesses need a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.
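As a simple, hypothetical illustration of one of the detection tasks mentioned above, the Python sketch below trains a toy spam classifier with scikit-learn. The example messages are invented and the model is deliberately minimal; a production system would rely on large labelled datasets and more robust models.

```python
# Minimal spam-detection sketch (illustrative only; toy data).
# Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data -- a real deployment would use thousands of labelled messages.
messages = [
    "Claim your free prize now, click this link",
    "Urgent: verify your account password immediately",
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly figures attached for review",
]
labels = ["spam", "spam", "ham", "ham"]

# TF-IDF turns text into numeric features; Naive Bayes learns word patterns.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

print(classifier.predict(["You have won a free prize, click here"]))  # likely ['spam']
print(classifier.predict(["Please review the attached agenda"]))      # likely ['ham']
```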
In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government prepare guidance on content authentication and watermarking to clearly label AI-generated content, and that the administration establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. The order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.
Implementing security by design
A security-by-design approach builds security protocols into the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, support security-by-design structures and effectively allow businesses to integrate the necessary safeguards for the protection of data and processing activity, but they should not be considered a 'silver bullet' for meeting all requirements under data protection compliance.
This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design takes less effort than retrofitting security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.
However, existing structures should not be overlooked: the addition of AI to a current cybersecurity system should improve functionality, processing and performance. This is evidenced by AI's capability to analyse huge amounts of data at speed and provide a clear, granular assessment of key performance metrics. Such high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.
Risks
Despite the benefits, AI is by no means a perfect solution. A machine-learning system acts on what it has been taught through its programming and training data, leaving the potential for its results to reflect unconscious bias in its interpretation of data. It is also important that businesses comply with applicable regulations such as the EU GDPR, the UK Data Protection Act 2018, the anticipated EU Artificial Intelligence Act and general consumer duty principles.
Cost benefits
Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduce costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems has also been found to shorten the average 'breach lifecycle' by 108 days, saving time, cost and significant business resource. Further development of penetration testing tools which specifically focus on AI is required to explore vulnerabilities and assess behaviours; this is particularly important where personal data is involved, as the integrity and confidentiality of a company's data are at risk.
Moving forward
AI can be used to our advantage, but it should not be seen as an outright replacement for existing or traditional models of managing cybersecurity. Whilst AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.
Despite suggestions that AI's reputation is suffering, it is a powerful and evolving tool which could not only improve your business's approach to cybersecurity and privacy but, through analysis of data, help to understand behaviours and predict trends. AI should be used with caution, but done correctly it could deliver immeasurable benefits.
If your business is looking to implement AI tools, or has already started this integration, WBD has a dedicated Digital team who can assist you in putting in place policies and procedures to ensure AI is successfully integrated into your business practices alongside your current workforce.
___
* While a portion of ENISA's commentary focuses on the medical and energy sectors, the principles are relevant to all sectors.