Thrive UK
UK’s AI Ambitions: A Double-Edged Sword?
As the UK strives to establish itself as a global AI superpower, a robust cybersecurity stance is paramount. A recent Mission Critical report by Microsoft revealed that a mere 13% of companies are resilient to cyberattacks. This report is a wake-up call to understand the gravity of the UK’s cybersecurity situation.
Microsoft tested company resilience with members of Goldsmiths University London using a model created by Chris Brauer (Director of Innovation at its Institute of Management Studies). The study revealed that 48% of UK businesses are vulnerable to attacks.
The report also shows that the UK is currently in a position to be the global leader in cybersecurity. Still, it’s missing out on a £52 billion dividend by not using these tactics, cutting the annual cost of cyber-attacks from £87 billion a year.
Paul Kelly, the director of Microsoft UK’s security business group, highlights AI’s potential to bolster cybersecurity. He states, “AI has the power not only to enhance the security of your business and data but also to significantly mitigate the impact of a cyber-attack on your bottom line.” This potential of AI to strengthen security should instil a sense of confidence in UK businesses.
As reported by the NCSC, the main risks with AI are likely to be from two types of attacks. The first is prompt injection attacks, one of the most widely reported weaknesses in large language models (LLMs).
The attack occurs through fabricated instructions inserted by a cyber attacker designed to make the AI model behave unintendedly. This includes revealing confidential information, generating offensive content, or triggering unethical actions in a system that accepts unchecked input.
The second attack the NCSC warns of is a data poisoning attack, which occurs when a criminal tampers with the training data of an AI model to carry out an attack – affecting security and bias.
As LLMs become increasingly familiar with passing data to third-party applications and services, the risks from these attacks will grow, requiring an appropriate response. So, what response does the NCSC recommend?
The Guidelines for Secure AI System Development, published by the NCSC and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies across 17 other nations, advise on how medium-sized businesses can manage AI in a way that ensures everyone reaps its benefits and doesn’t fall victim to its many dangers. Executives must understand the potential impact on their organisation if an AI system’s security is breached, affecting its reliability, accessibility, or data privacy.
Businesses must have a well-prepared response strategy in place for potential cyber incidents. Ensuring compliance with relevant laws and industry standards when managing AI-related data is essential. Three key questions to consider regarding your organisation’s AI safety are:
- How would you respond to a severe security incident involving an AI tool?
- Is everyone involved in AI deployment (including senior executives and board members) familiar enough with AI systems to assess the potential dangers? This understanding is beneficial and crucial in the current cybersecurity landscape.
- What’s the worst-case scenario (regarding reputation and operations) if an AI tool in your company encounters an issue?
Until recently, most cybercriminals needed to carry out attacks themselves, but rapidly evolving access to generative AI enables automatic attack research and execution. This presents a new and growing threat to your business.
One of its primary capabilities is ‘data scraping’ when information from public sources (social media and company websites) is collected and analysed. This approach dangerously invents hyper-personalised, timely and relevant messages that form the basis of phishing attacks and any attack that employs social engineering techniques.
Another notable trait of AI algorithms is that they gather intel and adapt in real time. This has positive outcomes, such as providing more precise information for corporate users. But it’s also a double-edged sword, as it aids cybercriminals enormously in refining the efficacy of their techniques to avoid detection and steal as much data as possible.
AI can swiftly pinpoint high-value individuals within an organisation. These could include members with access to sensitive staff or client data, limited technological expertise, broad or unrestricted system access, or valuable relationships that could be exploited to reach other critical targets.
The latter is expected, with AI-driven social engineering attacks leveraging AI algorithms to manipulate human behaviour to obtain sensitive data, money or high-value items or access to a system, database, or device. These attacks can be highly sophisticated, using AI to develop a persona to communicate effectively with a target in realistic and plausible situations that would leverage contacts, complete with false audio or video, to engage them.
Since two-thirds of security leaders expect offensive AI to be the norm for cyberattacks within a year, let’s look at some examples from close to home.
The British government has declared its intention to fund AI safety research with £8.5m to tackle online threats, including deepfakes. The declaration prompted a dire warning from the NCSC in January 2024, noting that malicious AI will “almost certainly” lead to increased cyber-attack volume and impact over the next two years, particularly those featuring ransomware. 30% of security professionals surveyed in the compliance specialist ISMS’s new research claimed to have experienced a deepfake-related incident in the past year.
Protect your business from evolving AI-driven cyber threats with Thrive’s cutting-edge security solutions. Our expert team, equipped with cutting-edge technology, is experienced in working alongside companies to safeguard data, ensure compliance and keep you ahead of the curve. Don’t wait for an attack to expose vulnerabilities – let’s fortify your defences. Download our AI policy template to get started today.