Thrive UK
AI-generated Cyber-attacks: An Emerging Threat
As AI technology continues to advance at an unprecedented rate, UK businesses face a new and formidable challenge in cybersecurity. A new wave of threats has arisen, posing substantial risks to companies of all sizes. In this article, we'll explore these emerging AI-generated threats, their potentially devastating impact, and how they affect companies like yours.
What does the NCSC have to say?
In its January 2024 assessment, the NCSC stated that AI will almost certainly impact cyber-attacks. The assessment indicates that, in the near term, AI will mainly give malicious actors the capability to scale up their social engineering tactics, communicating directly with victims to manipulate them into handing over credentials or funds. This includes creating "lure documents" without the grammatical and translation faults that often ring alarm bells for the victim. The NCSC also states that this activity will likely increase over the next two years as AI models become more widely available.
AI's capacity for rapid data summarisation will also enable cybercriminals to identify a business's high-value assets, which will likely increase the impact of their crimes. According to the report, threat actors (including ransomware groups) have already been using AI to improve the efficiency and impact of their attacks. AI-enhanced lateral movement can help attackers move deeper into networks, and AI can also assist with malware and exploit development.
However, for the next 12 months or so, human expertise will continue to be needed in these areas, meaning that any near-term uplift in this threat will be limited to highly skilled attackers. Beyond this, experts envisage that malware will be AI-generated specifically to circumvent current security filters. It is also plausible that highly capable state actors hold repositories of malware substantial enough to train an AI model for this purpose.
As we enter 2025, large language models (LLMs) and generative AI (GenAI) will make it extremely difficult for any businessperson, regardless of their cybersecurity understanding, to spot spoofs, phishing, or social engineering attempts. The report also shows that the window between security updates being released and hackers exploiting unpatched software is steadily shrinking. The NCSC warns that these changes will "highly likely intensify UK cyber resilience challenges in the near term for the UK government and the private sector."
Potentially catastrophic results
Time and again, we see increasingly sophisticated attacks breaching even Britain's most protected infrastructure. Just last year, as previously reported, hackers accessed sensitive UK military and defence information and published it on the dark web. Thousands of pages of sensitive details concerning maximum-security prisons, the Clyde submarine base, the Porton Down chemical weapons laboratory, GCHQ listening posts and military site keys were exposed to criminals, gravely compromising critical infrastructure.
In the same period, cyber-criminals struck the NHS, exposing the details of more than a million patients across 200 hospitals, including NHS numbers, parts of postcodes, and records of major trauma patients and terror attack victims across the country. The actors responsible remain unknown despite extensive specialist analysis. This echoes the previous year's attack, which left the NHS with a devastating software outage, impairing NHS 111, community hospitals, a dozen mental health trusts, and out-of-hours GP services. That incident created considerable safety risks for the British public, such as incorrect prescriptions and the inability to correctly and professionally assess mentally unwell patients.
In January this year, the UK government released a policy paper introducing the AI Safety Institute. The paper notes that AI is being misused in sophisticated cyber-attacks, to generate misinformation, and to help develop chemical weapons. It also records experts' concern about the possibility of losing control of advanced systems, with potentially "catastrophic and permanent consequences."
AI development out of control
The paper also admits that "At present, our ability to develop powerful systems outpaces our ability to make them safe," adding to existing concern about the safety of AI. While the government pledges to develop and conduct evaluations of AI systems to minimise the harms caused by current systems, this does not take away from the need to remain vigilant about this ever-evolving technology. Another government paper, "Safety and Security Risks of Generative Artificial Intelligence to 2025," lists the most significant AI risks to 2025 as: cyber-attacks (more effective and at greater scale, as previously mentioned, using enhanced phishing and malware); increased digital vulnerabilities as GenAI is integrated into critical infrastructure, raising the possibility of corrupted training data or "data poisoning"; and the erosion of trust in information, as GenAI can create hyper-realistic bots and synthetic media, or "deepfakes." The government assesses that by 2026, synthetic media could make up a substantial portion of online content, risking an erosion of public trust in media outlets and governments. Addressing these risks will demand sustained vigilance from government and business alike.
How UK businesses are affected
For a business, the uncontrolled development and use of AI systems raise concerns about secure access to company systems, data integrity, and the protection of IP, patents and brand image. Small and medium-sized enterprises (SMEs) often operate with tighter budgets and leaner IT teams, making it a challenge to invest in comprehensive cyber solutions, or even to know where to start. According to the NCSC, "SMEs are often less resilient to cyber-attacks due to a lack of resources, skills and knowledge."
Cyber-criminals are wise to this and target businesses of this size with tailored attacks such as AI-enhanced phishing correspondence. In fact, according to the 2024 Sophos Threat Report, over 75% of customer incidents handled were for small businesses. Data collected from SME business protection software indicates that SMEs are targeted (mostly with malware) daily.
Fortunately, hackers' use of AI is still at an early stage, though it is bound to become increasingly sophisticated as the technology continues to develop at its current rapid pace. There is still time to protect you and your business, and the Thrive team is highly experienced in guiding and supporting SME businesses every step of the way. Contact us today.