
AI in Business: Efficiency in Exchange for Risks?

Artificial Intelligence (AI) has captured everyone’s imagination, and many businesses are now exploring the potential for AI tools to reduce costs and create efficiencies. The UK is home to over 3,000 AI enterprises employing 129,000 people, and AI is projected to contribute over £42 billion to the economy through applications such as machine learning, data analysis, sensor and signal processing, and automation.

However, the technology is still in its early phases, and adopting any new technology brings unanticipated hazards.

This blog aims to make business owners aware of these newly understood risks and to answer a question: can AI tools such as ChatGPT be used safely in the corporate environment, or should they be adopted only after cautious testing and close scrutiny of the data privacy implications?

Uncontrolled growth

HSBC surveyed over 500 UK businesses and found that more than a third plan to use AI to generate business efficiencies and replace staff, while research by Startups Magazine suggests over 60% of SMEs are considering the same. A survey by The Times found that British businesses are warier of AI than their US counterparts, and organisations such as the British Retail Consortium recognise the business potential but warn against the risks of using generative AI tools such as ChatGPT without due care.

Reaching 100 million monthly active users within its first two months, ChatGPT is widely regarded as miracle software – but it is far from it. The Law Society investigated reports of AI fabricating reference links to material that did not exist but had supposedly been published by a major UK newspaper.

These fabrications are notoriously referred to as “hallucinations,” and they are earning AI chatbot software a reputation for being, at best, imprecise and, at worst, untrustworthy – yet such tools are increasingly used in front-line B2C services as the link between customers and businesses.

The “black box” problem

Essentially, these tools are a black box: they ingest user data with no way for the user to check what is being collected, how it is used to formulate a response, or where it goes afterwards. This has significant privacy implications when employees input sensitive corporate information.

Samsung is a case in point: this year, employees pasted internal code into ChatGPT while trying to fix a bug, causing an accidental data leak and prompting a blanket ban on generative AI tools over intellectual property risk. This should come as no surprise, given that ChatGPT’s privacy policy states that OpenAI “may provide your personal information to third parties without further notice to you unless required by the law.”

The policy also notes that these third parties include “vendors.” CISOs can best protect corporate information by discussing and working closely with data scientists – but to prevent such readily preventable incidents, staff must also review the privacy policy of each AI tool before use.
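
As an illustration of what such a control can look like in practice, below is a minimal sketch (in Python) of a pre-submission filter that redacts obvious secrets before text leaves the corporate boundary. The patterns and names here are hypothetical examples for illustration, not a substitute for a proper data loss prevention (DLP) product.

    import re

    # Hypothetical patterns for illustration only -- a real deployment
    # would rely on a dedicated DLP tool with far broader coverage.
    SENSITIVE_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
        "UK_NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    }

    def redact(text: str) -> str:
        """Replace anything matching a known-sensitive pattern before
        the text is sent to an external AI tool."""
        for label, pattern in SENSITIVE_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    prompt = "Fix this bug: key sk-abcdef1234567890XYZ, owner jane.doe@example.com"
    print(redact(prompt))
    # Fix this bug: key [REDACTED API_KEY], owner [REDACTED EMAIL]

Even a crude filter like this would have caught the kind of copy-and-paste leak described above; the important design point is that the control sits on the company’s side of the boundary rather than relying on the AI vendor’s policies.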

Lack of legal protection

In 2022, the UK Parliament began a review of the legal protections in place for the use of AI. It concluded, in a March 2023 white paper, that AI protection has significant gaps: it currently relies on existing legal frameworks, such as financial services regulation, that were never designed with AI’s consequences in mind. For businesses, the implications are unexplored legal territory.

Businesses have recently been surveyed to gauge their awareness of these risks. Surprisingly, the 2023 KPMG Generative AI Survey found that 68% of the 225 executives polled had not appointed any team or person to respond to the generative AI phenomenon, leaving it to the IT department in general and depriving employees of specific guidance in the face of data risk.

A further 60% believed they were one or two years away from doing so. But while executives mull over appropriate generative AI policies, employee use is increasing: in a recent survey by Fishbowl, 43% of 11,793 respondents admitted to regularly using AI tools like ChatGPT for work tasks, and 70% of those do so without their boss knowing.

Hidden bias and secret profiling

Even when used securely, AI has proven to act with extreme bias based on the information it gathers from the world around it. High-profile examples include Amazon’s recruitment bot, which preferred male candidates to female ones, and police facial recognition software shown to be wildly inaccurate at recognising darker skin tones – leading the London Met to stop and search many innocent Black schoolchildren after they were flagged by the software.

Without knowing whether any of this bias exists in your own AI-assisted operations, the consequences for your business can be severe, ranging from flat-out errors to devastating racial or gender bias. In the current AI climate, bias is unfortunately inevitable, and combating it is an ongoing battle for developers. Until a solution is found, companies must continuously vet AI output to ensure no unethical results have unwittingly been produced that could harm the business.
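
What continuous vetting means in practice varies, but the workflow can be as simple as holding certain AI outputs for a human decision rather than acting on them automatically. The sketch below (Python, with a hypothetical term list) shows where that human-in-the-loop gate sits; a keyword check is deliberately crude and is not real bias detection.

    # Outputs touching on protected characteristics are held for a human
    # reviewer rather than acted on automatically. The term list is a
    # hypothetical placeholder, not a genuine bias detector.
    PROTECTED_TERMS = {"gender", "race", "ethnicity", "religion", "age", "disability"}

    def needs_human_review(ai_output: str) -> bool:
        """Return True if the output mentions a protected characteristic."""
        words = set(ai_output.lower().split())
        return bool(words & PROTECTED_TERMS)

    verdict = "Candidate rejected: gender and age do not fit the team profile."
    if needs_human_review(verdict):
        print("Held for human review before any action is taken.")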

Hackers using AI

Data leaks and bias are not the only dangers AI presents. For several years, hackers have constructed increasingly personalised spear-phishing attacks that have become nearly impossible for employees to spot. 95% of business network attacks result from successful spear phishing: highly specific emails that mimic routine correspondence between superiors and co-workers to gain the victim’s trust.

The worst thing about spear phishing? Its effectiveness: it delivers a 40x greater return than regular phishing. The best thing? How hard it is to scale: 77% of campaigns target just ten inboxes, and 33% target only one. But the latter is about to change, thanks to AI tools like ChatGPT.

Hackers are now using AI to gather information quickly, applying algorithms similar to those used in ad targeting to leverage that data and give the victim a sense of urgency.

They also use the AI’s demographic and acquired personal data to predict the best targets. To top it off, AI-generated personalised language can make emails (or even calls using deepfake audio) sound exactly as if they came from a boss, friend, bank, or co-worker – with these tools allowing hackers to learn and exploit working relationships.

So, how can your employees protect themselves against data leaks and similar risks?

Companies should set clear guidelines for responsible AI use, developing processes for ensuring the quality of AI output, guaranteeing safety, and reporting any concerns.

The Managing Director and Chief AI Officer of Boston Consulting Group finds that “leadership should explicitly communicate what information should or should not be provided to the AI model” to avoid data being compromised – and stresses that lacking a clear plan of action can harm profitability and tarnish a business’s reputation.

How can you protect yourself and your employees from AI-enabled spear-phishing?

Employee education is crucial. Because these attacks are so new and the language they use so accurate, verifying unusual requests is essential to defending against spear phishing: most people wouldn’t otherwise question an email or phone call from a trusted co-worker or boss. And if employee error does occur, barriers such as anti-malware software and multi-factor authentication – or, better still, contextual authentication – can serve as a secondary line of defence.
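
As a concrete example of such a barrier, many mail gateways stamp each inbound message with an Authentication-Results header recording SPF and DKIM checks. The sketch below (Python, standard library only) shows the kind of cheap signal a filtering rule can act on; it assumes the upstream mail server adds that header, and it is one signal among many rather than a complete defence.

    from email import message_from_string

    def fails_authentication(raw_email: str) -> bool:
        """Flag messages whose Authentication-Results header reports
        an SPF or DKIM failure."""
        msg = message_from_string(raw_email)
        results = msg.get("Authentication-Results", "").lower()
        return "spf=fail" in results or "dkim=fail" in results

    raw = (
        "Authentication-Results: mx.example.com; spf=fail; dkim=fail\n"
        "From: ceo@example.com\n"
        "Subject: Urgent wire transfer\n"
        "\n"
        "Please action this today."
    )
    if fails_authentication(raw):
        print("Quarantine the message and verify with the sender out-of-band.")

That out-of-band verification – a phone call to a known number, never a reply to the suspicious email itself – is what ultimately defeats even perfectly personalised language.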

Undoubtedly, AI has increased business efficiency and convenience to previously unheard-of levels. However, the abundance of new ethical issues, hazards to data security, and inaccuracies underline the need for businesses to take a cautious and responsible approach.

Partner with Thrive

Thrive is highly experienced in supporting small to medium-sized businesses in countering the latest threats, and we can work with you to ensure that your employees and business protocols are as resistant as possible to these emerging risks.

Contact Thrive today to discuss how we can help protect your business.