Security & bias →

Adapting to security and bias challenges

The rise of artificial intelligence comes with new security and bias challenges that few organisations are aware of, or have build protective systems to protect their people, their suppliers and their customers from.

Although we're yet to understand the specific challenges in great depth, organisations should be aware and accountable for the choices they make, in regard to using AI, which in turn, effects security and biases.

Online security
Fair Brand application process

Advanced phishing and social engineering

Cyber attackers are using AI to generate highly convincing, personalised 'phishing' emails  and deepfake audio/video for voice scams, making fraud harder to detect.

Organisations should be responsible from protecting both internal and external stakeholders from these types of AI based threats.

Key statistic (Source: Click here)

%

Surge in phishing attacks since rise of gen AI in late 2022

Worst-case scenario:

Organisations don't understand, or  act, to put protective systems in place, leaving their employees, suppliers and customers highly vulnerable.

Best-case scenario:

Organisations act preemptively, and install systems and processes to protect all stakeholders, removing threats and limiting potential effects.

Prompt injection attacks

Prompt injection attacks are one of the most widely reported weaknesses in LLMs (Large Language Models) -  an AI system that uses large datasets to produce human like text.

Prompt injection attacks are when an attacker creates an input, designed to make the model behave in an unintended way. This could involve causing it to generate offensive content, reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input.

Key statistic (Source: Click here)

%

Organisations adopting agentic frameworks to boost productivity, impacted by risky prompts

Worst-case scenario:

Organisations don't understand, or  act, to put protective systems in place, leaving their employees, suppliers and customers highly vulnerable.

Best-case scenario:

Organisations act preemptively, and install systems and processes to protect all stakeholders, removing threats and limiting potential effects.

Data poisoning

Data poisoning in AI is a cyberattack where malicious actors influence and corrupt an AI model's training datasets. This can trick it into learning incorrect patterns, exhibiting bias, or creating hidden vulnerabilities (backdoors) that can be exploited later.

Organisations using AI systems that are derived from LLMs (Large Language Models) need to understand the risks, and be accountable for their decisions.

Key statistic (Source: Click here)

%

AI model accuracy (reduced by data poisoning)

Worst-case scenario:

Datasets become influenced by attackers, with malicious intentions. Real-world applications (such as medical diagnosis or autonomous vehicles) become highly untrustworthy and potentially dangerous.

Best-case scenario:

AI system developers adopt a "defense-in-depth" approach that secures the entire data lifecycle, from ingestion to training. This would include rigourous validation and potentially restructing the training process.

Data biases

AI systems tend to take on human biases and amplify them, causing organisations who use that AI, to become increasingly biased themselves, which, consequently, create a feedback loop.

As developers of AI systems and as organisations that use them, we must ensure the training data is diverse and representative. It is important that the algorithm is regularly audited and updated to address biases.

Key statistic (Source: Click here)

%

Up to 38.6% of AI data was biased, depending on database

Worst-case scenario:

AI system algorithms are unaudited and unregulated, which leads to an increasing bias feedback loop. 

Best-case scenario:

Policy is put in place to control AI systems, and for the developers to be held accountable. This would lead to structured auditing and adjustments to the AI algorithm to reduce biases.

Summary

As organisation directors, managers, board members and executives, it is important to protect stakeholders from the various security and bias threats.

The National Cyber Security Centre has a range of articles and tools available, that can help organisations limit these, such as;

- Free cyber governance training
- Free cyber toolkit