BLOG: Artificial Intelligence and Unconscious Bias
News and information from the Advent IM team.
Only six months ago the Government was urged to release more details about its plans to increase the use of AI to risk-score benefits claims. In the summer, the Department for Work and Pensions (DWP) revealed its plans to widen the use of Artificial Intelligence to tackle fraud, following the uplift in fraudulent claims during the Covid-19 crisis, when some in-person checks were suspended.
Fairness in AI systems is one of the UK Government's key principles for AI, set out in the AI Regulation White Paper. Despite promising that safeguards were in place, it was announced this week that the department has pulled the use of AI for flagging claims. The system's purpose was to analyse historical data to identify higher-risk claims, which were then flagged and referred for investigation by officials (or humans).
The announcement from the DWP followed concerns raised by campaigners, who suggested that any bias in the system could lead to unfair payment delays for legitimate claimants.
When asked if the technology could lead to a situation similar to the Horizon scandal, where postmasters were prosecuted for theft using evidence from faulty software, Neil Couling, a senior DWP civil servant, replied: "I really hope not".
That doesn't sound promising, does it? A question for us to ask is: are we ready for AI-driven systems? AI has enormous potential to reduce repetitive, manual work, but in such important public services, where any form of delay or disruption can have a real impact on people's day-to-day lives, are we ready to trust AI to be in control?
When these systems discriminate, they typically do so against some of society's most vulnerable people. The fact that the law presumes computer data to be reliable makes it all the more important to ensure system integrity before deployment. In October 2023, the government announced that UK organisations could apply for up to £400,000 in government investment funds to help tackle bias and discrimination in AI systems. While there are some tools already available to test for potential bias in your systems, these have predominantly been developed in the USA and often fail to align with UK laws and regulations.
AI could, if configured correctly, identify and reduce the impact of unconscious human bias, but it can also make the problem worse by baking in and deploying these biases at scale in sensitive application areas such as recruitment, healthcare, policing and, in this case, the DWP's use of AI to flag potentially fraudulent claims. According to the McKinsey Global Institute, the underlying data, rather than the algorithm itself, is most often the main source of the issue: models may be trained on data containing human decisions, or on data that reflects second-order effects of societal or historical inequities.
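To make that mechanism concrete, here is a minimal sketch using entirely hypothetical claims data: a naive risk score "learned" from historically skewed referral decisions simply reproduces that skew, flagging one group far more often than another.

```python
# Minimal sketch (hypothetical data): a naive risk score learned from
# historically biased referral decisions reproduces that bias verbatim.
from collections import defaultdict

# Historical records: (postcode_area, was_referred_for_investigation).
# Assume past caseworkers referred claims from area "B" disproportionately.
history = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 70 + [("B", 1)] * 30

# "Training": learn the referral rate per group from the historical data.
counts, referrals = defaultdict(int), defaultdict(int)
for group, referred in history:
    counts[group] += 1
    referrals[group] += referred

risk_score = {g: referrals[g] / counts[g] for g in counts}
print(risk_score)  # {'A': 0.1, 'B': 0.3} -- the historical skew, unchanged

# A basic fairness check before deployment: compare flag rates across groups.
# A ratio far from 1.0 means the model has baked in the old human bias.
disparity = risk_score["B"] / risk_score["A"]
print(f"Group B is flagged {disparity:.1f}x as often as group A")
```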
"Unconscious (or implicit) bias is a term that describes the associations we hold, outside our conscious awareness and control. Unconscious bias affects everyone," as Imperial College London's definition explains.
Whether it's saving time, budget or staffing resources, AI can help you do more with less: time-consuming tasks can be handed off to it. But in a climate of growing cyber threats, with AI tools presenting a new risk surface, it is essential to prioritise transparency and trust when deploying AI.
In a more amusing AI news story this month, DPD had to apologise after its AI-powered chatbot began to swear at users and called the company itself 'useless'. You personally may agree, but as an organisation it is not a good look to have your own chatbot discredit your organisation's performance. Following the incident, the courier company explained that it had recently updated its chatbot to include a ChatGPT-style large language model, allowing it to produce more human-like responses. In this case, the model had been manipulated to give responses outside the parameters DPD had pre-approved, leading to the next question… are we ready for AI, and can we control it?