24 Mar 2022

What techniques can help to prevent bias in algorithms?

Pete Wilson, EMEA Public Sector Industry Architect at Pegasystems, writes as part of techUK's Emerging Tech in Policing Week. #DigitalPolicing

Many organisations are rightly adopting a “Segment of 1” approach to dealing with customers, citizens, patients, consumers or whatever constituency they serve, applying a hyper-personalised experience to drive engaging, efficient and conclusive digital outcomes.

Naturally, that approach draws on increasingly large quantities of data insight, from an individual's interactions with the organisation through to broader third-party and demographic data and all points in between. Processing that quantity of data to create the insight needed spans a spectrum from algorithmic rules at one end to fully ML-driven AI at the other, all of which can acquire unintentional bias.

That’s bad enough for commercial organisations, but for government it creates huge issues around trust, and it is particularly problematic for the UK philosophy of Policing by Consent. This calls for a means of detecting unwanted discrimination: using predictive analytics to simulate the likely outcomes of a given strategy against the thresholds that are set (which can be aligned to relevant policy).

Such a simulation can uncover when a bias risk reaches unacceptable levels, for example when the targeted constituency skews toward or away from specific demographics, allowing operations teams to pinpoint the offending algorithm and adjust the strategy to ensure a fairer, more balanced outcome for everyone.
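
To make the idea concrete, here is a minimal sketch in Python. It assumes the strategy has already been simulated offline to produce a list of would-be decisions, each tagged with the demographics of the person it targets; the record shape, the function names and the 0.8 ratio are illustrative assumptions, not any particular product's API.

    from collections import defaultdict

    # Each simulated decision records the demographics of the person it
    # targets and whether the strategy would select them. This record
    # shape is assumed for illustration.
    def selection_rates(decisions, attribute):
        # attribute value -> [selected count, total count]
        counts = defaultdict(lambda: [0, 0])
        for record in decisions:
            value = record[attribute]
            counts[value][1] += 1
            if record["selected"]:
                counts[value][0] += 1
        return {v: sel / tot for v, (sel, tot) in counts.items()}

    def bias_alerts(decisions, attribute, threshold=0.8):
        # Flag groups whose selection rate falls below `threshold` times
        # the best-off group's rate (a demographic-parity style check;
        # the threshold itself is a policy choice).
        rates = selection_rates(decisions, attribute)
        if not rates:
            return []
        best = max(rates.values())
        return [(value, rate) for value, rate in rates.items()
                if best and rate / best < threshold]

    # Screen a (toy) simulated strategy output for age skew.
    simulated = [
        {"age_band": "18-30", "selected": True},
        {"age_band": "18-30", "selected": True},
        {"age_band": "60+", "selected": True},
        {"age_band": "60+", "selected": False},
    ]
    print(bias_alerts(simulated, "age_band"))  # -> [('60+', 0.5)]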

Best practice in Ethical Bias Checking should encompass some important methods that are not yet common in the industry:

Making bias detection simple and easy: rather than applying checks to individual process strands, which can be time-consuming and error-prone, it should be possible to simulate an entire engagement strategy at once, across all connected channels and associated processes.

Offering more flexibility in controlling bias: acceptable thresholds for elements that could cause bias, such as age, gender or ethnicity, should be adjustable to reflect scenarios where specific outcomes may be justified by policy, giving teams the flexibility to consciously widen or narrow the thresholds (see the sketch after this list).

Providing continual bias protection: even as strategies are adjusted and new elements or actions are added, it should remain possible to screen engagement programmes for bias in outcomes.

Transparency: above all, organisations should be able to control the transparency of their artificial intelligence (AI) engagement models.
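
The sketch below illustrates two of the points above: per-attribute thresholds that policy owners can consciously widen or narrow, and a single screening routine that can be re-run whenever the strategy changes. It reuses bias_alerts from the earlier sketch; the names and configuration shape are assumptions, not a specific product's format.

    # Per-attribute thresholds; raising a value narrows the tolerance,
    # lowering it widens the tolerance where policy justifies it.
    THRESHOLDS = {
        "age_band": 0.80,
        "gender": 0.90,
        "ethnicity": 0.90,
    }

    def screen_strategy(simulate, thresholds=THRESHOLDS):
        # Simulate the whole engagement strategy once, then check every
        # monitored attribute in one pass instead of strand by strand.
        decisions = simulate()
        return {attribute: bias_alerts(decisions, attribute, threshold)
                for attribute, threshold in thresholds.items()}

    # Continual protection: call screen_strategy from the release
    # pipeline so every change to the strategy is re-screened before it
    # goes live, and block deployment if any attribute raises alerts.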

None of that is necessarily a panacea for perfect automated algorithms. Many organisations still find that the kind of “Next Best Action” function this speaks to is best positioned as a human/machine teaming arrangement. In this regard, it is clear that the future of AI-based decisioning is a combination of AI insights and human-supplied ethical considerations. Even when a channel does not feature a “real-time human”, AI can still successfully power the decisioning, provided human and machine insights are consciously embedded in the decisioning engine, inside an ethical framework.

As I’ve already said, transparency is key: being able to explain exactly why a decision was made. Fundamentally, though, humans need to take responsibility for AI, building on its strengths and recognising and compensating for its weaknesses. The only way for organisations to change the conversation and the comfort level around AI is to take control of it, prove its value through responsible applications, and direct its power toward improving outcomes.
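
As a final illustration, here is a minimal sketch of decision-level transparency: each automated decision carries a human-readable record of why it was made, so a reviewer can audit and challenge it after the fact. The class and field names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ExplainedDecision:
        action: str
        score: float
        reasons: list = field(default_factory=list)

    def decide(candidates):
        # Pick the highest-scoring action and record why it won, so the
        # outcome can be explained exactly, not just reported.
        best = max(candidates, key=lambda c: c.score)
        best.reasons.append(
            f"highest propensity score ({best.score:.2f}) "
            f"among {len(candidates)} candidate actions")
        return best

    offers = [ExplainedDecision("renewal reminder", 0.62),
              ExplainedDecision("service check-in", 0.48)]
    chosen = decide(offers)
    print(chosen.action, chosen.reasons)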

 

Author:

Pete Wilson, EMEA Public Sector Industry Architect, Pegasystems

 

Georgie Morgan

Head of Justice and Emergency Services, techUK

Georgie joined techUK as the Justice and Emergency Services (JES) Programme Manager in March 2020, then becoming Head of Programme in January 2022.

Georgie leads techUK's engagement and activity across our blue light and criminal justice services, engaging with industry and stakeholders to unlock innovation, problem solve, future gaze and highlight the vital role technology plays in the delivery of critical public safety and justice services. The JES programme represents suppliers by creating a voice for those who are selling or looking to break into and navigate the blue light and criminal justice markets.

Prior to joining techUK, Georgie spent 4 and a half years managing a Business Crime Reduction Partnership (BCRP) in Westminster. She worked closely with the Metropolitan Police and London borough councils to prevent and reduce the impact of crime on the business community. Her work ranged from the impact of low-level street crime and anti-social behaviour on the borough, to critical incidents and violent crime.

Email:
[email protected]
LinkedIn:
https://www.linkedin.com/in/georgie-henley/
