Is there ethical bias within AI machine learning?

Ethics within AI is undoubtedly a growing area of debate. The Guardian recently reported that Google is to make changes to its research process following the dismissal (and the resulting backlash from staff) of two of its leading ethics specialists, who had raised concerns about racial inequality within the Google algorithm. The story of Margaret Mitchell and Timnit Gebru has struck a chord with many, highlighting the growing concern about potential ethical issues in AI outputs.

It’s a fascinating topic and one we’ve often considered in the development of our own deep learning AI platform.

Ethical biases within statistical machine learning

It’s widely known that traditional machine learning based on statistical modeling will inherently be biased in one way or another. This is because humans shape the very foundations of this AI technology. Unfortunately, we bring all kinds of biases (past experiences that influence our perceptions and reactions to information) when we shape our AI platform’s requirements. And these can unintentionally translate into ethical issues through lack of inclusion or consideration at the point of development.

A well-known example of this is the Amazon recruiting tool. It was supposed to pre-filter CVs so recruiters could quickly shortlist the most promising candidates. However, it turned out to be great at filtering out CVs from female candidates for engineering roles. Unfortunately, the clever AI-powered solution suffered gender bias because predominantly male engineering CVs had been used as examples during its ‘training’ process. It considered being female as a negative factor in the recruitment process.
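To make that mechanism concrete, here is a deliberately tiny, purely illustrative sketch (the CVs, tokens, and scoring rule are all invented for this example and bear no relation to Amazon's actual system). A naive scorer trained on historical hiring outcomes ends up assigning a negative weight to a token that merely signals gender:

```python
from collections import Counter

# Hypothetical training set: tokenised CVs of past hires (label 1) and
# rejections (label 0). Because historical hires were predominantly male,
# a token that merely signals gender co-occurs only with rejection.
training = [
    ("python embedded systems", 1),
    ("java backend services", 1),
    ("python women's coding society", 0),
    ("java women's engineering network", 0),
]

def token_weights(data):
    """Naive scorer: hire rate of CVs containing each token, minus 0.5."""
    hires, totals = Counter(), Counter()
    for text, label in data:
        for tok in set(text.split()):
            totals[tok] += 1
            hires[tok] += label
    return {tok: hires[tok] / totals[tok] - 0.5 for tok in totals}

weights = token_weights(training)
print(weights["women's"])  # -0.5: learned as a negative hiring signal
```

Because the token "women's" only ever co-occurs with rejected CVs in this toy history, the scorer learns it as a negative signal, which is exactly the pattern reported in the Amazon case, just at miniature scale.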

And this is just one example among many that exist. Carry out a simple online search, and you will uncover a whole host of articles offering advice on overcoming bias in machine learning, demonstrating just how widespread an issue this is.

Can deep-learning AI overcome ethical imbalances in data?

Because it involves far less human intervention, deep learning does not carry the same expectation of bias as more traditional forms of AI. Using neural networks that replicate human-like behaviors, such as rationalization and problem-solving, these more sophisticated AI systems use data to learn and adapt, so their output is driven by the data itself rather than by human assumptions. Their results are therefore less prone to ethical concerns – but bias can still occur, and here’s why.

Take, for example, the subject of fair ethnic representation for a product or service powered by AI insights. According to the government’s ethnicity facts and figures website, in the UK alone, 87% of people are White, and just 13% belong to Black, Asian, Mixed, or other ethnic groups. This data is clear and should be applied correctly within the machine’s analysis. However, Zamila Bunglawala, Deputy Director of Strategy and Insight at the Cabinet Office’s Race Disparity Unit, recently revealed that ethnic minority groups are poorly represented in certain industries and sectors. But as yet, we don’t know just how widespread those disparities are. Therefore, a deep-learning AI system would take the data that exists and conclude that these ethnic groups perhaps do not need to be considered in certain industries. It is a simple example but illustrates how ethnic inequalities might arise in AI results and lead to ethical concerns.
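As a rough illustration of that point (the industry figures below are invented for the example; only the 87%/13% baseline comes from the article), comparing the shares a model actually observes in its training data against the population baseline shows how an under-represented group can quietly disappear from its conclusions:

```python
# Population baseline from the article vs. a hypothetical, skewed workforce
# sample for one industry (the observed figures are illustrative only).
population_share = {"White": 0.87, "Ethnic minority": 0.13}
observed_share   = {"White": 0.97, "Ethnic minority": 0.03}

def representation_gap(observed, baseline):
    """Ratio of observed share to population share; < 1 means under-represented."""
    return {group: observed[group] / baseline[group] for group in baseline}

gaps = representation_gap(observed_share, population_share)
# A model trained only on observed_share never "sees" the baseline, so it
# learns the 3% figure as ground truth unless the gap is made explicit.
```

Here the minority group appears at less than a quarter of its population rate, yet a system fed only the observed data has no way of knowing that anything is missing.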

How can ethical inequality in AI insights be overcome?

There is no quick fix for this issue. However, knowledge is power. Using deep-learning AI as a first step will reduce the likelihood of biases that can cause inaccurate insights and result in ethical issues.

Being aware of the potential for imbalances in AI output is key to accounting for possible ethical issues within your insights. Ensuring your AI system is provided with the information it needs to identify and understand potential areas of bias from the outset will help ensure its output reflects reality as closely as possible.
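One simple way to act on that awareness is to audit a model's output by group after the fact. The sketch below (the group labels and predictions are made up for illustration) computes per-group positive-prediction rates, a basic demographic-parity-style check:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive predictions per group, a simple parity check."""
    pos, tot = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        tot[grp] += 1
        pos[grp] += pred
    return {g: pos[g] / tot[g] for g in tot}

# Hypothetical model output audited against group membership.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = selection_rates(preds, groups)  # {'A': 0.75, 'B': 0.25}
```

A large gap between groups does not prove unfairness on its own, but it flags exactly the kind of imbalance that warrants the human review the article recommends.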

These incredibly intelligent AI machines can reveal insights from data that no human could ever achieve. They are already transforming the way we carry out medical research, helping develop self-driving vehicles, and uncovering operational efficiencies never considered before. Their analytical abilities far outperform ours. However, a little additional human logic can help ensure that their data interpretation is on the right track.

About Cloudapps deep learning AI

Cloudapps is the first and only provider of ‘Deep Learning’ powered AI in the CRM sector.

It offers a powerful sales development tool that can not only improve sales effectiveness in the here and now, but also continuously learn as the market develops – enabling organisations to futureproof their sales and CRM activities for whatever lies ahead.

The Cloudapps AI engine is unique in its ability to generate forecasts that are over 95% accurate. This accuracy comes from:

Learning from rich behavioural data: It generates a rich data audit trail for every deal based on high-value sales behaviours, not simply sales activity, and the more data you feed it, the greater the accuracy.

Uncovering insights from the deal journey: The data picture it builds is time-sequenced, recording not just which sales behaviour happened but crucially when.

Using the latest innovation: Not all AI is as smart. The Cloudapps AI engine uses the very latest ‘Deep Learning’ algorithms that significantly outperform traditional AI.

The Cloudapps Deep Learning AI engine helps our current customers achieve results that include:

  • 95% forecast accuracy
  • 60% more selling time
  • 20% increase in win rate

If you would like to understand how our technology could enhance the work of your data scientist and bring new insights to your sales function, contact us today.