Explainable AI: AI systems are only as good as the data we put into them

Dev Mookerjee, CTO, IBM Watson Solutions, Asia Pacific

The AI market is expected to reach $47 billion in 2020, and studies show that over 80 percent of global enterprises are either already implementing AI technologies or planning to do so in the near term. Yet two clear obstacles are proving to be showstoppers for broad-scale adoption of AI: the lack of skills and the potential liabilities that might be created. The dearth of good data scientists and AI engineers is well documented; this article provides my point of view on the latter.

The perception of potential liabilities stems from the “black box” approach to AI technologies, which makes them impenetrable to questioning and incompatible with the level of transparency any regulated organisation needs. This fosters a lack of trust in these systems and raises ethical questions, both organisational and societal, about decisions made using AI.

Explainable AI

In the book “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future”, Kevin Kelly states, “A good question is what humans are for.” I would like to extend that: a good answer is what AI is for. To me, these two statements provide a great perspective on the value of a human and a machine working together. People excel at things like value judgement, common sense and setting goals. Machines excel at large-scale mathematical calculation, pattern discovery and statistical reasoning. Together we make better decisions, arrive at more confident answers and carry less negative bias in our decisions.

But as humans and machines increasingly work together, we need to ensure we have “Explainable AI” systems in place, where the algorithms used are transparent, or at the very least interpretable. In other words, we need to be able to explain their behaviour in terms that humans can understand, from how they interpreted their input to why they recommended a particular output. That is the only way we can build trust in them, and as a result, the only way we will adopt and scale AI in an ethical manner.
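To make this concrete, here is a minimal sketch of what an interpretable, per-decision explanation can look like. It uses a simple linear scoring model with hand-set, purely illustrative feature names and weights (not any real system's model): each feature's contribution to the final score is surfaced so a human can see why a particular output was recommended.

```python
# A minimal sketch of per-decision explanation for a linear scoring model.
# The feature names, weights and applicant values below are illustrative
# assumptions, not taken from any real credit or hiring system.

def explain(weights, features):
    """Return the final score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.5, "years_employed": 3.0}

score, why = explain(weights, applicant)

# Sort contributions so a human reviewer sees the strongest drivers first.
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.2f}")
print(f"score: {score:+.2f}")
```

Real models are rarely this simple, but the principle scales: whatever the underlying algorithm, the system should be able to report which inputs drove a recommendation and in which direction.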

The onus lies on us as business leaders to deploy AI systems that are transparent. Trust in AI comes from repeated, accurate and understandable evidence for the responses a system provides. While “Explainable AI” is mandatory in industries such as healthcare and the judicial system, and for organisations that need to comply with GDPR or FDA regulations, organisations should ensure the transparency of their AI algorithms not because of external requirements, but because it is the responsible thing to do.


AI systems are only as good as the data we put into them. Incorrectly biased data can cause AI systems to generate unfair outcomes with potentially catastrophic end results: qualified candidates can be disregarded for employment, while others can be subjected to unfair treatment in areas such as education or financial lending. As humans and AI increasingly work together to make decisions, we cannot focus on the technology side of the equation alone. We must also consider the human impact and ensure that the people designing and developing the technology are representative of the societies in which it is intended to operate.

Detecting and removing negative bias is not just about the machines. There is a virtuous cycle to ensuring that negative human biases are not replicated or amplified by AI. The more we work to understand AI bias, the better we get at recognising our own bias. The more we inject bias-detection mechanisms into AI, the more AI will be able to help us be less biased ourselves, as we will be alerted when the AI senses a deviation from fair behaviour.
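One widely used bias-detection mechanism of the kind described above is the disparate-impact check (the "four-fifths rule"): if one group's favourable-outcome rate falls below roughly 80 percent of the reference group's rate, a human is alerted to review the model. The sketch below is a generic illustration with made-up group labels and decisions; it is not how any particular product implements the check.

```python
# A minimal sketch of a disparate-impact ("four-fifths rule") bias check.
# Group labels "A"/"B" and the decision data are illustrative assumptions.

def disparate_impact(outcomes, group, reference):
    """Ratio of favourable-outcome rates: group vs. reference group."""
    def rate(g):
        members = [o for grp, o in outcomes if grp == g]
        return sum(members) / len(members)
    return rate(group) / rate(reference)

# (group, favourable_outcome) pairs, e.g. loan approvals
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact(decisions, "B", "A")
print(f"disparate impact: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("potential negative bias detected: alert a human reviewer")
```

The point of the virtuous cycle is exactly this alerting step: the machine flags a statistical deviation from fair behaviour, and a human decides whether the underlying data or process needs correcting.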

Putting Controls around AI

As AI becomes increasingly ubiquitous in all aspects of our lives, it is critical that we develop and train these systems with data that is fair, interpretable and unbiased. In that same spirit, in October this year IBM released “AI OpenScale”, which detects bias and explains how your AI makes decisions. It works with models built on a wide variety of machine learning frameworks, including IBM’s Watson and other popular AI frameworks used by enterprises today.

AI offers enormous potential to transform businesses, solve some of our toughest problems and inspire the world to a better future. While we are still in the early stages of this technological revolution, it is already proving its worth every day. One of the biggest challenges of our time is how we harness the power of any new technology to grow global prosperity without leaving people behind. Creating “Explainable AI” will allow humans and AI to grow together at scale in a responsible manner.







Copyright © 2022 CIOReviewAPAC. All rights reserved.