The Economic Times

Black box problem: Humans can't trust AI, US-based Indian scientist feels lack of transparency is the reason

Trust is a major issue with AI because people are the end-users, says Sambit Bhattacharya.

PTI
Updated: Dec 14, 2019, 03.57 PM IST
Not all algorithms are trustworthy.
NEW DELHI: From diagnosing diseases to categorising huskies, Artificial Intelligence has countless uses but mistrust in the technology and its solutions will persist until people, the "end users", can fully understand all its processes, says a US-based Indian scientist.

Overcoming the "lack of transparency" in the way AI processes information - popularly called the "black box problem" - is crucial for people to develop trust in the technology, said Sambit Bhattacharya, who teaches Computer Science at Fayetteville State University.

"Trust is a major issue with Artificial Intelligence because people are the end-users, and they can never have full trust in it if they do not know how AI processes information," Bhattacharya said.

The computer scientist, whose work includes using machine learning (ML) and AI to process images, was a keynote speaker at the recent 4th International and 19th National Conference on Machines and Mechanisms (iNaCoMM 2019) at the Indian Institute of Technology Mandi.

To buttress his point that users don't always trust solutions provided by AI, Bhattacharya cited the instance of researchers at Mount Sinai Hospital in the US who applied ML to a large database of patient records containing information such as test results and doctor visits.

The 'Deep Patient' software they used had exceptional accuracy in predicting disease, discovering patterns hidden in the hospital data indicating when patients were on the way to different ailments, including cancer, according to a 2016 study published in the journal Nature.


However, Deep Patient was itself a black box, the researchers said.

It could anticipate the onset of psychiatric disorders like schizophrenia, which the researchers said is difficult for physicians to predict. But the new tool offered no clue as to how it did this.

The researchers said the AI tool needed a level of transparency that explained the process behind its predictions, reassured doctors, and justified any changes it recommended in prescription drugs.

"Many machine learning tools are still black boxes that render verdicts without any accompanying justification," physicians wrote in a study published in the journal BMJ Clinical Research in May.

According to Bhattacharya, even facial recognition systems based on AI may come with black boxes.

"Face recognition is controversial because of the black box problem. It still fails for people with dark skin, and makes mistakes when matching with a database of faces. There are good examples including problems with use cases in legal systems," he explained.



Bhattacharya mentioned a project at the University of California, Irvine, where a student created an algorithm to categorise photos of huskies and wolves.

The UCI student's algorithm could almost perfectly classify the two canines. However, in later cross-analysis, his professor Sameer Singh found that the algorithm was identifying wolves based only on the snow in the image background, not on the animals' features.

Citing another example, Bhattacharya said, "If you show an image classification algorithm a cat image, the cat comes with a background. So the algorithm could be saying it is a cat based on what it sees in the background that it relates to a cat."

In such cases, "the problem is that the algorithm does not decouple the background from the foreground totally right", he explained.
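The wolf/husky failure mode can be reproduced on a toy dataset in which a background feature correlates with the label. The sketch below is purely illustrative (made-up feature names and synthetic data, a plain NumPy logistic regression), not the UCI experiment itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Hypothetical per-image features: [snout_length, ear_shape, snowy_background]
wolf = rng.integers(0, 2, n)                 # label: 1 = wolf, 0 = husky
snout = rng.normal(0.2 * wolf, 1.0)          # weak cue from the animal itself
ears = rng.normal(0.2 * wolf, 1.0)           # weak cue from the animal itself
# Snow appears behind almost every wolf photo and almost no husky photo
snow = (rng.random(n) < 0.9 * wolf + 0.05).astype(float)
X = np.column_stack([snout, ears, snow])

# Plain logistic regression trained by gradient descent on log-loss
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - wolf) / n

print(dict(zip(["snout", "ears", "snow"], w.round(2))))
# The 'snow' weight dominates: the model learned the background, not the animal
```

Because the background correlates far more strongly with the label than the animal features do, the learned weight on the snow feature dwarfs the others, which is exactly the shortcut the UCI cross-analysis uncovered.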

A whole new field, 'AI explainability', is trying to explain how algorithms make their decisions.

For instance, in London, researchers from DeepMind, a subsidiary of Google parent company Alphabet, used deep learning to assign treatment priority by analysing patient eye scans.

Their study, published in the journal Nature, noted that the system takes in three-dimensional eye scans, analyses them, and picks cases that need urgent referral.

According to DeepMind researchers, the model gives several possible explanations for each diagnosis, rating each of them, and also shows how it has labelled the parts of the patient's eye.

"Google has invested a lot of effort in developing trustworthy algorithms, or having better algorithms to scrutinize what a deep learning algorithm is doing," Bhattacharya said.

He added that the Local Interpretable Model-Agnostic Explanations (LIME) algorithm is a promising solution for overcoming AI black boxes. LIME lets researchers analyse the "input values" the AI system has used to arrive at its conclusion, Bhattacharya said.

He said such transparency-providing algorithms probe the features an AI uses to draw its conclusions in the first place.

"Generally we are interested in knowing which features were most influential in the decision from the ML algorithm. For example, if it says that a picture is that of a dog, we may find that pointy or floppy ear-features and typical rounded dog nose features are the most important, whereas body hair is not very important," he explained.
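The kind of feature attribution Bhattacharya describes can be sketched in the spirit of LIME: perturb the input, query the black box, and fit a weighted linear surrogate whose coefficients rank the features. This is a minimal NumPy illustration with made-up feature names and a toy "dog classifier", not the actual LIME library:

```python
import numpy as np

# Toy black-box "dog classifier" scoring a feature vector.
# Hypothetical features: [pointy_ears, rounded_nose, body_hair]
# It secretly relies on ears and nose, and barely on body hair.
def black_box(x):
    return 1 / (1 + np.exp(-(3.0 * x[:, 0] + 2.5 * x[:, 1] + 0.1 * x[:, 2] - 2.0)))

rng = np.random.default_rng(0)
instance = np.array([1.0, 1.0, 1.0])   # the input we want explained

# LIME-style local explanation:
# 1) sample perturbations around the instance
samples = instance + rng.normal(scale=0.5, size=(500, 3))
# 2) query the black box on each perturbed sample
preds = black_box(samples)
# 3) weight samples by proximity to the original instance
dists = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-dists**2 / 0.5)
# 4) fit a weighted linear surrogate (interpretable by design)
Xd = np.column_stack([np.ones(len(samples)), samples])
W = np.diag(weights)
coef = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ preds)

importance = dict(zip(["pointy_ears", "rounded_nose", "body_hair"], coef[1:]))
print(importance)  # ear and nose coefficients dominate; body_hair is near zero
```

The surrogate's coefficients play the role of the "most influential features" in Bhattacharya's dog example: ears and nose get large weights, body hair does not.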

Despite emerging solutions to the black box problem, human intervention will still be needed to interpret AI's decisions.

"I believe that in the near future things will be far from perfect. We cannot expect the automatic explanations of AI decisions to be very good. We will need a significant amount of human oversight of the explanation itself, i.e. there will be issues in trusting the explanations themselves," Bhattacharya said.

Copyright © 2020 Bennett, Coleman & Co. Ltd. All rights reserved. For reprint rights: Times Syndication Service