
In this episode of The New Stack Makers podcast, we chat with Camille Eddy, whose interest in Explainable AI (XAI) started during an advanced robotics internship at Hewlett Packard, where she became concerned about the effects of a lack of diversity on social robots.
Her talk “Recognizing Cultural Bias in AI,” given at the 2018 O’Reilly Open Source Conference, started with this question: “How could something as unchanging as the color of a person’s skin prevent them from being able to enjoy any product I create?”
When we think about how we live and go about interacting with technology, Eddy said, we don’t always think about how other people experience those same interactions. Consider the famous failure of a hand soap dispenser at Facebook HQ that wouldn’t dispense soap to a Black hand because it hadn’t been calibrated to detect that skin color.
As an engineer, Eddy looks at the unintended consequences of the technology she makes.
Explainable AI
Although she primarily sees herself as a mechanical engineer, Eddy has also been drawn to artificial intelligence (AI). She often works on both sides at the same time. “Nobody looked at me weird when I had a bunch of lines of code open, and nobody looked at me weird when I was in the machine shop,” she said.
This perspective has allowed her to start the conversation, first internally and then with other people, about how software is put into hardware, and how the software that goes out to customers shapes their experience as a whole.
Understanding why an algorithm made the decision it made is the essence of XAI. We’ve seen examples of systems failing to recognize skin tones or reaching conclusions without anyone understanding why the algorithm reached them, she said.
As the industry tackles how to create better tools and build better understanding and awareness of AI, XAI and fairness tools have started emerging.
You have to be really careful to make sure the results you’re getting are actually the results you intended to get, she said. Eddy cited a study from the University of Washington in which researchers thought their model was distinguishing wolves from huskies, but realized the algorithm was actually detecting snow vs. no snow in the training photos. The algorithm was working correctly, just not as the researchers intended.
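The wolves-vs.-huskies finding came out of University of Washington work on LIME, a technique that perturbs an input and fits a simple local model to show which parts of it drove a prediction. Below is a minimal sketch of that idea using the open source lime package; the random photo and the classify_batch function are hypothetical stand-ins for a real data set and trained model.

```python
# Minimal sketch (assumes the `lime` and `scikit-image` packages are installed).
# LIME perturbs an image and fits a simple local model to show which regions
# drove a prediction -- the kind of check that exposed the "snow vs. no snow"
# shortcut in the wolf/husky study.
import numpy as np
from lime.lime_image import LimeImageExplainer

rng = np.random.default_rng(0)

def classify_batch(images: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real model: returns class probabilities
    with shape (n_images, 2). Replace with your own predict function."""
    probs = rng.random((len(images), 2))
    return probs / probs.sum(axis=1, keepdims=True)

photo = rng.random((128, 128, 3))  # stand-in for a husky-or-wolf photo

explainer = LimeImageExplainer()
explanation = explainer.explain_instance(
    photo, classify_batch, top_labels=2, num_samples=200
)

# Regions that contributed most to the top prediction. If the mask lights up
# the snowy background instead of the animal, the model has learned
# "snow vs. no snow," not "wolf vs. husky."
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True
)
```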
Hence the recent slew of fairness tools from Google and IBM. Google’s What-If Tool lets you pull levers and move things around to understand the context of your data, look at the data as a whole instead of focusing on certain points, or ask specific questions like: Am I doing this right? Have I factored in all the use cases I need to?
It lets a researcher say, here’s my research, here’s the conclusion and here are all the data points, and then answer questions about the reasoning behind the results.
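For a sense of what pulling those levers looks like in practice, here is a minimal notebook sketch of loading data into the What-If Tool via the witwidget package. The two-feature make_example helper and the constant predict_fn are hypothetical placeholders for a real data set and model, and the sketch assumes TensorFlow and witwidget are installed.

```python
# Minimal notebook sketch of wiring a model into the What-If Tool.
# The features, examples and predict_fn below are illustrative placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(has_snow: int, fur_length: float) -> tf.train.Example:
    """Build a tiny tf.Example with two made-up features."""
    return tf.train.Example(features=tf.train.Features(feature={
        "has_snow": tf.train.Feature(int64_list=tf.train.Int64List(value=[has_snow])),
        "fur_length": tf.train.Feature(float_list=tf.train.FloatList(value=[fur_length])),
    }))

examples = [make_example(1, 4.2), make_example(0, 6.1), make_example(1, 5.0)]

def predict_fn(batch):
    """Hypothetical model: returns [p(husky), p(wolf)] for each example."""
    return [[0.7, 0.3] for _ in batch]

config = (
    WitConfigBuilder(examples)          # the data points to slice and probe
    .set_custom_predict_fn(predict_fn)  # how the tool queries your model
    .set_label_vocab(["husky", "wolf"])
)
WitWidget(config, height=720)  # interactive UI: edit a point, re-run, compare
```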
It’s a demonstrable method to see why the algorithm is coming up with the results it’s returning and a way of looking inside the black box that is AI. “We can make informed decisions,” said Eddy, “and people can go back and understand where those decisions came from.”
Listen in to hear more about the huskies vs. snow project and how the way we use algorithms can have serious consequences for our personal knowledge.
If you’re interested in learning more, Ms. Eddy will be teaching an eight-hour tutorial on using fairness and XAI tools later this year at the Agile Testing Conference.
In this Edition:
3:35: Being an engineer at the intersection of hardware and software.
5:53: “Explainable AI” and what it is.
10:20: Looking at the data from all sides.
13:04: The way we use algorithms can have serious consequences for our personal knowledge.
16:53: Applying values to the software that we write.
19:42: Discussing the rise in algorithms.
Feature image via Pixabay.