With regulatory pressure mounting and company reputations at risk, transparency in AI is more important than ever. We break down how explainable AI (XAI) solutions work, identify the major players in the space, and analyze what companies need to know about the emerging tech.
Enterprises are eager to reap the benefits of automation and efficiency promised by artificial intelligence, with more than half reporting that they accelerated their AI efforts amid the Covid-19 pandemic, according to a PwC survey.
However, companies must be prepared for the costly — and even dangerous — consequences of errors caused by opaque algorithms, bad data, and model bias.
These issues forced IBM to scrap a $62M healthcare AI project in 2017. The AI system was designed to parse electronic health records and recommend the best possible treatments for cancer patients. While the pilot recommended the same treatment plans as physicians 90% of the time, the data used to train it became outdated, meaning the system could not be approved for clinical use.
Social bias in AI is also a major issue for companies, leading to sunk R&D costs and damaged reputations. For example, in 2018, Amazon shut down an AI-powered recruiting tool after the algorithm was found to systematically favor male candidates.
Now, government contracts are mandating more transparency in algorithms. With enterprises facing mounting regulatory pressure, the stage is set for explainable AI’s (XAI’s) breakout moment. In this report, we illuminate the AI black box and examine the key aspects of the explainable AI market.
TABLE OF CONTENTS
- What is XAI, and why is it important for enterprises?
- Who are the leading explainable AI players?
- How can enterprises implement XAI?
- What do enterprises need to consider for adoption?
- What’s next?