Forrester recommends that business leaders looking to enhance their enterprise AI adoption pull on the seven levers of trusted AI. One of these levers is transparency, which Forrester defines as “the perception that an AI system is leading to decisions in an open and traceable way and is making every effort to share verifiable information about how it operates.”
Now, it may seem as though the need for AI explainability is a thing of the past. Enterprises are rapidly adopting generative AI large language models despite the fact that they are inherently opaque. But generative AI rarely makes critical operational or customer decisions; more explainable predictive AI algorithms such as random forests and neural networks still power most critical business decision-making and will continue to do so. Moreover, emerging regulations such as the anticipated EU AI Act will require businesses using AI (generative or otherwise) to provide explainability commensurate with their use cases’ risk levels, with penalties of up to 7% of global annual turnover.
And we can’t forget that a lack of transparency is a driving factor behind distrust in artificial intelligence systems. Users criticized OpenAI for not disclosing the training details of its GPT-4 model. That opacity can breed wariness and weak adoption among business users who are uncertain how AI systems arrive at their conclusions. Poor transparency can also be a source of lawsuits, with consumers, companies, and content creators all seeking to know how their data is being used or how outcomes are determined.
For business leaders looking to create transparency in their AI models, explainable AI technologies have emerged as a way to engender stakeholder trust. Explainable AI techniques span several approaches, from improving model transparency to interpreting otherwise opaque models. Business users can access these techniques in a variety of ways, such as through responsible AI solutions, AI service providers, and open-source models. Data scientists use these solutions to understand how AI models generate their outputs, ultimately creating trust with business stakeholders by ensuring that models deliver their recommendations for the right reasons. Explainable AI technologies also help produce the documentation regulators seek when auditing an organization’s AI.
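To make this concrete, here is a minimal sketch of one common explainability technique: post-hoc feature attribution applied to a random forest. The dataset, the scikit-learn model, and the open-source shap library below are illustrative assumptions on our part, not a specific vendor solution; in practice, teams often reach the same capability through the responsible AI tooling mentioned above.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an opaque-but-common predictive model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: SHAP attributes each prediction to the input features,
# giving stakeholders a traceable "why" for every individual decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, classifiers return either a list (one array
# per class) or a (samples, features, classes) array; keep the positive class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[:, :, 1]

# Rank the features that drove the first test-set prediction, largest first.
first_prediction = pd.Series(shap_values[0], index=X.columns)
print(first_prediction.sort_values(key=abs, ascending=False).head(5))

# A global view for documentation: mean absolute attribution per feature.
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False).head(5))
```

The per-decision attributions are the kind of artifact a data scientist can walk a business stakeholder through, and the aggregated view feeds the documentation regulators increasingly expect.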
For these reasons, explainable AI was named one of the top 10 emerging technologies of 2023. To learn more about explainable AI, its use cases, and our forecast for its maturity, check out our emerging technology research, where we break down explainable AI as well as nine other top emerging technologies.
First published on Forrester Blog