Companies Need Explainable AI


September 15, 2022

The only way to gather insights from the vast amount of data that exists today is through artificial intelligence. AI identifies complex mathematical patterns across thousands of variables and the relationships among them, and the insights it surfaces help companies make predictions. Nevertheless, AI can be a black box, and we are often unable to answer crucial questions about its operations. While we understand its inputs (variables) and outputs (analyses or predictions), we may not be able to answer questions such as: Is it making reliable predictions? Is it making those predictions on solid, justified grounds? What we need is explainable AI, that is, AI whose predictions can be explained.
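The article names no specific method, but one common way to peer inside a black-box model is permutation importance: shuffle one input variable at a time and measure how much prediction quality drops. The following is a minimal sketch, assuming a scikit-learn workflow with synthetic data standing in for a company's real variables.

```python
# Minimal sketch of one explainability technique (permutation importance),
# assuming a scikit-learn workflow; the data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a company's thousands of variables.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each variable in turn and record how much accuracy drops.
# Large drops flag the variables the model actually relies on --
# one window into an otherwise opaque prediction.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Output like this lets a reviewer check whether the model's most influential variables are legitimate predictors or, say, proxies for a protected attribute.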

In general, companies need explainability in AI when (1) regulation requires it, (2) it’s important for understanding how to use the tool, (3) it could improve the system, or (4) it can help determine fairness. Organizations should create a framework allowing them to prioritize explainability in each of their AI projects. The framework would enable data scientists to build models that work and empower executives to make informed decisions about what should be designed and when systems are reliable enough to deploy.
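To make such a framework concrete, here is a hypothetical scoring sketch built on the article's four conditions; the names, 0–3 scale, and equal weighting are illustrative assumptions, not a standard.

```python
# Hypothetical prioritization sketch using the article's four criteria;
# the scoring scale and equal weights are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    regulation_requires_it: int    # 0-3: regulatory demand for explanations
    needed_to_use_tool: int        # 0-3: users must understand outputs to act
    could_improve_system: int      # 0-3: explanations would aid debugging
    helps_determine_fairness: int  # 0-3: explanations would surface bias

    def explainability_priority(self) -> int:
        # Equal weighting is an assumption; a real framework would tune it.
        return (self.regulation_requires_it + self.needed_to_use_tool
                + self.could_improve_system + self.helps_determine_fairness)

# Illustrative projects: the ranking suggests where explainability
# work should come first.
projects = [
    AIProject("credit scoring", 3, 2, 1, 3),
    AIProject("internal ticket routing", 0, 1, 2, 0),
]
for p in sorted(projects, key=AIProject.explainability_priority, reverse=True):
    print(f"{p.name}: priority {p.explainability_priority()}")
```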
