
Proposed Framework To Address AI Risk Released By NIST


February 13, 2023

The National Institute of Standards and Technology (NIST) has released the initial version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). The document is one of the first attempts anywhere to codify a “risk framework” for this new field, and, according to a post from law firm Hogan Lovells, it could prove influential in the development of global standards for AI governance. It could be especially relevant to companies that want a jump start on developing internal controls to address pending regulations, such as the EU’s proposed AI Act. The framework defines what it designates the “seven characteristics of trustworthy AI systems,” which include fairness and effective management of potential bias, explainability, and, of course, security.

The NIST document includes an entire section on measuring AI risk. It addresses the difficulty presented by what is probably the heart of the issue at this point, “inscrutability,” as well as the problems inherent in the fact that risks determined in a laboratory “may differ from risks that emerge in operational, real-world settings.”

The Hogan Lovells post also notes that the NIST document could become the starting point for many “sector-specific” management frameworks in the U.S., and it suggests some steps companies can take based on that likelihood. -Today’s General Counsel/DR

