Tackling Risk and Bias in AI

May 19, 2022

NIST has published the initial draft of its AI Risk Management Framework (AI RMF), which delineates the risks inherent in the design, development, use and evaluation of AI systems. NIST will produce a second draft for comment and host a third workshop before publishing AI RMF 1.0 in January 2023.

As the harmful effects of bias in AI become more apparent, identifying the sources of those biases and managing them becomes a priority. NIST’s updated report, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, emphasizes the importance of looking beyond data sets and machine learning processes to the broader societal context in which AI systems operate.

The FTC has also been studying the impact of AI and the potential roles for policymakers. In December 2021, it published a notice that it was “considering initiating a rulemaking under Section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”  

State legislatures and city councils have been stepping up their efforts as well. The Colorado and Rhode Island General Assemblies are currently considering bills that would restrict insurers’ use of external consumer data, algorithms and predictive models that result in unfair discrimination. The New York City and Detroit City Councils have passed regulations to mitigate the biases and discriminatory practices of algorithms.
