Tackling Risk and Bias in AI
May 19, 2022

NIST has published its initial draft of the AI Risk Management Framework (AI RMF), which delineates the risks in the design, development, use and evaluation of AI systems. NIST will produce a second draft for comment and host a third workshop before publishing AI RMF 1.0 in January 2023.
As the harmful effects of bias in AI become more apparent, identifying the sources of those biases and managing them becomes a priority. NIST’s updated report, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, emphasizes the importance of tackling bias not only in data sets and machine learning processes but also in the broader social context.
The FTC has also been studying the impact of AI and the potential roles for policymakers. In December 2021, it published a notice that it was “considering initiating a rulemaking under Section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”
State legislatures and city councils have been stepping up their efforts as well. The Colorado and Rhode Island General Assemblies are currently considering bills that would restrict insurers’ use of external consumer data, algorithms and predictive models that result in unfair discrimination. And the New York City and Detroit City Councils have passed regulations to mitigate bias and discriminatory practices in algorithms.