
Keys To Managing The “Perplexing” Risk Of AI


October 6, 2021

Federal agencies are gearing up, and regulation of AI is inevitable, according to this post from law firm Crowell & Moring. Numerous agencies stayed actively engaged on the topic during the normally quiescent bureaucratic summer in DC. They include the National Highway Traffic Safety Administration (evaluating AI used to steer vehicles), the SEC (examining the "digital engagement practices" that wealth managers and other financial firms use to drive customers toward higher-revenue products), and other regulators considering new AI standards for financial institutions.

AI manifests risk in perplexing ways, the authors say, but several bodies are offering practical guidance, complete with suggested risk-management frameworks. These include the GAO, the Office of the Director of National Intelligence (with its "Principles of Artificial Intelligence Ethics for the Intelligence Community"), and the European Commission's High-Level Expert Group on Artificial Intelligence (with its "Ethics Guidelines for Trustworthy AI"). Drawing on these and other published materials, the writers formulate a seven-step approach for organizations "to extend their existing risk-management frameworks for humans to AI."

The list begins with a focus on initial design and includes a kind of due diligence (with performance reviews) similar to what would be applied to a new employee or third-party vendor, along with cybersecurity measures and insurance coverage. It concludes with "transfer, termination, and retirement procedures." In short, the writers say, the best way to create an effective risk-management strategy for AI is to treat it like a human.

