GC Must Warn Boards Of AI Risks
November 27, 2023
There are companies investing hundreds of millions of dollars or more into generative AI. They may be developing their own technology and training their own AI models, or leveraging third-party tools.
Either way, writes James Gatto on the SheppardMullin site, there are significant legal issues and business risks that need to be considered as part of a company’s fiduciary obligations and corporate governance.
Boards must be made aware, for example, of privacy, discrimination, and bias issues (vpnMentor reports that Microsoft’s chatbot, Tay, was withdrawn after only sixteen hours because Twitter users directed offensively racist and sexist messages at it, which the bot learned to repeat).
The FTC is on the watch for these issues, and GCs should ensure their companies develop proper safeguards and include them in policies and governance on the use of generative AI.
Before investing in a generative AI system, boards must be confident that the data used to train it has been obtained properly, and will be used as intended. Developers are increasingly using AI code generators, which use AI models to auto-complete or suggest code based on developer inputs.
Legal issues can arise when these tools are used. The most serious is called “tainting,” which can severely devalue software. AI code generators are typically trained on open-source software, and most open-source licenses permit the user to copy, modify, and redistribute the code, subject to conditions.
The conditions can range from mere copyright compliance to more substantive requirements. Among the latter are provisions that any software that includes or is derived from the open source must be licensed under the open source license, and the source code for that software must be made freely available.
This is tainting in its purest form: it permits others to copy, modify, and redistribute the software for free.