AI Lacks Empathy, Needs Humans in the Middle

September 29, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. But it fails to capture or respond to the intangible human factors that go into real-life decision-making. The question, then, is whether AI can incorporate the subjective experience, feelings, and empathy that humans bring to decisions. Business and technical leaders need to ensure that their AI systems have the necessary checks and balances, along with consistent human oversight, so that AI remains ethical and moral. To bring greater humanity to these systems, leaders need to build an organizational culture and training programs that promote ethics in AI decisions, remove bias from the data, keep humans in the loop, validate algorithms in real-world scenarios, and teach the systems human values.

New AI systems like DALL-E, language transformers, and vision/deep learning models are coming close to matching human abilities, yet AI still has a long way to go. We still need humans in the middle. The bottom line is that AI is based on algorithms that respond to models and data. It isn't ready to assume human qualities such as empathy, ethics, and morality.
