Harvard Business Review Article: “AI’s Trust Problem”
From a Harvard Business Review Article by Bhaskar Chakravorti:
With tens of billions invested in AI last year and leading players such as OpenAI looking for trillions more, the tech industry is racing to add to the pileup of generative AI models. The goal is to steadily demonstrate better performance and, in doing so, close the gap between what humans can do and what can be accomplished with AI.
There is another gulf, however, that ought to be given equal, if not higher, priority when thinking about these new tools and systems: the AI trust gap. This gap is closed when a person is willing to entrust a machine to do a job that otherwise would have been entrusted to qualified humans. It is essential to invest in analyzing this second, under-appreciated gap — and in what can be done about it — if AI is to be adopted widely.
The AI trust gap can be understood as the sum of the persistent risks (both real and perceived) associated with AI; depending on the application, some risks are more critical than others. These risks cover both predictive machine learning and generative AI. According to the Federal Trade Commission, consumers are voicing concerns about AI, while businesses are worried about several near- to long-term issues. Consider 12 AI risks that are among the most commonly cited across both groups:
- Disinformation
- Safety and security
- The black box problem
- Ethical concerns
- Bias
- Instability
- Hallucinations in LLMs
- Unknown unknowns
- Job loss and social inequalities
- Environmental impact
- Industry concentration
- State overreach
Taken together, the cumulative effect of these risks contributes to broad public skepticism and business concerns about AI deployment. This, in turn, deters adoption. For instance, radiologists hesitate to embrace AI when the black box nature of the technology prevents a clear understanding of how the algorithm makes decisions on medical image segmentation, survival analysis, and prognosis. Ensuring a level of transparency in the algorithmic decision-making process is critical for radiologists to feel they are meeting their professional obligations responsibly — but that necessary transparency is still a long way off. And the black box problem is just one of many risks to worry about. Given similar issues across different applications and industries, we should expect the AI trust gap to be permanent, even as we get better at reducing the risks.
Learn More: Read the Complete Article (about 4,600 words)
Filed under: News
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.