As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the search for the sources of these biases: looking beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how the technology is developed.
According to NIST’s Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.
Source: NIST Special Publication 1270 (DOI: 10.6028/NIST.SP.1270)
“Context is everything,” said Schwartz, principal investigator for AI bias and one of the report’s authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”
Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.
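To make the underrepresentation point concrete, the short sketch below (illustrative only, not from the NIST report) shows two checks a practitioner might run on training data: each demographic group's share of the dataset, and a simple demographic-parity gap in favorable-decision rates. The records and field names ("group", "approved") are hypothetical.

```python
# Minimal, self-contained sketch of two common computational-bias checks.
from collections import Counter

def representation(records, group_key):
    """Share of each demographic group in the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def demographic_parity_gap(records, group_key, decision_key):
    """Favorable-decision rate per group, and the largest gap between groups."""
    rates = {}
    for group in {r[group_key] for r in records}:
        members = [r for r in records if r[group_key] == group]
        rates[group] = sum(r[decision_key] for r in members) / len(members)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-application records for illustration only.
applications = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]

print(representation(applications, "group"))  # e.g. group B is underrepresented
gap, rates = demographic_parity_gap(applications, "group", "approved")
print(rates, gap)  # approval rate per group and the parity gap between them
```

A large gap on either check is a signal to investigate, not proof of harm; as the report stresses, such statistics capture only the computational portion of the problem.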
A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person’s neighborhood of residence influencing how likely authorities would consider the person to be a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture — especially when explicit guidance is lacking for addressing the risks associated with using AI systems.