Report: AI Chatbots Can Be Exploited to Extract More Personal Information
AI chatbots that provide human-like interactions are used by millions of people every day. However, new research reveals that they can be easily manipulated to encourage users to disclose even more personal information.
Intentionally malicious AI chatbots can influence users to reveal up to 12.5 times more of their personal information, a new study by King’s College London has found.
For the first time, the research shows how conversational AIs (CAIs) programmed to deliberately extract data can successfully encourage users to reveal private information using known prompting techniques and psychological tools.
[Clip]
The researchers are keen to emphasise that manipulating these models is not a difficult process. Many companies allow access to the base models underpinning their CAIs, and people can easily adjust them without much programming knowledge or experience.
[Clip]
The study is being presented for the first time at the 34th USENIX Security Symposium in Seattle.
Learn More: Read the Complete Summary Article
See Also: Malicious LLM-Based Conversational AI Makes Users Reveal Personal Information (via KCL)
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.


