The most prominent use of AI, as revealed by both our Claude.ai analysis and our qualitative research with Northeastern, was for curriculum development. Our Claude.ai analysis also surfaced academic research and assessing student performance as the second and third most common uses.
In our surveys, Northeastern faculty reported that another common use was AI for their own learning (29% of their AI time on average). However, our Claude.ai analysis did not capture this use, given the filtering mechanism and the difficulty of distinguishing between student and educator usage in learning conversations.
Some other particularly interesting uses we discovered in the Claude.ai data include:
Creating mock legal scenarios for educational simulations;
Developing vocational education and workforce training content;
Drafting recommendation letters for academic or professional applications;
Creating meeting agendas and related administrative documents.
Why faculty use AI in these cases
Our qualitative research with Northeastern faculty hints at why educators often gravitate towards these common AI uses:
Automation of a tedious task (“It takes care of the tedious tasks”; helps with “rote portions of fundraising”);
Collaborative thought partner (“AI can find effective ways to explain concepts to students that I had not thought of myself”);
Personalized learning experiences for students (“AI is useful for giving students and me individualized, interactive learning experiences beyond what one instructor could provide”).
[Figure: Key creations built by educators with the help of Claude.ai, as surfaced by our automated analysis research tool. Source: Anthropic]
It’s also changing what professors are teaching. In coding, for example, according to one professor, “AI-based coding has completely revolutionized the analytics teaching/learning experience. Instead of debugging commas and semicolons, we can spend our time talking about the concepts around the application of analytics in business.”
More broadly, the ability to evaluate AI-generated content for accuracy is becoming increasingly important. “The challenge is [that] with the amount of AI generation increasing, it becomes increasingly overwhelming for humans to validate and stay on top,” one professor wrote. Professors are keen to help their students build enough expertise in a subject area to have this discernment.
Assessments are also starting to look different. While student cheating and cognitive offloading remain concerns, some educators are rethinking their assessments.
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area.
He earned his MLIS degree from Wayne State University in Detroit.
Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009, he was Director of Online Information Services at Ask.com.