New White Paper: SAGE Surveys Librarians on Improving the Discoverability of Scholarly Content
The results of a survey of 252 librarians were released today in a new SAGE White Paper (the third in an annual series) that you can access/download here.
The full title of the white paper is:
Improving the Discoverability of Scholarly Content: Academic Library Priorities and Perspectives
by Lettie Y. Conrad and Elisabeth Leonard.
In a blog post, Conrad shares some highlights from the paper.
She writes:
- The highest potential for increasing discovery was seen to be indexing in a wide range of search engines. Librarians encourage publishers to “get their metadata out there everywhere,” because there is so much at stake if a resource goes unused.
- The library catalog remains a priority discovery channel for librarians – who are in need of timely, high-quality MARC records from publishers at the point of sale.
- When asked to prioritize publisher efforts, librarians ranked the wide availability of metadata as the most important, followed by collaboration with library systems, standards compliance, and clear statement of content index coverage (transparency).
- These issues are serious enough that a lack of publisher metadata has prevented almost 33% of librarians from purchasing/subscribing to scholarly resources – the same number that reported deciding against resources due to a lack of publisher transparency about their metadata.
Direct to Full Text White Paper (18 pages; PDF)
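To make the MARC records point in the highlights above concrete, here is a minimal sketch of the kind of machine-readable bibliographic record librarians want from publishers at the point of sale, written with the pymarc Python library. The field choices and the placeholder URL are illustrative assumptions on my part, not anything specified in the white paper.

```python
# A minimal sketch of a publisher-supplied MARC bibliographic record,
# using the pymarc library (pip install pymarc; assumes pymarc 5.x).
# Field choices and the URL are illustrative, not from the white paper.
from pymarc import Field, Indicators, Record, Subfield

record = Record()

# 245: title statement
record.add_field(
    Field(
        tag="245",
        indicators=Indicators("0", "0"),
        subfields=[
            Subfield(code="a", value="Improving the discoverability of scholarly content :"),
            Subfield(code="b", value="academic library priorities and perspectives /"),
            Subfield(code="c", value="Lettie Y. Conrad and Elisabeth Leonard."),
        ],
    )
)

# 856: electronic location -- the link a discovery layer resolves to full text
record.add_field(
    Field(
        tag="856",
        indicators=Indicators("4", "0"),
        subfields=[Subfield(code="u", value="https://example.org/whitepaper.pdf")],
    )
)

print(record)  # mnemonic (human-readable) form; record.as_marc() gives binary MARC
```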
Comments from Gary Price, infoDOCKET Founder and Editor:
There are many important issues discussed in this white paper. In this post I would like to address one of them, raised in the report section “What Is Not Discoverable.”
When a user turns to a library’s discovery service, a great deal of potentially useful material is either unavailable in a timely or easy manner (or both) or will never be accessible via the discovery layer at all.
If a user finds this to be the case and leaves without what they hoped to find, the odds that they will not come back and use the service again increase. Why should they waste their time? What they’re already using (likely Google) is good enough, and it’s fast. Some of this might be due to the searcher not utilizing the technology to its fullest extent, but that’s another issue.
What I am getting at is that high-value reference material, like primary documents, speech transcripts, audio files, government and non-profit full text, reports, etc., is not being made available in an expedited manner, or at all.
If a researcher reads about a new government report in the news, a link to the full-text copy should be available within a few days of its release. If they want to review the transcript of a speech given three days earlier by the Prime Minister of the UK, it should be there. Print versions of publications (which can be found via a discovery layer) often publish lists and rankings, but the web version (if available) might add extra value by allowing for downloads, different sorts, etc. Are we making them easy to access?
A discovery service should also provide direct links to high quality open web resources and specialty databases based on the query.
For example, if the query indicates the searcher is looking for material about mental health standards around the world, a direct link to MiNDbank from the WHO would be of value.
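As a rough sketch of how query-based suggestions might work, here is a minimal keyword-to-resource router in Python. The keyword lists, resource names, and URLs are my own illustrative assumptions; a real discovery layer would use far richer query analysis than simple substring matching.

```python
# A minimal sketch of keyword-based query routing to curated specialty
# resources. The keyword lists and URLs below are illustrative only.

# Human-curated map: topical keywords -> (resource name, URL)
SPECIALTY_RESOURCES = {
    ("mental health", "psychiatric", "who standards"): (
        "WHO MiNDbank", "https://www.mindbank.info/"),
    ("speech", "transcript", "prime minister"): (
        "UK government announcements",
        "https://www.gov.uk/search/news-and-communications"),
}

def suggest_resources(query: str) -> list[tuple[str, str]]:
    """Return curated (name, url) pairs whose keywords appear in the query."""
    q = query.lower()
    return [
        resource
        for keywords, resource in SPECIALTY_RESOURCES.items()
        if any(keyword in q for keyword in keywords)
    ]

# A query about mental health standards surfaces a direct link to MiNDbank.
print(suggest_resources("mental health standards around the world"))
# [('WHO MiNDbank', 'https://www.mindbank.info/')]
```

Even something this simple makes the point: the value is in the human-maintained mapping, not in the code.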
Doing this would not only give users access to potentially unknown (and often free) resources, it could also bring greater visibility to many of the digitization projects going on around the world. Without awareness and usage, sustainability will always be an issue.
Finally, if you’re asking, “Wait, isn’t this what Google does?”, the answer is both yes and no.
Yes, in the sense that links to many of these resources and even specific items are indexed by Google.
However, accessing the types of resources I’m describing here is often challenging, especially given typical search skills and the fact that not every useful resource can be found at the top of a web search results page.
These days, the Invisible Web (or Deep Web) that I wrote about with Chris Sherman 14 years ago is every result below number six or seven on a Google search results page. Sad but true.
Libraries and the services we offer should expand on one of Ranganathan’s Five Laws of Library Science.
“Save the time of the reader” should be expanded to “Save the time of the reader and/or electronic database user.”
Bottom Line: We don’t need to recreate Google (not even close), but building specialty collections of high-quality open web material and making sure they’re available in a discovery layer is needed. This is another example of where human collection building and curation add a great deal of value.
Filed under: Academic Libraries, Companies (Publishers/Vendors), Digital Preservation, Journal Articles, Libraries, News, Patrons and Users, Publishing, Reports, Resources

About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.