November 23, 2020

A Brief Look: “Archiving the BBC’s Website and Social Media Output”

From the BBC Internet Blog:

In terms of capturing pages from bbc.co.uk for our web archive collections, we currently have a three-point approach:

  • WARC Web Crawling: This form of web archiving downloads and preserves pages in the international standard WARC (Web ARChive) format. These WARC files can then be brought back to life in replay software, allowing users and researchers to view and interact with the website as if it were ‘live’ (clicking links and browsing). A web crawl captures a point in time, and currently we aim to do a high-quality crawl of selected parts of the BBC website once a year.
  • PDF Web Crawling: In addition to the WARC files, we ensure each page captured also has a PDF, so we are not solely bound to the WARC technology. That way we can also share PDFs for internal research, and because PDF is a universal format for viewing documents, future preservation is more straightforward.
  • Screencasting: Lastly, we take a screencast of some of our websites, especially when they have been redesigned, to capture the look and feel of the site in a video (a bit like a software tutorial or a computer game walkthrough you often find on YouTube). This consists of someone recording their screen while browsing part of the BBC website; the walkthrough is then archived alongside our other AV archive collections. Essentially it’s a historical record of how the site behaved.
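For readers curious what a WARC capture looks like in practice, here is a minimal sketch using the open-source Python library warcio. The BBC post does not say which tooling it uses (large-scale crawls are typically run with dedicated crawlers such as Heritrix), and the URL and filename below are illustrative only:

    # Minimal sketch: capture a single page into a WARC file with the
    # third-party warcio library (pip install warcio requests).
    # This is an illustration, not the BBC's actual pipeline.
    from warcio.capture_http import capture_http
    import requests  # note: requests must be imported after capture_http

    # Every HTTP request/response made inside this block is written,
    # headers and all, as WARC records to the output file.
    with capture_http('bbc-homepage.warc.gz'):
        requests.get('https://www.bbc.co.uk/')

The resulting .warc.gz file can then be loaded into replay software such as pywb, which is what "brought back to life" refers to above.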
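The PDF copies can be produced by rendering each captured page in a headless browser. Again, the BBC does not describe its actual PDF workflow; the sketch below is a hypothetical example using the Playwright library (pip install playwright, then run "playwright install chromium"):

    # Hypothetical sketch: render a page to PDF with headless Chromium
    # via Playwright. Not the BBC's actual tooling.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()   # PDF export requires headless Chromium
        page = browser.new_page()
        page.goto('https://www.bbc.co.uk/news')
        page.pdf(path='bbc-news.pdf', format='A4')  # save the rendered page as a PDF
        browser.close()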

Read the Complete Article

About Gary Price

Gary Price (gprice@mediasourceinc.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C., metro area. Before launching INFOdocket, Price and Shirl Kennedy were the founders and senior editors of ResourceShelf and DocuTicker for 10 years. From 2006 to 2009 he was Director of Online Information Services at Ask.com, and he is currently a contributing editor at Search Engine Land.
