A nonprofit called Common Crawl now operates its own Web crawler and is building a giant copy of the Web that it makes accessible to anyone. The organization offers up more than five billion Web pages, free of charge, so that researchers and entrepreneurs can try things that would otherwise be possible only for those with access to resources on the scale of Google’s.
[Clip]
Common Crawl has so far indexed more than five billion pages, adding up to 81 terabytes of data, made available through Amazon’s cloud computing service. For about $25, a programmer could set up an account with Amazon and get to work crunching Common Crawl data, says Lisa Green, Common Crawl’s director. The Internet Archive, another nonprofit, also compiles a copy of the Web and offers a service called the “Wayback Machine” that can show old versions of a particular page, but it doesn’t allow anyone to analyze all of its data at once in that way.
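To make the "get to work crunching" step concrete, here is a minimal sketch of how a programmer might begin exploring the corpus from an Amazon account. It assumes Common Crawl's present-day public S3 bucket (`commoncrawl`) and `crawl-data/` prefix layout, plus the `boto3` library; those specifics are assumptions for illustration, not details from the article.

```python
# Minimal sketch: list a few Common Crawl files in Amazon S3 without
# AWS credentials. The "commoncrawl" bucket name and "crawl-data/"
# prefix are assumptions about the current public hosting.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client, since the corpus is publicly readable.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List the first few objects under the crawl-data prefix.
resp = s3.list_objects_v2(Bucket="commoncrawl", Prefix="crawl-data/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

From there, the actual number-crunching would typically run inside Amazon's cloud, next to the data, which is what keeps the entry cost in the tens of dollars rather than the cost of crawling the Web oneself.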
Common Crawl has already inspired or helped launch several new Web startups. TinEye, a “reverse” search engine that finds images similar to one provided by the user, used early Common Crawl data to get started. One programmer’s personal project, which used Common Crawl data to measure how many of the Web’s pages link to Facebook (about 22 percent, he concluded), led to funding for a startup, Lucky Oyster, built around helping people find useful information in their social data.
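As a rough sketch of the kind of measurement that Facebook-link project performed, one could scan a sample of crawled pages and count how many contain a link to facebook.com. This is not the programmer's actual code; it assumes the `warcio` library and a WARC file already downloaded from the corpus (the local file name is hypothetical).

```python
# Sketch: over a sample of crawled pages, what fraction link to
# facebook.com? Not the original project's code; assumes warcio and
# a locally downloaded WARC file (path below is hypothetical).
import re
from warcio.archiveiterator import ArchiveIterator

FACEBOOK_LINK = re.compile(rb'href="https?://(?:www\.)?facebook\.com', re.IGNORECASE)

total = 0
with_facebook = 0
with open("sample.warc.gz", "rb") as stream:  # hypothetical local file
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":  # only count fetched pages
            continue
        total += 1
        if FACEBOOK_LINK.search(record.content_stream().read()):
            with_facebook += 1

if total:
    print(f"{with_facebook}/{total} pages "
          f"({100 * with_facebook / total:.1f}%) link to Facebook")
```

Run over enough WARC files, a simple counter like this yields the sort of Web-wide percentage the article describes, which is exactly the kind of analysis that was previously practical only for large search companies.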
Read the Complete Article
Learn More: Visit the Common Crawl Web Site and Take a Look at the Winners of Common Crawl’s Code Contest