Common Crawl



Common Crawl is a 501(c)(3) nonprofit organization that crawls the web and freely provides its archives and datasets to the public.[1][2] Common Crawl's web archive consists of petabytes of data collected since 2008.[3] It generally completes crawls every month.[4]

Common Crawl was founded by Gil Elbaz.[5] Advisors to the non-profit include Peter Norvig and Joi Ito.[6] The organization's crawlers respect nofollow and robots.txt policies. Open-source code for processing Common Crawl's dataset is publicly available.
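Respecting robots.txt means checking a site's published rules before fetching each URL. A minimal sketch of that check using Python's standard-library `urllib.robotparser` is shown below; the user-agent token "CCBot" is Common Crawl's published crawler name, but the robots.txt rules here are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks Common Crawl's bot ("CCBot") from /private/,
# allows everyone else everywhere. These rules are made up for the example.
robots_txt = """\
User-agent: CCBot
Disallow: /private/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A polite crawler consults the parser before every fetch.
print(parser.can_fetch("CCBot", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("CCBot", "https://example.com/public/page.html"))   # True
```

In a real crawler the robots.txt would be fetched from each host (e.g. with `parser.set_url(...)` and `parser.read()`) and cached, rather than embedded as a string.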

The Common Crawl dataset includes copyrighted work and is distributed from the US under fair-use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other legal jurisdictions.[7]

As of March 2023, in the most recent version of the Common Crawl dataset, 46% of documents had English as their primary language (followed by German, Russian, Japanese, French, Spanish and Chinese, all below 6%).[8]

History

Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012.[9]

The organization began releasing metadata files and the text output of the crawlers alongside .arc files in July of that year.[10] Common Crawl's archives had only included .arc files previously.[10]

In December 2012, blekko donated to Common Crawl search engine metadata blekko had gathered from crawls it conducted from February to October 2012.[11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO."[11]

In 2013, Common Crawl began using the Apache Software Foundation's Nutch web crawler instead of a custom crawler.[12] Common Crawl switched from .arc files to .warc files with its November 2013 crawl.[13]
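The WARC format that replaced ARC stores each capture as a record: a version line, a block of `Name: value` headers, a blank line, then the content block. The sketch below parses a minimal, hand-written record with only the standard library; the record itself is invented for illustration and is not taken from a real crawl:

```python
# A minimal, illustrative WARC record: version line, headers, blank line, body.
record = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: https://example.com/\r\n"
    "WARC-Date: 2013-11-01T00:00:00Z\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, world!"
)

# Split the header block from the content block at the first blank line.
header_block, _, body = record.partition("\r\n\r\n")
lines = header_block.split("\r\n")
version = lines[0]                                    # e.g. "WARC/1.0"
headers = dict(line.split(": ", 1) for line in lines[1:])

print(version)                                        # WARC/1.0
print(headers["WARC-Target-URI"])                     # https://example.com/
print(body)                                           # Hello, world!
```

Real Common Crawl archives are gzip-compressed concatenations of such records; in practice they are read with a dedicated library (for example, the third-party `warcio` package) rather than parsed by hand.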

A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020.[14]

Timeline of Common Crawl data

The following data have been collected from the official Common Crawl Blog.[15]

| Crawl date | Size (TiB) | Billions of pages | Comments |
|---|---|---|---|
| June 2023 | 390 | 3.1 | Crawl conducted from May 27 to June 11, 2023 |
| April 2023 | 400 | 3.1 | Crawl conducted from March 20 to April 2, 2023 |
| February 2023 | 400 | 3.15 | Crawl conducted from January 26 to February 9, 2023 |
| December 2022 | 420 | 3.35 | Crawl conducted from November 26 to December 10, 2022 |
| October 2022 | 380 | 3.15 | Crawl conducted in September and October 2022 |
| April 2021 | 320 | 3.1 | |
| November 2018 | 220 | 2.6 | |
| October 2018 | 240 | 3.0 | |
| September 2018 | 220 | 2.8 | |
| August 2018 | 220 | 2.65 | |
| July 2018 | 255 | 3.25 | |
| June 2018 | 235 | 3.05 | |
| May 2018 | 215 | 2.75 | |
| April 2018 | 230 | 3.1 | |
| March 2018 | 250 | 3.2 | |
| February 2018 | 270 | 3.4 | |
| January 2018 | 270 | 3.4 | |
| December 2017 | 240 | 2.9 | |
| November 2017 | 260 | 3.2 | |
| October 2017 | 300 | 3.65 | |
| September 2017 | 250 | 3.01 | |
| August 2017 | 280 | 3.28 | |
| July 2017 | 240 | 2.89 | |
| June 2017 | 260 | 3.16 | |
| May 2017 | 250 | 2.96 | |
| April 2017 | 250 | 2.94 | |
| March 2017 | 250 | 3.07 | |
| February 2017 | 250 | 3.08 | |
| January 2017 | 250 | 3.14 | |
| December 2016 | | 2.85 | |
| October 2016 | | 3.25 | |
| September 2016 | | 1.72 | |
| August 2016 | | 1.61 | |
| July 2016 | | 1.73 | |
| June 2016 | | 1.23 | |
| May 2016 | | 1.46 | |
| April 2016 | | 1.33 | |
| February 2016 | | 1.73 | |
| November 2015 | 151 | 1.82 | |
| September 2015 | 106 | 1.32 | |
| August 2015 | 149 | 1.84 | |
| July 2015 | 145 | 1.81 | |
| June 2015 | 131 | 1.67 | |
| May 2015 | 159 | 2.05 | |
| April 2015 | 168 | 2.11 | |
| March 2015 | 124 | 1.64 | |
| February 2015 | 145 | 1.9 | |
| January 2015 | 139 | 1.82 | |
| December 2014 | 160 | 2.08 | |
| November 2014 | 135 | 1.95 | |
| October 2014 | 254 | 3.7 | |
| September 2014 | 220 | 2.8 | |
| August 2014 | 200 | 2.8 | |
| July 2014 | 266 | 3.6 | |
| April 2014 | 183 | 2.6 | |
| March 2014 | 223 | 2.8 | First Nutch crawl |
| January 2014 | 148 | 2.3 | Crawls performed monthly |
| November 2013 | 102 | 2 | Data in WARC file format |
| July 2012 | | | Data in ARC file format |
| January 2012 | | | Public Data Set of Amazon Web Services |
| November 2011 | 40 | 5 | First availability on Amazon |

Norvig Web Data Science Award

In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in Benelux.[16][17] The award is named for Peter Norvig, who also chairs the judging committee for the award.[16]

Google Colossal Clean Crawled Corpus

Google's version of the Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short.[18][19]

References

