Heaps' law


File:Heaps' Law on "War and Peace".svg
Verification of Heaps' law on War and Peace, as well as on a randomly shuffled version of it. Both cases fit Heaps' law well, with very similar exponents β but different values of K.
File:Heaps law plot.png
A schematic Heaps'-law plot. The x-axis represents the text size, and the y-axis represents the number of distinct vocabulary elements present in the text. Compare the scales of the two axes.

In linguistics, Heaps' law (also called Herdan's law) is an empirical law describing the number of distinct words in a document (or set of documents) as a function of the document length (the so-called type–token relation). It can be formulated as

<math> V_R(n) = Kn^\beta </math>

where V_R(n) is the number of distinct words in an instance text of size n, and K and β are free parameters determined empirically. With English text corpora, K is typically between 10 and 100, and β between 0.4 and 0.6.
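As an illustration, the type–token curve of a text and a least-squares fit of K and β on the log–log scale can be sketched in Python. This is a minimal sketch: the tokenizer and the checkpoint step size below are illustrative assumptions, not part of the law itself.

```python
import math
import re

def vocabulary_growth(text, step=1000):
    """Record (tokens seen, distinct words seen) after every `step` tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenizer (assumption)
    seen, curve = set(), []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:
            curve.append((i, len(seen)))
    return curve

def heaps_fit(curve):
    """Least-squares fit of log V = log K + beta * log n over the curve."""
    xs = [math.log(n) for n, _ in curve]
    ys = [math.log(v) for _, v in curve]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    K = math.exp(ybar - beta * xbar)
    return K, beta
```

Because the fit is linear in log–log coordinates, a curve that follows V_R(n) = Kn^β exactly is recovered with the exact K and β.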

The law is frequently attributed to Harold Stanley Heaps, but was originally discovered by Gustav Herdan.[1] Under mild assumptions, the Herdan–Heaps law is asymptotically equivalent to Zipf's law concerning the frequencies of individual words within a text.[2] This is a consequence of the fact that the type–token relation (in general) of a homogeneous text can be derived from the distribution of its types.[3]

Empirically, Heaps' law is preserved even when the document is randomly shuffled,[4] meaning that it does not depend on the ordering of words, but only on their frequencies.[5] This is used as evidence for deriving Heaps' law from Zipf's law.[4]
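The shuffle invariance can be checked on a synthetic corpus. In the sketch below, the Zipfian weights, vocabulary size, and stream length are arbitrary choices made for illustration; the point is only that the fitted exponent is essentially unchanged after shuffling.

```python
import math
import random

def type_token_curve(tokens, step):
    """Distinct-type count recorded after every `step` tokens."""
    seen, curve = set(), []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:
            curve.append((i, len(seen)))
    return curve

def fit_beta(curve):
    """Slope of log V against log n, i.e. the Heaps exponent beta."""
    xs = [math.log(n) for n, _ in curve]
    ys = [math.log(v) for _, v in curve]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

rng = random.Random(0)
# Synthetic "document": word ranks drawn with P(rank r) proportional to 1/r.
weights = [1.0 / r for r in range(1, 50001)]
doc = rng.choices(range(50000), weights=weights, k=40000)

shuffled = doc[:]
rng.shuffle(shuffled)

beta_original = fit_beta(type_token_curve(doc, 2000))
beta_shuffled = fit_beta(type_token_curve(shuffled, 2000))
# The two exponents come out nearly equal: only word frequencies matter.
```

Note that a real text is "burstier" than this independent-draws stream, so for natural corpora the comparison between original and shuffled order is more informative than it is here.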

Heaps' law means that as more instance text is gathered, there will be diminishing returns in terms of discovery of the full vocabulary from which the distinct terms are drawn.

Deviations from Heaps' law, as typically observed in English text corpora, have been identified in corpora generated with large language models.[6]

Heaps' law also applies to situations in which the "vocabulary" is just some set of distinct types which are attributes of some collection of objects. For example, the objects could be people, and the types could be their countries of origin. If persons are selected randomly (that is, not selected based on country of origin), then Heaps' law says we will quickly have representatives from most countries (in proportion to their population), but it will become increasingly difficult to cover the entire set of countries by continuing this method of sampling. Heaps' law has also been observed in single-cell transcriptomes,[7] considering genes as the distinct objects in the "vocabulary".
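The country-sampling analogy can be simulated directly. The populations below are a made-up, Zipf-like skewed distribution chosen only to exhibit the diminishing returns; the country names are hypothetical.

```python
import random

def distinct_countries_curve(populations, n_samples, seed=0):
    """Draw people with probability proportional to country population and
    record how many distinct countries have appeared after each draw."""
    rng = random.Random(seed)
    countries = list(populations)
    weights = [populations[c] for c in countries]
    draws = rng.choices(countries, weights=weights, k=n_samples)
    seen, curve = set(), []
    for person_country in draws:
        seen.add(person_country)
        curve.append(len(seen))
    return curve

# Hypothetical skewed populations: country i gets weight 1/i.
populations = {f"country_{i}": 1.0 / i for i in range(1, 201)}
curve = distinct_countries_curve(populations, 5000)

first_half_gain = curve[2499] - curve[0]
second_half_gain = curve[4999] - curve[2499]
# Most countries appear early; later samples add few new ones.
```

The curve is non-decreasing and bounded by the number of countries, and almost all of its growth happens in the first half of the sampling, which is the diminishing-returns behaviour the paragraph above describes.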
