GPT-1



File:Full GPT architecture.png
Original GPT architecture

Generative Pre-trained Transformer 1 (GPT-1) was the first of OpenAI's large language models following Google's invention of the transformer architecture in 2017.[1] In June 2018, OpenAI released a paper entitled "Improving Language Understanding by Generative Pre-Training",[2] in which they introduced that initial model along with the general concept of a generative pre-trained transformer.[3]

Up to that point, the best-performing neural NLP models primarily employed supervised learning from large amounts of manually labeled data. This reliance on supervised learning limited their use of datasets that were not well-annotated, in addition to making it prohibitively expensive and time-consuming to train extremely large models;[2][4] many languages (such as Swahili or Haitian Creole) are difficult to translate and interpret using such models due to a lack of available text for corpus-building.[4] In contrast, a GPT's "semi-supervised" approach involved two stages: an unsupervised generative "pre-training" stage in which a language modeling objective was used to set initial parameters, and a supervised discriminative "fine-tuning" stage in which these parameters were adapted to a target task.[2]
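The two-stage recipe can be illustrated with a short sketch. The snippet below shows only the unsupervised pre-training objective (next-token prediction over unlabeled text); the supervised fine-tuning stage reuses the same parameters with a task-specific loss, as sketched later in the Architecture section. The `model` callable is a hypothetical stand-in for the transformer, not OpenAI's published code.

```python
# Minimal sketch of the generative pre-training objective: predict each
# token from the tokens that precede it (standard language modeling).
# `model` is a hypothetical callable returning next-token logits.
import torch
import torch.nn.functional as F

def language_modeling_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]   # shift targets by one position
    logits = model(inputs)                                  # (batch, seq - 1, vocab_size)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```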

The use of a transformer architecture, as opposed to previous techniques involving attention-augmented RNNs, provided GPT models with a more structured memory than could be achieved through recurrent mechanisms; this resulted in "robust transfer performance across diverse tasks".[2]

Reason for choosing BookCorpus

BookCorpus was chosen as a training dataset partly because its long passages of continuous text helped the model learn to handle long-range information.[5] It contained over 7,000 unpublished fiction books from various genres. Other datasets available at the time, while larger, lacked this long-range structure, being "shuffled" at the sentence level.[2]

The BookCorpus text was cleaned by the ftfy library to standardize punctuation and whitespace and then tokenized by spaCy.[2]
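As an illustration, the following is a minimal sketch of that preprocessing step using the publicly documented ftfy and spaCy Python APIs; it approximates the described cleaning and tokenization rather than reproducing OpenAI's exact pipeline.

```python
# Clean text with ftfy (fixes mojibake, normalizes punctuation/whitespace),
# then tokenize the result with spaCy's English tokenizer.
import ftfy
import spacy

nlp = spacy.blank("en")  # tokenizer-only English pipeline

def preprocess(raw_text: str) -> list[str]:
    cleaned = ftfy.fix_text(raw_text)
    return [token.text for token in nlp(cleaned)]

print(preprocess("Itâ€™s a   long  story…"))
```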

Architecture

The GPT-1 architecture was a twelve-layer decoder-only transformer, using twelve masked self-attention heads, with 64-dimensional states each (for a total of 768). Rather than simple stochastic gradient descent, the Adam optimization algorithm was used; the learning rate was increased linearly from zero over the first 2,000 updates to a maximum of 2.5×10−4, and annealed to 0 using a cosine schedule.[2] GPT-1 has 117 million parameters.[3]
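A sketch of these hyperparameters and of the described learning-rate schedule is given below; the total number of training updates is treated as an assumed argument rather than a figure quoted here.

```python
# Sketch of the GPT-1 hyperparameters quoted above and the described
# learning-rate schedule: linear warmup from zero over the first 2,000
# updates to 2.5e-4, then cosine annealing back to 0. `total_updates`
# is an assumed argument, not a value taken from this article.
import math

CONFIG = {
    "n_layers": 12,        # decoder-only transformer blocks
    "n_heads": 12,         # masked self-attention heads per block
    "d_head": 64,          # dimensionality of each head's states
    "d_model": 12 * 64,    # 768-dimensional hidden states in total
    "max_lr": 2.5e-4,
    "warmup_updates": 2000,
}

def learning_rate(update: int, total_updates: int) -> float:
    if update < CONFIG["warmup_updates"]:
        # linear warmup from 0 to the maximum learning rate
        return CONFIG["max_lr"] * update / CONFIG["warmup_updates"]
    # cosine annealing from the maximum learning rate down to 0
    progress = (update - CONFIG["warmup_updates"]) / (total_updates - CONFIG["warmup_updates"])
    return CONFIG["max_lr"] * 0.5 * (1.0 + math.cos(math.pi * progress))
```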

While the fine-tuning was adapted to specific tasks, its pre-training was not; to perform the various tasks, only minimal changes were made to its underlying task-agnostic model architecture.[2] Despite this, GPT-1 still improved on previous benchmarks, outperforming discriminatively trained models with task-oriented architectures on several diverse language processing tasks.[2]
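As a rough illustration of such a minimal, task-specific change, the sketch below attaches only a small linear classification head to a pre-trained transformer body; `pretrained_body`, the hidden size, and the number of classes are illustrative assumptions, not details of OpenAI's implementation.

```python
# Fine-tuning sketch: reuse the pre-trained transformer body unchanged and
# add one linear layer that maps its final hidden state to task labels.
import torch
import torch.nn as nn

class ClassifierWithPretrainedBody(nn.Module):
    def __init__(self, pretrained_body: nn.Module, hidden_dim: int = 768, num_classes: int = 3):
        super().__init__()
        self.body = pretrained_body                        # pre-trained transformer (assumed module)
        self.head = nn.Linear(hidden_dim, num_classes)     # the only newly initialized parameters

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.body(token_ids)                      # (batch, seq_len, hidden_dim)
        return self.head(hidden[:, -1, :])                 # classify from the final token's state
```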

Performance and evaluation

GPT-1 achieved a 5.8% and 1.5% improvement over previous best results[2] on natural language inference (also known as textual entailment) tasks, which evaluate the ability to interpret pairs of sentences from various datasets and classify the relationship between them as "entailment", "contradiction" or "neutral".[2] Examples of such datasets include QNLI (Wikipedia articles) and MultiNLI (transcribed speech, popular fiction, and government reports, among other sources).[6] It similarly outperformed previous models on two tasks related to question answering and commonsense reasoning: by 5.7% on RACE,[7] a dataset of written question-answer pairs from middle and high school exams, and by 8.9% on the Story Cloze Test.[8]

GPT-1 improved on previous best-performing models by 4.2% on semantic similarity (or paraphrase detection), which evaluates the ability to predict whether two sentences are paraphrases of one another, using the Quora Question Pairs (QQP) dataset.[2]

GPT-1 achieved a score of 45.4 on a text classification task using the Corpus of Linguistic Acceptability (CoLA), versus a previous best of 35.0.[2] Finally, GPT-1 achieved an overall score of 72.8 (compared to a previous record of 68.9) on GLUE, a multi-task benchmark.[9]

References



  1. Citation not rendered (Template:Cite journal).
  2. Citation not rendered: <ref> named "gpt1paper" has no text (the GPT-1 paper, "Improving Language Understanding by Generative Pre-Training", OpenAI, June 2018).
  3. Citation not rendered (Template:Cite web).
  4. Citation not rendered: <ref> named "tsvetkov" has no text.
  5. Citation not rendered: <ref> named "bookscorpus" has no text.
  6. Citation not rendered: <ref> named "multinli" has no text.
  7. Citation not rendered: <ref> named "race" has no text.
  8. Citation not rendered: <ref> named "cloze" has no text.
  9. Citation not rendered: <ref> named "glue" has no text.