Hallucination (artificial intelligence)


File:ChatGPT hallucination.png
ChatGPT summarizing a non-existent New York Times article based on a fake URL

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation[1] or delusion[2]) is a response generated by an AI which contains false or misleading information presented as fact.[3][4][5]

For example, a hallucinating chatbot might, when asked to generate a financial report for a company, falsely state that the company's revenue was $13.6 billion (or some other number apparently "plucked from thin air").[6] Such phenomena are termed "hallucinations", in loose analogy with the phenomenon of hallucination in human psychology. However, one key difference is that human hallucination is usually associated with false percepts, but an AI hallucination is associated with the category of unjustified responses or beliefs.[5] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[1]

AI hallucination gained prominence during the AI boom, alongside the rollout of widely used chatbots based on large language models (LLMs), such as ChatGPT.[7] Users complained that such chatbots often seemed to pointlessly embed plausible-sounding random falsehoods within their generated content.[8] By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with some estimating chatbots hallucinate as much as 27% of the time[9][10] and a study finding factual errors in 46% of generated responses.[11]

In natural language processing

File:ChatPGTLojbanLion123.png
A translation on the Vicuna LLM test bed of English into the constructed language Lojban, and then back into English in a new round, generates a surreal artifact from Genesis 1:6 (RSV).

In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". There are different ways to categorize hallucinations. Depending on whether the output contradicts the source or cannot be verified from the source, they are divided into intrinsic and extrinsic, respectively.[5] Depending on whether the output contradicts the prompt, they can also be divided into closed-domain and open-domain hallucinations, respectively.[12]

Causes

There are several reasons for natural language models to hallucinate data.[5]

Hallucination from data

The main cause of hallucination from data is source-reference divergence. This divergence arises either (1) as an artifact of heuristic data collection or (2) from the nature of some NLG tasks that inevitably contain such divergence. When a model is trained on data with source-reference (target) divergence, it can be encouraged to generate text that is not necessarily grounded in, or faithful to, the provided source.[5]

Hallucination from modeling

Hallucination was shown to be a statistically inevitable byproduct of any imperfect generative model that is trained to maximize training likelihood, such as GPT-3, and requires active learning (such as reinforcement learning from human feedback) to be avoided.[13] Other research takes an anthropomorphic perspective and posits hallucinations as arising from a tension between novelty and usefulness. For instance, Teresa Amabile and Pratt define human creativity as the production of novel and useful ideas.[14] By extension, a focus on novelty in machine creativity can lead to production of original but inaccurate responses, i.e. falsehoods, whereas a focus on usefulness can result in ineffectual rote memorized responses.[15]

Errors in encoding and decoding between text and representations can cause hallucinations. When an encoder learns the wrong correlations between different parts of the training data, the result can be an erroneous generation that diverges from the input. The decoder takes the encoded input from the encoder and generates the final target sequence. Two aspects of decoding contribute to hallucinations. First, decoders can attend to the wrong part of the encoded input source, leading to erroneous generation. Second, the design of the decoding strategy itself can contribute to hallucinations. A decoding strategy that improves generation diversity, such as top-k sampling, is positively correlated with increased hallucination.[citation needed]
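
As an illustration of how such a diversity-oriented decoding strategy works, the following is a minimal Python sketch (not taken from the cited sources) of top-k sampling over a toy next-token distribution: sampling is restricted to the k highest-probability candidates, and a larger k admits less likely tokens, which increases diversity.

    import numpy as np

    def top_k_sample(logits, k, rng):
        """Sample a token id from the k highest-scoring logits."""
        top_ids = np.argsort(logits)[-k:]               # indices of the k largest logits
        top_logits = logits[top_ids]
        probs = np.exp(top_logits - top_logits.max())   # softmax over the truncated candidate set
        probs /= probs.sum()
        return int(rng.choice(top_ids, p=probs))

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])      # toy vocabulary of five tokens
    print(top_k_sample(logits, k=1, rng=rng))           # k=1 reduces to greedy decoding (always token 0)
    print(top_k_sample(logits, k=3, rng=rng))           # k=3 can also return the less likely tokens 1 or 2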

Pre-training of models on a large corpus is known to result in the model memorizing knowledge in its parameters, creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on the sequence of previous words (including the words it has itself previously generated during the same conversation), causing a cascade of possible hallucination as the response grows longer.[5] By 2022, outlets such as The New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems.[16]
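
A minimal sketch of this autoregressive loop is shown below; the scoring function is an invented stand-in for a real model such as GPT-3, and the essential point is only that every sampled token is appended to the context and conditions all later predictions, so an early error propagates.

    import numpy as np

    VOCAB = ["black", "holes", "have", "no", "strong", "magnetic", "fields", "."]

    def toy_next_token_probs(context):
        """Invented stand-in for a language model: pseudo-random scores derived from the context."""
        rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
        logits = rng.normal(size=len(VOCAB))
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()

    def generate(prompt, steps):
        rng = np.random.default_rng(1)
        context = list(prompt)
        for _ in range(steps):
            probs = toy_next_token_probs(context)
            token = VOCAB[rng.choice(len(VOCAB), p=probs)]
            context.append(token)   # the model's own output becomes part of its input
        return context

    print(" ".join(generate(["black", "holes"], steps=6)))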

Impact

In July 2021, Meta warned during its release of BlenderBot 2 that the system was prone to "hallucinations", which Meta defined as "confident statements that are not true".[17][18] On 15 November 2022, Meta unveiled a demo of Galactica, designed to "store, combine and reason about scientific knowledge". Content generated by Galactica came with the warning "Outputs may be unreliable! Language Models are prone to hallucinate text." In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy.[19]

ChatGPT

OpenAI's ChatGPT, released in beta to the public on 30 November 2022, is based on the foundation model GPT-3.5 (a revision of GPT-3). Professor Ethan Mollick of Wharton has called ChatGPT an "omniscient, eager-to-please intern who sometimes lies to you". Data scientist Teresa Kubacka has recounted deliberately making up the phrase "cycloidal inverted electromagnon" and testing ChatGPT by asking it about the (nonexistent) phenomenon. ChatGPT invented a plausible-sounding answer backed with plausible-looking citations, which compelled her to double-check whether she had accidentally typed in the name of a real phenomenon. Other scholars, such as Oren Etzioni, have joined Kubacka in assessing that such software can often give you "a very impressive-sounding answer that's just dead wrong".[20]

When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[21] Asked questions about New Brunswick, ChatGPT got many answers right but incorrectly classified Samantha Bee as a "person from New Brunswick".[22] Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that "(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity". (In reality, as a consequence of the no-hair theorem, a black hole without an accretion disk is believed to have no magnetic field.)[23] Fast Company asked ChatGPT to generate a news article on Tesla's last financial quarter; ChatGPT created a coherent article, but made up the financial numbers contained within.[6]

Other examples involve baiting ChatGPT with a false premise to see if it embellishes upon the premise. When asked about "Harold Coward's idea of dynamic canonicity", ChatGPT fabricated that Coward wrote a book titled Dynamic Canonicity: A Model for Biblical and Theological Interpretation, arguing that religious principles are actually in a constant state of change. When pressed, ChatGPT continued to insist that the book was real.[24] Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated "Some species of dinosaurs even developed primitive forms of art, such as engravings on stones".[25] When prompted that "Scientists have recently discovered churros, the delicious fried-dough pastries... (are) ideal tools for home surgery", ChatGPT claimed that a "study published in the journal Science" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients.[26][27]

By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a "fundamental" task for ChatGPT competitor Google Bard.[9][28] A 2023 demo for Microsoft's GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter.[9]

In May 2023, it was discovered that attorney Steven Schwartz had submitted six fake case precedents generated by ChatGPT in a brief to the Southern District of New York in Mata v. Avianca, a personal injury case against the airline Avianca. Schwartz said that he had never previously used ChatGPT, that he had not recognized the possibility that ChatGPT's output could be fabricated, and that ChatGPT continued to assert the authenticity of the precedents after their nonexistence was discovered.[29] In response, Judge Brantley Starr of the Northern District of Texas banned the submission of AI-generated case filings that have not been reviewed by a human, noting that:[30][31]

Template:Blockquote

On 23 June, Judge P. Kevin Castel dismissed the Mata case and imposed a $5,000 fine on Schwartz and another lawyer, both of whom had continued to stand by the fictitious precedents despite Schwartz's previous claims, for bad faith conduct. Castel described numerous errors and inconsistencies in the opinion summaries, characterizing one of the cited opinions as "gibberish" and "[bordering] on nonsensical".[32]

In June 2023, Mark Walters, a gun rights activist and radio personality, sued OpenAI in a Georgia state court after ChatGPT mischaracterized a legal complaint in a manner alleged to be defamatory against Walters. The complaint in question was brought in May 2023 by the Second Amendment Foundation against Washington attorney general Robert W. Ferguson for allegedly violating its freedom of speech, whereas the ChatGPT-generated summary bore no resemblance to the actual complaint and claimed that Walters was accused of embezzlement and fraud while holding a Second Amendment Foundation office post that he never held in real life. According to AI legal expert Eugene Volokh, OpenAI may be shielded against this claim by Section 230, unless the court finds that OpenAI "materially contributed" to the publication of defamatory content.[33]

Scientific research

Hallucinations by AI models can cause problems in academic and scientific research. Specifically, models like ChatGPT have been recorded in multiple cases citing sources for information that are either incorrect or nonexistent. A study published in the Cureus Journal of Medical Science showed that out of 178 total references cited by GPT-3, 69 returned an incorrect or nonexistent DOI. A further 28 had no known DOI and could not be located in a Google search.[34]

Another instance was documented by Jerome Goddard of Mississippi State University. In an experiment, ChatGPT had provided questionable information about ticks. Unsure about the validity of the response, Goddard asked it for the source of the information. Upon examining the source, it became apparent that not only the DOI but also the names of the authors had been hallucinated. Some of the authors were contacted and confirmed that they had no knowledge of the paper's existence.[35] Goddard says that, "in [ChatGPT's] current state of development, physicians and biomedical researchers should NOT ask ChatGPT for sources, references, or citations on a particular topic. Or, if they do, all such references should be carefully vetted for accuracy."[35] These language models are not yet ready for use in academic research, and their use should be handled carefully.[36]

On top of providing incorrect or missing reference material, ChatGPT also has issues with hallucinating the contents of some reference material. A study that analyzed a total of 115 references provided by ChatGPT found that 47% of them were fabricated. Another 46% cited real references but extracted incorrect information from them. Only the remaining 7% of references were cited correctly and provided accurate information. ChatGPT has also been observed to "double down" on much of the incorrect information: when asked about a mistake that may have been hallucinated, it sometimes tries to correct itself, but at other times it claims the response is correct and provides even more misleading information.[37]

These hallucinated articles also pose a problem because it is difficult to tell whether an article was generated by an AI. To demonstrate this, a group of researchers at Northwestern University in Chicago generated 50 abstracts based on existing reports and analyzed their originality. Plagiarism detectors gave the generated abstracts an originality score of 100%, meaning that the text appeared to be completely original. Other software designed to detect AI-generated text was only able to correctly identify these generated abstracts 66% of the time. Human reviewers performed similarly, identifying the abstracts at a rate of 68%.[38] From this information, the authors of the study concluded, "[t]he ethical and acceptable boundaries of ChatGPT's use in scientific writing remain unclear, although some publishers are beginning to lay down policies."[39] Because of AI's ability to fabricate research undetected, its use in research will make determining the originality of work more difficult and may require new policies regulating its use in the future.

Given the ability of AI-generated language to pass as real scientific research in some cases, AI hallucinations present problems for the application of language models in academic and scientific research, because fabricated material can go undetected when presented to real researchers. The high likelihood of returning nonexistent reference material and incorrect information may require limitations to be put in place regarding these language models. Some say that rather than hallucinations, these events are more akin to "fabrications" and "falsifications", and that the use of these language models presents a risk to the integrity of the field as a whole.[40]

Terminologies

In Salon, statistician Gary N. Smith argues that LLMs "do not understand what words mean" and consequently that the term "hallucination" unreasonably anthropomorphizes the machine.[41] Journalist Benj Edwards, in Ars Technica, writes that the term "hallucination" is controversial, but that some form of metaphor remains necessary; Edwards suggests "confabulation" as an analogy for processes that involve "creative gap-filling".[1]

Definitions or characterizations of the term "hallucination" in the context of LLMs include:

  • "a tendency to invent facts in moments of uncertainty" (OpenAI, May 2023)[42]
  • "a model's logical mistakes" (OpenAI, May 2023)[42]
  • fabricating information entirely, but behaving as if spouting facts (CNBC, May 2023)[42]
  • "making up information" (The Verge, February 2023)[43]

In other artificial intelligence


The concept of "hallucination" is applied more broadly than just natural language processing. A confident response from any AI that seems unjustified by the training data can be labeled a hallucination.[5]

Various researchers cited by Wired have classified adversarial hallucinations as a high-dimensional statistical phenomenon, or have attributed hallucinations to insufficient training data. Some researchers believe that some "incorrect" AI responses classified by humans as "hallucinations" in the case of object detection may in fact be justified by the training data, or even that an AI may be giving the "correct" answer that the human reviewers are failing to see. For example, an adversarial image that looks, to a human, like an ordinary image of a dog, may in fact be seen by the AI to contain tiny patterns that (in authentic images) would only appear when viewing a cat. The AI is detecting real-world visual patterns that humans are insensitive to.[44]
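
One well-known recipe for constructing such perturbations is the fast gradient sign method (FGSM). The sketch below is an illustrative PyTorch implementation, not drawn from the cited reporting; the model, images, and labels are placeholders, and epsilon controls how visible the perturbation is to a human.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon):
        """Return an adversarially perturbed copy of `image` (single-step FGSM)."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)   # how wrong the model is on the true label
        loss.backward()                               # gradient of the loss w.r.t. the pixels
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range

    # Hypothetical usage with any differentiable image classifier:
    # adv = fgsm_perturb(classifier, images, labels, epsilon=0.01)
    # classifier(adv).argmax(dim=1) may now disagree with the true labels.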

Wired noted in 2018 that, despite no recorded attacks "in the wild" (that is, outside of proof-of-concept attacks by researchers), there was "little dispute" that consumer gadgets, and systems such as automated driving, were susceptible to adversarial attacks that could cause AI to hallucinate. Examples included a stop sign rendered invisible to computer vision; an audio clip engineered to sound innocuous to humans, but that software transcribed as "evil dot com"; and an image of two men on skis, which Google Cloud Vision identified as 91% likely to be "a dog".[45] However, these findings have been challenged by other researchers.[46] For example, it has been objected that the models can be biased towards superficial statistics, so that adversarial training is not robust in real-world scenarios.[46]

Mitigation methods

The hallucination phenomenon is still not completely understood.[5] Research is therefore ongoing to try to mitigate its occurrence.[47] In particular, language models have been shown not only to hallucinate but also to amplify hallucinations, even models that were designed to alleviate this issue.[48] Researchers have proposed a variety of mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer.[49] Another approach proposes to actively validate the correctness of a model's low-confidence generations using web search results.[50] Nvidia's NeMo Guardrails, launched in 2023, can be configured to block LLM responses that do not pass a fact-check by a second LLM.[51]
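
The following is a rough sketch of the gating pattern described above, not Nvidia's actual API; the three callables are hypothetical stand-ins for a drafting LLM, a retrieval step, and a fact-checking model, and a response is only returned if the checker judges it to be supported by the retrieved evidence.

    from typing import Callable

    def guarded_answer(question: str,
                       draft_with_llm: Callable[[str], str],
                       retrieve_evidence: Callable[[str], str],
                       is_supported: Callable[[str, str], bool],
                       fallback: str = "I am not confident enough to answer that.") -> str:
        """Return the drafted answer only if a second check deems it supported by evidence."""
        draft = draft_with_llm(question)          # first LLM drafts a response
        evidence = retrieve_evidence(question)    # e.g. web search results for the question
        if is_supported(draft, evidence):         # second LLM (or other checker) verifies the draft
            return draft
        return fallback                           # block output that fails the fact-check

The same gate could plug a web search into retrieve_evidence, which would correspond to the search-based validation approach mentioned above.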


References


  1. 1.0 1.1 1.2 Template:Cite news
  2. Template:Cite web
  3. Template:Cite web
  4. Template:Cite conference
  5. 5.0 5.1 5.2 5.3 5.4 5.5 5.6 5.7 Template:Cite journal
  6. 6.0 6.1 Template:Cite news
  7. Template:Cite arXiv
  8. Template:Cite news
  9. 9.0 9.1 9.2 Template:Cite news
  10. Template:Cite news
  11. Template:Cite journal
  12. Template:Cite arXiv
  13. Template:Cite conference
  14. Template:Cite journal
  15. Template:Cite journal
  16. Template:Cite news
  17. Template:Cite web
  18. Template:Cite news
  19. Template:Cite news
  20. Template:Cite news
  21. Template:Cite news
  22. Template:Cite news
  23. Template:Cite news
  24. Template:Cite news
  25. Template:Cite news
  26. Template:Cite news
  27. Template:Cite web
  28. Template:Cite news
  29. Template:Cite news
  30. Template:Cite news
  31. Template:Cite web
  32. Template:Cite news
  33. Template:Cite news
  34. Template:Cite journal
  35. 35.0 35.1 Template:Cite journal
  36. Template:Cite conference
  37. Template:Cite journal
  38. Template:Cite journal
  39. Template:Cite journal
  40. Template:Cite journal
  41. Template:Cite news
  42. 42.0 42.1 42.2 Template:Cite news
  43. Template:Cite news
  44. Template:Cite magazine
  45. Template:Cite magazine
  46. 46.0 46.1 Template:Cite journal
  47. Template:Cite journal
  48. Template:Cite book
  49. Template:Cite news
  50. Template:Cite arXiv
  51. Template:Cite news