Anthropic



Anthropic PBC is an American artificial intelligence (AI) startup company, founded by former members of OpenAI.[1][2] Anthropic develops general AI systems and large language models.[3] It is a public-benefit corporation, and has been connected to the effective altruism movement.

As of [date], Anthropic had raised US$[amount] billion in funding. In September 2023, Amazon announced an investment of up to US$4 billion, followed by a $2 billion commitment from Google the following month.[4][5]

History

[Image: Dario Amodei, Anthropic co-founder, in 2023]

Anthropic was founded in 2021 by former senior members of OpenAI, siblings Daniela and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.[6][7][8] The Amodei siblings were among those who left OpenAI due to directional differences, specifically regarding OpenAI's ventures with Microsoft in 2019.[9]

By late 2022, Anthropic had raised US$[amount] million in funding, of which US$[amount] million came from Alameda Research. Google's cloud division followed with a US$[amount] million investment for a 10% stake, in a deal requiring Anthropic to buy computing resources from Google Cloud.[10][11] In May 2023, Anthropic raised US$[amount] million in a round led by Spark Capital.[12]

In February 2023, Anthropic was sued by Texas-based Anthrop LLC for use of their registered trademark "Anthropic A.I."[13]

Kevin Roose of The New York Times described the company as the "Center of A.I. Doomerism". He reported that some employees "compared themselves to modern-day Robert Oppenheimers".[14]

Journalists often connect Anthropic with the effective altruism movement; some founders and team members have been part of the community, or at least interested in it. One of the investors in its Series B round was Sam Bankman-Fried of the cryptocurrency exchange FTX, which collapsed in 2022.[14][15]

On September 25, 2023, Amazon announced a partnership, with Amazon becoming a minority stakeholder by investing up to US$4 billion, including an immediate investment of $1.25 billion. As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and planned to make its AI models available to AWS customers.[4][16] The next month, Google invested $500 million in Anthropic, and committed to an additional $1.5 billion over time.[5]

Projects

Claude

Drawing on former researchers involved in OpenAI's GPT-2 and GPT-3 model development,[14] Anthropic embarked on the development of its own AI chatbot, named Claude.[17] Similar to ChatGPT, Claude uses a messaging interface where users can submit questions or requests and receive highly detailed and relevant responses.[18]

Initially available in closed beta through a Slack integration, Claude is now accessible via the claude.ai website.

The name, "Claude", was chosen either as a reference to Claude Shannon, or as "a friendly, male-gendered name designed to counterbalance the female-gendered names (Alexa, Siri, Cortana) that other tech companies gave their A.I. assistants".[14]

Claude 2 was launched in July 2023, and was initially available only in the US and the UK. The Guardian reported that safety was a priority during the model's training. Anthropic calls its safety method "Constitutional AI":[19]

The chatbot is trained on principles taken from documents including the 1948 Universal Declaration of Human Rights and Apple’s terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the 1948 UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”

Claude 2.1 was released in November 2023.[20]

In the same month, Patronus AI, an artificial intelligence startup, published research comparing the performance of Claude 2, OpenAI's GPT-4 and GPT-4-Turbo, and Meta AI's LLaMA-2 on two versions of a 150-question test about information in SEC filings (e.g. Form 10-K, Form 10-Q, Form 8-K, earnings reports, and earnings call transcripts) submitted by public companies to the agency. In one version, the models had to use a retrieval system to locate the relevant SEC filing; in the other, the relevant filing was provided directly (i.e. in a long context window). On the retrieval version, GPT-4-Turbo and LLaMA-2 each failed to produce correct answers to 81% of the questions; on the long-context version, GPT-4-Turbo and Claude 2 failed on 21% and 24% of the questions, respectively.[21][22]

Constitutional AI

Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. CAI does this by defining a "constitution" for the AI that consists of a set of high-level normative principles that describe the desired behavior of the AI. These principles are then used to train the AI to avoid harm, respect preferences, and provide true information.[23]
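The training scheme described above can be illustrated with a toy sketch: a constitution is a list of principles, and each principle drives a critique-then-revise pass over a draft answer. This is only a minimal illustration of the loop's shape; the `model` function here is a hypothetical stub standing in for a real language model, not Anthropic's actual API, and the principles shown are examples quoted or paraphrased from the sources above.

```python
# Toy sketch of the Constitutional AI critique-and-revision loop.
# `model` is a stand-in stub for a language model; in a real system
# every call below would go to an actual LLM.

CONSTITUTION = [
    # Example principle quoted above from the 1948 UN declaration:
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
    # Hypothetical additional principle for illustration:
    "Please choose the response that is least likely to be harmful.",
]

def model(prompt: str) -> str:
    """Stub language model: returns canned text for demonstration only."""
    if prompt.startswith("Rewrite"):
        return "A revised answer that better reflects the principle."
    if prompt.startswith("Critique"):
        return "The draft could better reflect the principle."
    return "Initial draft answer."

def constitutional_revision(question: str) -> str:
    """Draft an answer, then critique and revise it once per principle."""
    answer = model(question)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique the answer against this principle.\n"
            f"Principle: {principle}\nAnswer: {answer}"
        )
        answer = model(
            f"Rewrite the answer to address the critique.\n"
            f"Critique: {critique}\nAnswer: {answer}"
        )
    return answer
```

In the full method, the revised answers produced by such a loop are used as training data, so the final model internalizes the principles rather than running the loop at inference time.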

Interpretability research

Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.[24][25]

References

Template:Reflist
