AI boom
The AI boom,[1][2] or AI spring,[3][4] is the ongoing period of rapid and unprecedented progress in the field of artificial intelligence that began in the mid-2010s. Prominent examples include protein folding prediction and generative AI, led by the laboratories Google DeepMind and OpenAI, respectively. Progress has also been made in drug development.[5]
The AI boom has had, or is expected to have, a profound cultural, philosophical,[6] religious,[7] economic,[8] and social impact,[9] as questions such as AI alignment,[10] qualia,[6] and the development of artificial general intelligence[10] have become prominent topics of popular discussion.[11]
History
Generative AI was a key component of this boom, which began in earnest with the founding of OpenAI in 2015.[12] OpenAI's generative AI systems, such as its various GPT models (starting in 2018) and DALL-E (2021), have played a significant role in driving this development.[13][14][15] By 2022, large language models had advanced to the extent that they could feasibly be used for broad chatbot applications; text-to-image models were at a point where their output was almost indiscernible from human-made imagery;[16] and speech synthesis software was able to replicate human speech efficiently.[17] Since late 2022, there has been an unprecedented increase in the ubiquity of AI tools.[18] Technologies such as AlphaFold led to advances in protein folding, and there was also progress in drug development.[5] Increasing focus was placed on dramatically extending the human lifespan.[19][20]
In 2021, a Council on Foreign Relations analyst outlined ways that the U.S. could retain its position in artificial intelligence amid concerns over progress made by China.[21] According to an analyst for the Center for Strategic and International Studies, the United States emerged as the leader in 2023, outranking the rest of the world in terms of venture capital funding and the number of AI startups.[22]
AI scientists who have immigrated to the United States have played an outsized role in the development of AI technology in the country.[23][24] Many of them were educated in China, prompting debates over national security concerns amid worsening relations between the two countries.[25]
Advances
Scientific
There have been proposals to use AI to advance radical forms of human life extension.[26]
AlphaFold 2's score of more than 90 on CASP's global distance test (GDT) is considered a significant achievement in computational biology[27] and great progress towards a decades-old grand challenge of biology.[28] Nobel Prize winner and structural biologist Venki Ramakrishnan called the result "a stunning advance on the protein folding problem",[27] adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research."[29] AlphaFold 2's success received widespread media attention.[30]
The ability to accurately predict protein structures from their constituent amino acid sequences is expected to have a wide variety of benefits in the life sciences, including accelerating advanced drug discovery and enabling a better understanding of diseases.[28][31] Writing about the event, the MIT Technology Review noted that the AI had "solved a fifty-year-old grand challenge of biology."[32] It went on to note that the AI algorithm could "predict the shape of proteins to within the width of an atom."[32]
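The GDT score cited above averages, over several distance cutoffs, the percentage of a protein's residues that are predicted to within that cutoff of the experimental structure. The following is a minimal sketch of the common GDT_TS variant, assuming the predicted and reference C-alpha coordinates are already optimally superimposed (the real CASP evaluation also searches over superpositions):

```python
import numpy as np

def gdt_ts(pred: np.ndarray, ref: np.ndarray) -> float:
    """Simplified GDT_TS: the mean, over cutoffs of 1, 2, 4, and 8 angstroms,
    of the percentage of residues whose predicted C-alpha position lies
    within that cutoff of the reference. Assumes pre-superimposed (N, 3) arrays."""
    dists = np.linalg.norm(pred - ref, axis=1)  # per-residue distance
    return float(np.mean([(dists <= t).mean() * 100 for t in (1.0, 2.0, 4.0, 8.0)]))

# Toy example: 4 residues at increasing distances from the reference.
ref = np.zeros((4, 3))
pred = np.array([[0.5, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [9.0, 0, 0]])
print(gdt_ts(pred, ref))  # → 56.25
```

A perfect prediction scores 100; AlphaFold 2's median of above 90 therefore means the large majority of residues were placed within the tightest cutoffs.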
Large language models
GPT-3 is a large language model released by OpenAI in 2020 that is capable of generating high-quality, human-like text that can be difficult to distinguish from text written by a person.[33] An upgraded version called GPT-3.5 was used in ChatGPT, which later garnered attention for its detailed and articulate responses across many domains of knowledge.[34] A new version called GPT-4 was released on March 14, 2023, and was used in the Microsoft Bing search engine.[35][36] Other language models have also been released, such as PaLM by Google and LLaMA by Meta Platforms.
In January 2023, DeepL Write, an AI-based tool for improving monolingual texts, was released.[37] In December 2023, Google unveiled Gemini, its latest model, claiming that it beat the previous state-of-the-art model, GPT-4, on most benchmarks.[38]
Text-to-image models
One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, a transformer system announced in January 2021.[39] A successor capable of generating complex and realistic images, DALL-E 2, was unveiled in April 2022,[40] followed by Stable Diffusion, an open-source alternative released in August 2022.[41]
Following other text-to-image models, language-model-powered text-to-video platforms such as DAMO,[42] Make-A-Video,[43] Imagen Video,[44] and Phenaki[45] can generate video from text and/or image prompts.[46]
Speech synthesis
15.ai, first released in March 2020, was one of the first publicly available speech synthesis applications that allowed people to generate natural, emotive, high-fidelity text-to-speech voices for an assortment of fictional characters from a variety of media sources.[47][48] ElevenLabs unveiled a website where users could upload voice samples and generate synthetic voices from them. The company was criticized after users abused its software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals,[49] raising concerns that it could be used to create more convincing deepfakes.[50] An unofficial song created using the voices of musicians Drake and The Weeknd in speech synthesis software raised questions about the ethics and legality of such software.[51]
Impact
Cultural
During the AI boom, differing factions emerged, including the effective accelerationists, the effective altruists, and the catastrophists.[52]
Reaction
Many experts have stated that the AI boom has started an arms race in which the largest companies compete to have the most powerful AI model on the market with little concern for safety.[53] During the AI boom, experts have expressed numerous safety concerns.[54] In particular, there are concerns that the most powerful models are being developed with speed and profits in mind rather than safety and the protection of users.[53] There have already been a significant number of reports of racist, sexist, homophobic, and otherwise discriminatory outputs from ChatGPT, Microsoft's Tay, and leading AI facial recognition models.[55] Only 80 to 120 researchers in the world are working to understand how to ensure AIs are aligned with human values,[55] and with an incomplete understanding of how AI works,[55] many researchers around the globe have voiced concerns about the potential future implications of the AI boom.[54] Public reaction to the AI boom has been mixed: some parties have hailed the new possibilities that AI creates,[56] its sophistication, and its potential for benefiting humanity, while others have denounced it for threatening job security, being 'uncanny' in its responses, and giving flawed responses rooted in its programming.[57][58][59][60]
In the midst of the AI boom, the hype surrounding artificial intelligence poses significant dangers. The enthusiasm and pressure generated by public fascination with AI can drive developers to expedite the creation and deployment of AI systems. This rush may lead to the omission of crucial safety procedures, potentially resulting in serious existential risks. As Holden Karnofsky noted in his article "What AI companies can do today to help with the most important century",[61] the imperative to meet consumer expectations might tempt organizations to prioritize speed over thorough safety checks, jeopardizing the responsible development of AI.
The prevailing AI race mindset heightens the risks associated with AGI (Artificial General Intelligence) development.[62] While competition fosters innovation and progress, this intense race to outperform rivals may encourage a short-term focus, pushing organizations to prioritize immediate gains over long-term safety.[63] The "winner-take-all" mentality further incentivizes cutting corners, creating a race to the bottom, potentially compromising ethical considerations and responsible AI development.[63]
Prominent voices in the AI community advocate for a cautious approach, urging AI companies to avoid this unnecessary hype and acceleration.[61] Concerns arise from the belief that too much money pouring into the AI sector too rapidly could lead to incautious companies racing to develop transformative AI without due consideration for key risks.[63][61] Despite the prevailing hype and investment in AI, some argue that it is not too late to mitigate the risks associated with acceleration. Advocates for caution stress the importance of raising awareness about key risks and of investing in AI safety measures such as alignment research, standards, monitoring, and strong security procedures.[61]
See also
- AI winter, a period of reduced funding and interest in artificial intelligence research
- History of artificial intelligence
- History of artificial neural networks
- Hype cycle
- Technological singularity
References
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ 5.0 5.1 Template:Cite web
- ↑ 6.0 6.1 Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite journal
- ↑ 10.0 10.1 Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite journal
- ↑ 27.0 27.1 Robert F. Service, "'The game has changed.' AI triumphs at solving protein structures", Science, 30 November 2020.
- ↑ 28.0 28.1 Template:Cite journal
- ↑ Template:Cite web
- ↑ Brigitte Nerlich, Protein folding and science communication: Between hype and humility, University of Nottingham blog, 4 December 2020
- ↑ Tim Hubbard, The secret of life, part 2: the solution of the protein folding problem., medium.com, 30 November 2020
- ↑ 32.0 32.1 Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite magazine
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite news
- ↑ 53.0 53.1 Chow, A. R., et al. (2023). "The AI Arms Race Is Changing Everything". TIME Magazine, 201(7/8), pp. 50–54.
- ↑ 54.0 54.1 Anderljung, M., Barnhart, J., Korinek, A., Leung, J., O'Keefe, C., Whittlestone, J., Avin, S., Brundage, M., Bullock, J., Cass-Beggs, D., Chang, B., Collins, T., Fist, T., Hadfield, G., Hayes, A., Ho, L., Hooker, S., Horvitz, E., Kolt, N., … Wolf, K. (2023). Frontier AI Regulation: Managing Emerging Risks to Public Safety.
- ↑ 55.0 55.1 55.2 Scharre, Paul. "Killer Apps." Foreign Affairs, 16 April 2019, https://www.foreignaffairs.com/articles/2019-04-16/killer-apps. Accessed 30 November 2023.
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ 61.0 61.1 61.2 61.3 Karnofsky, Holden. "What AI Companies Can Do Today to Help With the Most Important Century." Cold Takes. Accessed 8 December 2023, https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/.
- ↑ "Nobody's on the Ball on AGI Alignment." LessWrong. Accessed 8 December 2023, https://www.lesswrong.com/posts/uqTJ7mQqRpPejqbfN/nobody-s-on-the-ball-on-agi-alignment.
- ↑ 63.0 63.1 63.2 "Global Vulnerability and the AI Race." AI Safety Fundamentals. Accessed 8 December 2023, https://aisafetyfundamentals.com/blog/global-vulnerability.