Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

The institute's founders include Max Tegmark, cosmologist and professor at the Massachusetts Institute of Technology (MIT); Anthony Aguirre, cosmologist and Faggin Presidential Chair for the Physics of Information at the University of California, Santa Cruz; and Skype co-founder Jaan Tallinn. Its advisors include entrepreneur Elon Musk.

Purpose

File:Max Tegmark.jpg
Max Tegmark, professor at MIT, one of the founders and current president of the Future of Life Institute

FLI's stated mission is to steer transformative technology towards benefiting life and away from large-scale risks.[1] FLI's philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but the institute also works to mitigate risk from biotechnology, nuclear weapons and global warming.[2]

History

FLI was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute's advisors include computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[3][4][5]

Since 2017, FLI has offered an annual "Future of Life Award"; the first awardee was Vasili Arkhipov. The same year, FLI released Slaughterbots, a short arms-control advocacy film. A sequel followed in 2021.[6]

In 2018, FLI drafted a letter calling for "laws against lethal autonomous weapons". Signatories included Elon Musk, Demis Hassabis, Shane Legg, and Mustafa Suleyman.[7]

In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.[8][9] In response, Tegmark said that the institute had only become aware of Nya Dagbladet's positions during due diligence processes a few months after the grant was initially offered, and that the grant had been immediately revoked.[9]

In March 2023, FLI published an open letter titled "Pause Giant AI Experiments: An Open Letter". It called on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter said: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control".[10] The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[11][12]

Prominent signatories of the letter included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning researcher Yoshua Bengio; and Yuval Noah Harari.[13] Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated, "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[14][15] Musk claimed that "Leading AGI developers will not heed this warning, but at least it was said."[16] Some signatories, including Musk, claimed to be motivated by fears of existential risk from artificial general intelligence.[17] Some of the other signatories, such as Marcus, instead claimed to have signed out of concern about risks such as AI-generated propaganda.[18]

The authors of one of the papers cited in FLI's letter, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?",[19] including Emily M. Bender, Timnit Gebru, and Margaret Mitchell, criticised the letter.[20] Mitchell claimed that "by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don't have."[20]

Conferences

In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[21][22] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn.[23][24]

Since 2015, FLI has organised biennial conferences with the stated purpose of bringing together AI researchers from academia and industry. To date, the following conferences have taken place:

  • "The Future of AI: Opportunities and Challenges" conference in Puerto Rico (2015). The stated goal was to identify promising research directions that could help maximize the future benefits of AI.[25] At the conference, FLI circulated an open letter on AI safety which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence researchers.[26]
  • The Beneficial AI conference in Asilomar, California (2017),[27] a private gathering of what The New York Times called "heavy hitters of A.I." (including Yann LeCun, Elon Musk, and Nick Bostrom).[28] The institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[29] These principles may have influenced the regulation of artificial intelligence and subsequent initiatives, such as the OECD Principles on Artificial Intelligence.[30]
  • The beneficial AGI conference in Puerto Rico (2019).[31] The stated focus of the meeting was answering long-term questions with the goal of ensuring that artificial general intelligence is beneficial to humanity.[32]

Global research program

The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[33][34][35] In this initial round, a total of $7 million was awarded to 37 research projects.[36] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[37]

In the media

See also

References


External links
