AI Now Institute
The AI Now Institute (AI Now) is an American research institute that studies the social implications of artificial intelligence and produces policy research addressing the concentration of power in the tech industry.[1][2][3] AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, the Ada Lovelace Institute, the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU. AI Now has produced annual reports that examine the social implications of artificial intelligence. From 2021 to 2022, AI Now's leadership served as Senior Advisors on AI to Chair Lina Khan at the Federal Trade Commission.[4] Its executive director is Amba Kak.[5][6]
Founding and mission
AI Now grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, the founder of Google's Open Research Group, and Kate Crawford, a principal researcher at Microsoft Research.[7] The event focused on near-term implications of AI in social domains: Inequality, Labor, Ethics, and Healthcare.[8]
In November 2017, AI Now held a second symposium on AI and social issues, and publicly launched the AI Now Institute in partnership with New York University.[7] It is claimed to be the first university research institute focused on the social implications of AI, and the first AI institute founded and led by women.[9] It is now a fully independent institute.
In an interview with NPR, Crawford stated that the motivation for founding AI Now was that the application of AI to social domains such as health care, education, and criminal justice was being treated as a purely technical problem. The goal of AI Now's research is to treat these as social problems first, bringing in domain experts in areas like sociology, law, and history to study the implications of AI.[10]
Research
AI Now publishes an annual report on the state of AI and its integration into society. Its 2017 Report stated that "current framings of AI ethics are failing" and provided ten strategic recommendations for the field, including pre-release trials of AI systems and increased research into bias and diversity. The report was noted for calling for an end to "black box" systems in core social domains, such as those responsible for criminal justice, healthcare, welfare, and education.[11][12][13]
In April 2018, AI Now released a framework for algorithmic impact assessments (AIAs), as a way for governments to assess the use of AI in public agencies. According to AI Now, an AIA would be similar to an environmental impact assessment, in that it would require public disclosure and access for external experts to evaluate the effects of an AI system and any unintended consequences. This would allow systems to be vetted for issues like biased outcomes or skewed training data, which researchers have already identified in algorithmic systems deployed across the country.[14][15][16]
Its 2023 Report[17] argued that meaningful reform of the tech sector must focus on addressing the concentration of power in the industry.[18]
References
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ 7.0 7.1 Template:Cite web
- ↑ Template:Cite web
- ↑ Cite error: invalid <ref> tag; no text was provided for the reference named "Tandon"
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite news
- ↑ Template:Cite news
- ↑ Template:Cite news
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- ↑ Template:Cite web
- Research institutes in New York (state)
- 2017 establishments in New York City
- Artificial intelligence conferences
- Data activism
- Ethics of science and technology