Artificial Intelligence Act

The Artificial Intelligence Act (AI Act) is a proposed regulation of the European Union on artificial intelligence. Proposed by the European Commission on 21 April 2021[1] and not yet enacted,[2] it would introduce a common regulatory and legal framework for artificial intelligence in the EU.[3]

Its scope would encompass all types of artificial intelligence across a broad range of sectors (exceptions include AI systems used solely for military, national security, research, and non-professional purposes[4]). As a piece of product regulation, it would not confer rights on individuals, but would regulate the providers of artificial intelligence systems and entities making use of them in a professional capacity.[5]

The AI Act was revised following the rise in popularity of generative AI systems such as ChatGPT, whose general-purpose capabilities presented different stakes and did not fit the framework as initially defined.[6] More restrictive regulations are planned for powerful generative AI systems with systemic impact.[7]

The proposed EU Artificial Intelligence Act aims to classify and regulate artificial intelligence applications based on their risk of causing harm. This classification includes four categories of risk ("unacceptable", "high", "limited" and "minimal"), plus one additional category for general-purpose AI. Applications deemed to represent unacceptable risks are banned. High-risk ones must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk AI applications only have transparency obligations, and those representing minimal risks are not regulated. For general-purpose AI, transparency requirements are imposed, with additional and thorough evaluations when representing particularly high risks.[7][8]

The Act further proposes the introduction of a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation.[9]

The AI Act is expected to have a large impact on the economy. Like the European Union's General Data Protection Regulation, it can apply extraterritorially to providers outside the EU if they place products on the market within the EU.[5]

Risk categories

There are different risk categories depending on the type of application, with one specifically dedicated to general-purpose generative AI (an illustrative code sketch follows the list):

  • Unacceptable risk: AI applications that fall under this category are banned. This includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (including facial recognition) in public spaces, and those used for social scoring (ranking people based on their personal characteristics, socio-economic status or behaviour).[8]
  • High risk: AI applications that pose significant threats to health, safety, or the fundamental rights of persons, notably AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to obligations of quality, transparency, human supervision and security, and must be evaluated both before being placed on the market and throughout their life cycle.[8] The list of high-risk applications can be expanded without modifying the AI Act itself.[5]
  • General-purpose AI ("GPAI"): this category was added in 2023 and includes in particular foundation models like ChatGPT. They are subject to transparency requirements. High-impact general-purpose AI systems which could pose systemic risks (notably those trained using a computation capability of more than 10²⁵ FLOPS[10]) must also undergo a thorough evaluation process.[8]
  • Limited risk: these systems are subject to transparency obligations aimed at informing users that they are interacting with an artificial intelligence system and allowing them to exercise their choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound or videos (like deepfakes).[8] In this category, free and open-source models whose parameters are publicly available are not regulated, with some exceptions.[10][11]
  • Minimal risk: this includes for example AI systems used for video games or spam filters. Most AI applications are expected to be in this category.[12] They are not regulated, and Member States are prevented from further regulating them via maximum harmonisation. Existing national laws related to the design or use of such systems are disapplied. However, a voluntary code of conduct is suggested.[13]
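
As a purely illustrative summary of this tiering, the following Python sketch encodes the risk categories and the systemic-risk compute threshold described above. The class, constant and function names are hypothetical and appear nowhere in the Act itself:

    from enum import Enum

    class RiskCategory(Enum):
        """Risk tiers of the AI Act, as described in the list above."""
        UNACCEPTABLE = "banned"
        HIGH = "conformity assessment; quality, transparency, oversight duties"
        LIMITED = "transparency obligations only"
        MINIMAL = "unregulated; voluntary code of conduct suggested"

    # Training-compute threshold above which a general-purpose AI model
    # is presumed to pose systemic risk (more than 10^25 FLOPS).
    SYSTEMIC_RISK_TRAINING_FLOPS = 1e25

    def gpai_needs_thorough_evaluation(training_flops: float) -> bool:
        # Hypothetical helper: GPAI trained with more compute than the
        # threshold must additionally undergo a thorough evaluation.
        return training_flops > SYSTEMIC_RISK_TRAINING_FLOPS

For instance, gpai_needs_thorough_evaluation(3e25) returns True, while a model trained with 1e24 FLOPS would face only the baseline transparency requirements for general-purpose AI.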

Enforcement

The Act regulates entry to the EU internal market. To this end it uses the New Legislative Framework, which builds on the New Approach dating back to 1985. It works as follows: the EU legislator creates the AI Act, which contains the most important provisions that all AI systems seeking access to the EU internal market must comply with. These are called 'essential requirements'. Under the New Legislative Framework, the essential requirements are passed on to European Standardisation Organisations, which draw up technical standards that further specify them.[14]

The Act requires member states to set up their own notifying bodies. Conformity assessments must take place to check whether AI systems indeed conform to the standards set out in the AI Act.[15] This assessment is carried out either by self-assessment, meaning the provider of the AI system checks conformity themselves, or by third-party conformity assessment, meaning the notifying body carries out the assessment.[16] Notifying bodies retain the ability to carry out audits to check whether a conformity assessment has been carried out properly.[17]
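
Schematically, this routing could be modelled as below. This is a minimal sketch; the function and its boolean inputs are hypothetical, since the Act itself determines which systems may rely on self-assessment:

    def conformity_assessment_route(high_risk: bool,
                                    third_party_required: bool) -> str:
        # Hypothetical sketch of the two assessment routes described above.
        if not high_risk:
            return "no conformity assessment required"
        if third_party_required:
            # The notifying body carries out the assessment.
            return "third-party conformity assessment"
        # The provider checks conformity themselves; notifying bodies
        # may still audit whether this was carried out properly.
        return "self-assessment by the provider"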

Under the current proposal, many high-risk AI systems would not require third-party conformity assessment, which some have criticised.[17][18][19][20] These critiques argue that high-risk AI systems should be assessed by an independent third party to fully ensure their safety.

Timeline

In February 2020, the European Commission published its "White Paper on Artificial Intelligence – A European approach to excellence and trust".[21] In October 2020, debates between EU leaders took place. On 21 April 2021, the AI Act was officially proposed. On 6 December 2022, the European Council adopted its general orientation, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the Council and Parliament concluded an agreement.[22]

The AI Act is unlikely to take effect before 2025.[2] Its applicability will be progressive: AI applications deemed to present "unacceptable" risks should be banned 6 months after entry into force, provisions for general-purpose AI should become applicable 12 months after entry into force, and the AI Act should be fully applicable 24 months after entry into force.[23]
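
These staggered deadlines can be computed mechanically from whatever entry-into-force date is eventually fixed. A minimal Python sketch, assuming a placeholder date (the real one depends on final adoption and publication in the Official Journal):

    from datetime import date

    # Placeholder entry-into-force date, purely for illustration.
    ENTRY_INTO_FORCE = date(2024, 8, 1)

    def months_after(start: date, months: int) -> date:
        # Shift a date forward by whole calendar months (day clamped
        # to 28 to sidestep month-length edge cases in this sketch).
        total = start.month - 1 + months
        return date(start.year + total // 12, total % 12 + 1, min(start.day, 28))

    print("Ban on 'unacceptable risk' AI:", months_after(ENTRY_INTO_FORCE, 6))
    print("General-purpose AI provisions:", months_after(ENTRY_INTO_FORCE, 12))
    print("Full applicability:", months_after(ENTRY_INTO_FORCE, 24))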

References
