AI Act in short: a risk-based approach
The AI Act aims to establish rules for the use of artificial intelligence within the European Union. These rules are designed to increase citizens’ and consumers’ trust in AI applications while simultaneously protecting the rights and safety of individuals. The rules apply to both providers and users of AI systems. A risk-based approach has been chosen: AI systems are classified by risk level, according to the extent to which they pose a risk to the health, safety, or fundamental rights of natural persons. The following risk levels are distinguished:
- Unacceptable risk: Systems with an unacceptable risk are prohibited. These include systems that violate fundamental rights, such as emotion recognition in the workplace or education, or systems that manipulate human behavior.
- High risk: These systems are allowed provided they meet strict requirements. They must comply with legal provisions on data and data governance, documentation and record-keeping, transparency and information provision to users, human oversight, robustness, accuracy, and security.
- Low/minimal risk: These systems are allowed but must be transparent toward users; for example, chatbots must disclose that the user is interacting with an AI system, and deepfakes must be labeled as such.