The EU AI Act is a regulatory framework introduced by the European Union to ensure the safe, ethical, and transparent use of artificial intelligence (AI) across its member states. It is the first law of its kind, and it aims to balance the promotion of innovation with the protection of fundamental rights and public safety.
In this blog post, we will answer frequently asked questions about the EU AI Act.
We’ll tackle six features of the EU AI Act: risk classification, prohibited practices, transparency and accountability, AI governance, penalties, and support for innovation.
The Act outright bans certain AI applications that are deemed harmful or manipulative. This includes AI systems that exploit vulnerabilities of individuals (e.g., children) or deploy subliminal techniques to manipulate behavior.
A European Artificial Intelligence Board has been established to monitor the implementation of the law and provide guidance to member states.
The Act establishes fines for non-compliance, with penalties for the most serious violations (such as engaging in prohibited AI practices) reaching up to €35 million or 7% of a company’s global annual turnover, whichever is higher.
The EU AI Act encourages the creation of “regulatory sandboxes” where companies can test innovative AI solutions under regulatory supervision without facing immediate legal consequences.
The EU AI Act entered into force on August 1, 2024. Below is a chart with key dates.
*Systems already on the market need to comply if there are significant changes in design or use by public authorities.
**These AI systems are components of large-scale IT systems in areas like freedom, security, and justice.
The EU AI Act is expected to influence global AI regulations, particularly for companies operating in the European market.
Businesses that develop or use AI systems, especially those in high-risk sectors, must adhere to stringent rules. While the Act could increase compliance costs for businesses, it is also expected to foster trust and safety.
For consumers, the Act aims to protect users by banning harmful AI applications and ensuring transparency and accountability in AI-driven decisions.
Overall, the EU AI Act seeks to strike a balance between encouraging AI innovation and ensuring that AI technologies are safe, transparent, and aligned with fundamental rights.
For the risk classification feature of the Act in particular, Bugcrowd offers AI bias assessments and AI penetration testing assessments. These two service offerings can help organizations work toward compliance with the Act’s requirements.
The EU AI Act is the first of its kind; it constitutes a step toward strengthening privacy, security, and ethics in the use of AI in the European market. Specifically, it aims to create a framework for the safe and ethical development, deployment, and use of AI. Many expect it to set a global precedent for regulating AI while balancing innovation and ethical concerns.
If you’re interested in other EU-based compliance blogs, check out our DORA series and our NIS2 Directive deep dive.