In August this year, a new European Union regulation on artificial intelligence (AI) came into effect.
The measures are designed to strike a balance between fostering innovation and protecting citizens’ rights.
These world-first regulations will be rolled out gradually, with most of the Act coming into force from August 2026.
The AI Act will regulate the use and development of AI systems within the EU and will oversee the actions of businesses across the AI value chain, including those that create, distribute and import AI systems.
The Act defines four risk categories:
- Unacceptable
- High
- Limited
- Minimal
The Unacceptable Risk category covers AI systems that pose a clear threat to safety, livelihoods or rights; these are banned entirely.
The High Risk category includes AI systems with significant implications for individual rights or public safety, e.g. those used in critical infrastructure, education, employment and law enforcement.
The Limited Risk group covers AI applications that involve some level of interaction with users, such as chatbots.
The Minimal Risk category is where most AI applications are expected to fall, and where the majority of AI development and use is envisaged to take place.
Fines for Non-Compliance:
Companies that breach the Act could face penalties of up to €35 million or 7% of their total worldwide annual turnover, whichever is higher.
Industry Reaction:
The regulations have been met with some industry concern. OpenAI’s chief executive has previously said that his company might cease operating in the European Union if it deemed compliance too costly, and Meta has delayed the rollout of its AI offering in Europe due to similar concerns.
Meta had assumed it had the right to train its AI models on user data collected from Facebook and Instagram, but this has been challenged by a number of European countries and by Ireland’s Data Protection Commission.