On March 13, 2024, the European Parliament gave the green light to the AI Act by a majority vote. The EU AI Act is groundbreaking: it establishes the world’s first major regulatory framework for artificial intelligence. It’s set to revolutionize how businesses and organizations across various sectors use AI, emphasizing transparency and setting clear boundaries for high-risk applications.
The EU’s regulation on artificial intelligence (EU AI Act) is part of a broader effort to protect society from the potential downsides of emerging technologies. It also gives companies a way to safeguard their brands and financial interests against tech-based exploitation.
Our team at Unicsoft has first-hand experience with AI compliance and development challenges. This article summarizes the key ideas in our recent webinar on the EU AI Act. It shows how your business can continue prioritizing innovation while ensuring regulatory compliance and guarding against AI-related ethical breaches, whether in the EU or elsewhere.
If you’re unfamiliar with the EU Artificial Intelligence Act
If you haven’t heard of the EU AI Act, you’ll probably hear about it soon.
The legislation, approved by a plenary vote in the European Parliament, stems from a proposal initiated in 2021. With 523 EU lawmakers voting in favor, 46 against, and 49 abstentions, the far-reaching regulation is expected to be formally endorsed by EU countries in May and to enter into force shortly thereafter, with most provisions applying from 2026, although certain provisions will take effect sooner.
This EU regulation on artificial intelligence marks the world’s first concrete step toward governing AI. By establishing standardized rules for the development, marketing, and use of AI in the EU, the Act aims to ensure that AI systems are safe and respect fundamental human rights and values. Moreover, the Act seeks to promote investment and innovation in AI, strengthen governance and enforcement mechanisms, and foster a unified EU market for AI.
Am I affected by the EU regulation on artificial intelligence?
The EU AI Act has set out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers.
This means all parties involved in the development, usage, import, distribution, or manufacturing of AI models in the EU will be held accountable. The AI Act also applies to providers and users of AI systems located outside of the EU if the output produced by the system is intended to be used within the EU.
Advice for businesses launching or upgrading AI products
The EU AI Act takes a risk-based approach, classifying AI systems into four risk levels: it imposes lighter obligations on systems with limited risk and stricter requirements and restrictions on high-risk AI systems.
Since compliance hinges on accurately assessing an AI system’s risk level, let’s explore what each level entails (a schematic code sketch follows the list):
Unacceptable risk level
- AI systems with a high likelihood of causing serious harm; these practices are banned outright.
- Let’s delve into these systems further in the next chapter.

High-risk level
- AI systems with the potential to cause significant harm or to impact fundamental rights.
- Must undergo stringent regulation and monitoring due to their critical nature.
- Mainly found in critical sectors like healthcare, transport, and energy.
- Examples include solutions for medical diagnosis and treatment, autonomous vehicles, energy infrastructure control, and financial risk assessment.

Limited risk level
- AI systems where the potential harm is moderate and manageable with appropriate safeguards.
- Transparency obligations are crucial to ensure users know they are interacting with AI systems or AI-created content.
- Examples include deepfakes, AI systems in human resources, and content recommendation algorithms.

Minimal/no risk level
- Solutions in this category are unlikely to cause harm or significantly impact individuals or society.
- Comprises basic software tools and non-critical applications primarily designed for entertainment, efficiency, optimization, and productivity.
- Generative AI applications, except those related to critical infrastructure, may fall into this category.
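As a rough, non-legal illustration of how these four tiers map to obligations, here is a minimal Python sketch. The use-case assignments are simplified assumptions drawn from the examples above, not a compliance determination:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four tiers of the EU AI Act, paired with a shorthand
    description of the obligations each tier carries."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements, conformity assessment, and monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations"

# Hypothetical mapping of example use cases to tiers, for illustration only;
# classifying a real system requires legal analysis of its specific context.
EXAMPLE_USE_CASES = {
    "medical diagnosis support": RiskLevel.HIGH,
    "autonomous vehicle control": RiskLevel.HIGH,
    "deepfake generation": RiskLevel.LIMITED,
    "content recommendation": RiskLevel.LIMITED,
    "spam filtering": RiskLevel.MINIMAL,
}

for use_case, level in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {level.name} -> {level.value}")
```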
You can use our free AI Risk Scanner to assess which risk category your AI-supported application or business activity falls into.
Making compliance for new AI apps easier
One of the biggest challenges companies face is the cost and effort involved in compliance, as adapting existing AI systems to meet the new standards for AI regulation in Europe is resource-intensive.
In response, Unicsoft has developed its own AI adapter software to streamline the development of custom AI applications and ensure that applications can scale and adapt to new requirements.
In our webinar, we explain how this sophisticated software framework seamlessly integrates multiple AI applications with diverse business systems (a simplified sketch of the adapter idea follows the list below). Thanks to the framework:
- Developers can easily create AI applications of varying complexity and scale, enabling the implementation of diverse business workflows.
- Compliance becomes more manageable.
- Additional features such as data analytics models, forecasting models, text/voice/image/video processing, and recommendation models can be seamlessly incorporated.
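To make the adapter idea concrete, here is a minimal Python sketch of what such a layer might look like. The class and method names are hypothetical illustrations and do not represent Unicsoft’s actual framework or API:

```python
from abc import ABC, abstractmethod
from typing import Any

class AIModelAdapter(ABC):
    """Hypothetical common interface that every AI capability implements,
    so business systems never call a model directly."""

    @abstractmethod
    def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        ...

class TextSummarizer(AIModelAdapter):
    def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        # A real implementation would call an NLP model; stubbed for illustration.
        return {"summary": payload["text"][:100]}

class DemandForecaster(AIModelAdapter):
    def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        # A real implementation would call a forecasting model; stubbed here.
        history = payload["history"]
        return {"forecast": sum(history) / len(history)}

class AIOrchestrator:
    """Routes requests from business systems to registered adapters and
    keeps an audit trail, which helps with transparency obligations."""

    def __init__(self) -> None:
        self._adapters: dict[str, AIModelAdapter] = {}
        self.audit_log: list[dict[str, Any]] = []

    def register(self, name: str, adapter: AIModelAdapter) -> None:
        self._adapters[name] = adapter

    def invoke(self, name: str, payload: dict[str, Any]) -> dict[str, Any]:
        result = self._adapters[name].run(payload)
        self.audit_log.append({"capability": name, "input": payload, "output": result})
        return result

orchestrator = AIOrchestrator()
orchestrator.register("summarize", TextSummarizer())
orchestrator.register("forecast", DemandForecaster())
print(orchestrator.invoke("summarize", {"text": "The EU AI Act sets new rules..."}))
print(orchestrator.invoke("forecast", {"history": [100, 120, 140]}))
```

Because every capability sits behind the same interface, new models (for example, image or voice processing) can be plugged in without changing the business systems that call them.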
Setting rules for healthcare
Unicsoft is actively involved in developing healthcare applications, which is why this webinar references the World Health Organization (WHO) guidelines for AI. These guidelines closely align with the EU AI Act’s requirements and focus on the unique challenges of AI in healthcare, including:
- Storing and securing vast amounts of mostly protected health information
- Protecting training data for AI models from malicious actors
- Eliminating bias that can infiltrate AI algorithms due to insufficient data on gender, sexual orientation, race, and ethnicity (a minimal data-audit sketch follows this list)
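As a minimal illustration of the first diagnostic step against such bias, the sketch below audits a hypothetical tabular training dataset for subgroup representation and outcome gaps; the column names and figures are assumptions for the example:

```python
import pandas as pd

# Hypothetical patient-level training data with a demographic attribute.
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M", "M", "M", "M"],
    "outcome": [1, 0, 1, 0, 1, 1, 0, 1],
})

# Step 1: check whether each subgroup is adequately represented.
print(df["sex"].value_counts(normalize=True))  # F: 0.25, M: 0.75 -> women underrepresented

# Step 2: compare outcome rates per subgroup; large gaps can signal that
# a model trained on this data may perform unevenly across groups.
print(df.groupby("sex")["outcome"].mean())  # F: 0.50, M: ~0.67
```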
Healthcare organizations and third-party AI solution vendors must prioritize technological safety and ethics in their AI integration strategies. Against this backdrop, the EU AI Act is poised to become a beacon of safety in healthtech.
What is the price of non-compliance?
If you do not comply with AI regulations in Europe, the penalties are substantial and scale with the severity of the infringement.
- Fines start from €7.5 million or 1.5% of a company’s total worldwide annual turnover, whichever is higher (see the worked example after this list).
- Engaging in a prohibited AI practice can lead to fines of up to €35 million or 7% of the total worldwide annual turnover, whichever is higher.
- Enforcement is primarily conducted by national market surveillance authorities.
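As a worked example of the “whichever is higher” rule, here is a minimal Python sketch; the turnover figure is purely illustrative:

```python
def applicable_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the higher of a fixed amount and a percentage of total
    worldwide annual turnover, per the 'whichever is higher' rule."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Illustrative company with EUR 2 billion worldwide annual turnover.
turnover = 2_000_000_000

# Lower tier: EUR 7.5M or 1.5% of turnover -> 1.5% of 2B = EUR 30M applies.
print(applicable_fine(turnover, 7_500_000, 0.015))   # 30000000.0

# Prohibited-practice tier: EUR 35M or 7% -> 7% of 2B = EUR 140M applies.
print(applicable_fine(turnover, 35_000_000, 0.07))   # 140000000.0
```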
We expect the AI Act to be published in mid-2024. It will enter into force 20 days after publication in the Official Journal of the EU, with most provisions applying after 24 months. Rules on prohibited AI systems will take effect after six months, rules on general-purpose AI (GPAI) models after 12 months, and rules on certain high-risk AI systems after 36 months.
While the clock is ticking, you can devise a strategy to align with EU regulation on artificial intelligence. Watch our full webinar for a comprehensive overview of the Act, insights into its implications, and practical guidance from Unicsoft experts.