AI regulations: EU AI Act, AI Bill of Rights, and more

Last updated: January 13, 2025

Artificial intelligence (AI) is transforming industries and reshaping how we live and work. But as AI systems grow more powerful, how do we ensure they are safe, fair, and trustworthy? Governments worldwide are stepping up to guide how AI-powered products are built, deployed, and used.

The global regulatory landscape is evolving quickly. Some countries enforce strict laws, while others roll out voluntary guidelines. Despite these differences, the core message is that AI must be developed responsibly. This concerns not only foundation models like Google Gemini, Anthropic Claude, or OpenAI GPT-4 but also AI applications built on top of them. 

If you’re developing AI-powered products—from chatbots to hiring assistants to image generation features—compliance with AI regulations is a crucial consideration. And it isn’t just a box to check; it translates into specific requirements that affect how you design, build, test, and monitor your AI systems.

In this guide, we’ll break down key AI regulations across major markets and explain what they mean for teams building AI-powered products. We’ll cover:

  • AI regulations in Europe: the EU AI Act and the requirements it enforces on AI systems depending on the risk they pose.
  • AI regulations in the United States: the Blueprint for an AI Bill of Rights, voluntary commitments from the leading AI companies, and state and local laws affecting AI.
  • Global AI regulatory landscape: legal frameworks and guidelines for responsible AI in the United Kingdom, Canada, Brazil, and Japan.

Let’s get started!

TL;DR

  • Countries tackle AI regulation differently. The EU leads with its comprehensive AI Act that will enforce strict rules for AI systems based on their risk level. The US opts for sector-specific regulations and broader guidance through the Blueprint for AI Bill of Rights. The UK, Canada, Brazil, and Japan have developed their own approaches, ranging from binding laws to voluntary frameworks.
  • Yet beneath these differences lie common principles of responsible AI: systems should operate safely and securely, treat people fairly, and enable clear accountability when things go wrong.
  • This creates new responsibilities for teams building AI models and AI-powered products. To comply, AI builders must document model development, track training data, check for bias, evaluate and stress-test AI systems, monitor their performance in production, and maintain human oversight. 

The EU AI Act

The Artificial Intelligence Act of the European Union—or EU AI Act for short—is the primary legislative framework regulating the development and use of artificial intelligence in the European Union. It applies to providers and deployers of AI systems, whether located within or outside the EU, if these systems are intended to be used in the EU.

The EU AI Act establishes distinct regulations for AI models and downstream AI systems and applications built on top of these models. For example, a large language model (LLM) like GPT-4o is a model, while a support chatbot is an AI system powered by it. In this guide, we’ll focus on regulations for AI systems and applications.

Let’s dive in!

The EU AI Act applies different rules to AI systems based on the risk they pose. Every AI system falls into one of four risk levels.

A risk-based approach to regulating AI systems under the EU AI Act.

Unacceptable risk. The AI Act bans AI systems that threaten safety or fundamental rights. Examples of prohibited practices include AI systems that manipulate people's behavior or exploit their vulnerabilities, social scoring systems, and systems for real-time remote biometric identification. (Article 5).

High risk. AI systems used in critical areas such as healthcare, education, employment, essential private and public services, and law enforcement are classified as high-risk (see Article 6 and Annex III). Examples include AI systems used in education and vocational training to determine admission or evaluate learning outcomes, and AI systems used in credit scoring.

These high-risk systems are subject to regulatory requirements, including:

  • Compliance with existing relevant EU regulations.
  • Risk management system. The AI Act requires evaluating and minimizing potential risks to health, safety, and fundamental rights throughout all stages of AI’s lifecycle. (Article 9).
  • Data governance. High-risk AI systems must be developed using high-quality training, validation, and testing datasets that are relevant, representative, and error-free. (Article 10).
  • Technical documentation. Detailed documentation is a must for high-risk AI systems. It must provide a general description of the AI system, its purpose, design specifications, data requirements, and testing procedures. The docs must also include information about the system's performance, its changelog, and a plan for monitoring its performance in production. (Article 11).
  • Record-keeping. An AI system must be able to automatically record events (logs) throughout its lifetime. For example, the system should record when it was used, the database it checked data against, and who verified the results (see the sketch after this list). (Article 12).
  • Transparency and provision of information. High-risk AI systems should be transparent and have clear instructions for those deploying them. The instructions must include information about the system’s provider, capabilities, limitations, and potential risks. They should also explain how to interpret the system's outputs and data logs. (Article 13).
  • Human oversight. A human overseer should be able to interpret the system's output, decide whether to act on it, and stop the system's operation if necessary. (Article 14).
  • Accuracy, robustness, and security. While the exact ways to measure these qualities are still to be determined, high-risk AI systems must be resilient to errors and to attempts to exploit their vulnerabilities. They must also minimize the risk of biased outputs. (Article 15).
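
In practice, the record-keeping requirement usually translates into structured, timestamped event logs. Here is a minimal sketch of what such logging could look like in plain Python; the log schema, field names, and the log_event helper are illustrative assumptions, not something the Act prescribes.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)

def log_event(event_type: str, **fields) -> None:
    """Append a structured, timestamped audit record (hypothetical helper)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g., "inference", "human_review"
        **fields,
    }
    logger.info(json.dumps(record))

# Example: record each use of the system, the reference data it was
# checked against, and who verified the result (cf. Article 12).
log_event(
    "inference",
    system_version="1.4.2",
    reference_database="applicants_2024_q4",   # hypothetical data source
    reviewed_by="loan.officer@example.com",    # hypothetical overseer
    outcome="approved",
)
```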

This framework covers both providers, who make high-risk systems available on the market, and deployers, who use AI systems inside their organizations. For example, deployers must monitor the operation of a high-risk AI system and keep logs. (Article 26).

Limited risk. Certain AI systems where users directly interact with AI trigger transparency obligations. Users must be informed that they are interacting with an AI system unless it is already obvious from the context. Deployers of emotion recognition systems must also inform the people exposed to their operation. Systems that generate or manipulate synthetic content, such as deepfake videos, must mark that content as artificially generated. (Article 50).

Minimal risk. Other AI-powered systems, like AI recommender systems or spam filters, have minimal risk and can be deployed without regulatory restrictions. Although these systems are not directly regulated, the Act suggests they follow general AI ethics principles.

Unacceptable risk

AI systems used for:

  • Social scoring
  • Real-time remote biometric identification and categorization
  • Scraping of facial images
  • Predictive policing
  • Manipulating people's behavior
  • Exploiting people's vulnerabilities

Regulations: prohibited in the EU.
Penalties: up to €35 million or 7% of annual turnover.
Regulations come into effect: 2 February 2025.

High risk

AI systems used in:

  • Critical infrastructure
  • Education and professional training
  • Essential private and public services
  • Employment
  • Law enforcement and administration of justice
  • Migration and border control

Regulations: subject to strict regulations, including compliance with EU regulations, risk management, data governance, documentation, transparency, human oversight, and accuracy, robustness, and security. Monitoring of AI systems after deployment is obligatory.
Penalties: up to €15 million or 3% of annual turnover.
Regulations come into effect: 2 August 2026.

Limited risk

Certain AI systems where users directly interact with AI, for example:

  • Chatbots
  • General-purpose generative AI systems
  • Synthetic AI image, audio, or video content

Regulations: subject to transparency requirements, such as informing users when interacting with AI and marking content as AI-generated.
Penalties: up to €15 million or 3% of annual turnover.
Regulations come into effect: 2 August 2025.

Minimal risk

AI systems that are not considered high risk and do not trigger transparency requirements, such as recommender systems and spam filters.

Regulations: not subject to regulation; recommendations to follow AI ethics principles.
Penalties: not applicable.
Regulations come into effect: not applicable.

Table 1. Regulations for AI systems based on their risk level under the EU AI Act as published in August 2024.

The AI Act came into force in August 2024, but its provisions go into effect in stages. For example, prohibited AI practices are banned from February 2025, and the rules for high-risk AI systems take effect from August 2026. The penalties for non-compliance with the AI Act range from €7.5 million to €35 million or 1% to 7% of the company’s global annual turnover.

It’s worth noting that while the EU AI Act follows a risk-based approach, it also references the non-binding 2019 Ethics guidelines for trustworthy AI. These guidelines encourage all developers of AI systems to follow seven core principles: human agency and oversight, technical robustness and safety (including resilience against attacks), privacy and data governance, transparency (with appropriate traceability), diversity, non-discrimination and fairness, societal and environmental well-being, and accountability.

Other regulations affecting AI systems. The General Data Protection Regulation (GDPR) governs data processing relevant to many AI systems. The updated Product Liability Directive (PLD) classifies AI systems as products, making their developers liable if defective AI-powered products cause harm. The European Commission also proposed an AI Liability Directive (AILD) to complement the PLD and address cases of negligence or failure to meet regulatory requirements for AI systems.

US AI Bill of Rights

While existing federal laws have limited application to AI, several frameworks and initiatives guide the development and use of AI systems in the United States. The Blueprint for an AI Bill of Rights is one of them. Published in 2022, it outlines five principles for protecting Americans’ civil rights in the AI age:

Safe and effective systems. The Blueprint suggests that AI systems should be extensively tested before deployment and continuously monitored when in production. The ongoing monitoring should include evaluating performance metrics and testing the system’s outputs.
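
As a rough illustration of what ongoing monitoring can look like, the sketch below recomputes a simple quality metric over a rolling window of logged predictions and raises an alert when it degrades. The fetch_recent_predictions helper, the metric, and the threshold are hypothetical placeholders; production setups typically rely on dedicated evaluation and observability tooling.

```python
from statistics import mean

ACCURACY_ALERT_THRESHOLD = 0.85  # illustrative value; set per use case

def fetch_recent_predictions(window: int = 500) -> list[dict]:
    """Hypothetical: load the last `window` logged predictions with known outcomes."""
    return [
        {"prediction": "approve", "label": "approve"},
        {"prediction": "reject", "label": "approve"},
        {"prediction": "approve", "label": "approve"},
    ]

def check_production_quality(window: int = 500) -> None:
    records = fetch_recent_predictions(window)
    if not records:
        return
    accuracy = mean(1.0 if r["prediction"] == r["label"] else 0.0 for r in records)
    if accuracy < ACCURACY_ALERT_THRESHOLD:
        # Route this to your alerting channel of choice.
        print(f"ALERT: accuracy dropped to {accuracy:.0%} over the last {len(records)} predictions")

check_production_quality()
```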

Algorithmic discrimination protection. Algorithmic discrimination occurs when AI systems contribute to unjustified treatment or disfavor people based on their race, sex, religion, age, disability, or other characteristics protected by law. To prevent it, developers of AI systems should use representative datasets and conduct equity assessments and disparity testing.
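
To make disparity testing concrete, here is a minimal sketch that compares positive-outcome rates across groups and computes a disparate impact ratio. The four-fifths threshold mentioned in the comment is a widely used heuristic rather than a requirement from the Blueprint, and the data is made up for illustration.

```python
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Share of positive outcomes (e.g., candidates shortlisted) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["selected"])
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(records: list[dict]) -> float:
    """Lowest group rate divided by highest group rate (1.0 means parity)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Illustrative model decisions for two demographic groups.
decisions = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
print(f"Disparate impact ratio: {disparate_impact_ratio(decisions):.2f}")
# A ratio below 0.8 (the "four-fifths rule") is often treated as a red flag.
```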

Data privacy. According to the Blueprint, individuals should be protected from privacy violations and have agency over how their data is used. To achieve that, developers of AI systems should collect personal information only if it is strictly necessary and ensure that consent requests are brief and clear. Enhanced data protection should be implemented in sensitive domains, such as health, employment, education, criminal justice, and finance. 

Notice and explanation. Developers and deployers of AI systems should provide “accessible plain language documentation” that describes the general purpose of the AI system and its functions. They should also notify users that the AI system is in use and clearly explain its outputs. 

Human alternatives, consideration, and fallback. The Blueprint suggests that users should be able to opt out of AI systems in favor of a human alternative, e.g., switch from a chatbot to a human agent. If the AI system fails or a user wants to appeal, fallback and escalation processes must be in place. 
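
As a sketch of what such a fallback could look like in a chatbot, the snippet below routes the conversation to a human agent when the user asks for one or when the AI answer comes with low confidence. The confidence score, threshold, and trigger phrases are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class BotResponse:
    text: str
    confidence: float  # assumed to come from the model or an evaluator step

CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off; tune per application
HUMAN_REQUEST_PHRASES = ("talk to a human", "human agent", "speak to a person")

def route(user_message: str, response: BotResponse) -> str:
    """Return the AI answer or escalate to a human agent."""
    wants_human = any(p in user_message.lower() for p in HUMAN_REQUEST_PHRASES)
    if wants_human or response.confidence < CONFIDENCE_THRESHOLD:
        return "Connecting you with a human agent..."  # hypothetical escalation hook
    return response.text

print(route("I want to talk to a human", BotResponse("Our refund policy is...", 0.93)))
```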

While the Blueprint for an AI Bill of Rights is not an enforceable law, it encourages companies to account for AI observability and data quality so users can interact with more reliable and secure AI products and applications. 

Other AI guidelines and initiatives in the US

The Executive Order on Safe, Secure, and Trustworthy AI outlines a framework to address the risks posed by AI systems while supporting their continued development. Among other things, it requires developers of high-impact AI systems to share red-teaming results with the government. It also requires federal agencies to develop testing standards to ensure AI safety before public release and to establish best practices for detecting AI-generated content.

There are also several state and local regulations for AI-powered products. For example:

  • California requires companies that use chatbots to disclose that users are communicating with a bot, not a human.
  • Tennessee's ELVIS Act bans unauthorized AI simulation of individuals' voices. 
  • New York City's local law prohibits using automated employment decision tools unless they have been independently audited for bias. 

Voluntary commitments from leading AI companies. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have committed to moving toward safe, secure, and transparent development of AI technology. These commitments include security testing of AI products before release, sharing best practices for AI risk management, investing in cybersecurity, informing users if they interact with AI-generated content, and researching mechanisms to protect users from bias and discrimination.

Other regulations affecting AI systems: the Privacy Act of 1974 protects personal information collected by the federal government; the Fair Credit Reporting Act protects consumer reporting data; the Gramm-Leach-Bliley Act (GLBA) covers data privacy for financial institutions; and the Health Insurance Portability and Accountability Act (HIPAA) protects medical records.

AI regulations worldwide

United Kingdom

The United Kingdom takes a deliberately agile and iterative approach to regulating AI. In 2023, the government published a white paper that formulated a framework on which the UK will build its AI policies. The document proposes five principles to guide the responsible development and use of AI: 

  • AI systems should be robust, secure, and safe. 
  • AI systems should be appropriately transparent and explainable. 
  • AI systems should not undermine legal rights, discriminate, or create unfair market outcomes. 
  • AI systems should have effective oversight and clear lines of accountability across the AI life cycle.
  • People should be able to contest a harmful AI decision or outcome. 

Rather than immediately turning these principles into laws, the UK started with a flexible, non-statutory approach. This means there will be no strict new rules for businesses right away. Instead, existing regulators will apply these principles, adapting them to specific industries. This will safeguard the responsible use of AI without setting back its rapid development.

Along with the principles of responsible AI, the United Kingdom has also adopted the National AI Strategy. The document outlines a 10-year plan to make the UK a global AI superpower. It focuses on investing in the AI ecosystem, ensuring AI benefits all economic sectors, and governing AI effectively.

Other regulations affecting AI systems: the Data Protection Act 2018 (the UK’s implementation of the General Data Protection Regulation), the Consumer Rights Act 2015, and the Consumer Protection Act 1987.

Canada

Currently, Canada has no regulatory framework specific to AI. The Artificial Intelligence and Data Act (AIDA) was proposed in 2022 but is not yet law. If passed, the Act would become the key legislative framework regulating the responsible development and deployment of AI systems in Canada.

AIDA is designed to ensure that AI systems are safe and non-discriminatory and that businesses are held accountable for AI systems under their control. It introduces the following requirements for high-impact AI systems at every stage of their lifecycle:

  • System design. At this stage, companies must initially assess the risks of using AI and address potential biases. 
  • System development. Businesses must document the datasets and models they use, evaluate and validate AI systems' outputs, and build mechanisms for human oversight and monitoring. 
  • System deployment. When deploying AI systems, companies must have risk mitigation strategies and implement continuous monitoring.
  • Operating the system. Once in production, AI system operators must log and monitor the system's outputs and ensure human oversight.

Examples of high-impact AI systems. Source: The Artificial Intelligence and Data Act (AIDA) – Companion document

Until AIDA becomes an enforceable law, companies can follow the Voluntary Code of Conduct that provides common standards for responsible development and use of generative AI systems, including accountability, safety, transparency, and human oversight and monitoring.

At the same time, there have already been legal cases involving conversational AI systems. Air Canada was ordered to compensate a passenger after its chatbot provided incorrect information about refund policies. This highlights AI hallucinations as one of the risks AI system developers must address and test for.
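
A lightweight safeguard is to add regression tests that check a chatbot's answers against the documented policy before each release. The ask_support_bot client and the policy phrases below are hypothetical; in practice, teams often complement such keyword checks with LLM-based evaluators.

```python
def ask_support_bot(question: str) -> str:
    """Hypothetical client: call your deployed chatbot. Canned answer for illustration."""
    return "You can request a refund within 90 days by submitting the refund form."

# Key phrases the documented policy requires the answer to reflect (illustrative).
REFUND_POLICY_FACTS = ["within 90 days", "refund form"]

def test_refund_policy_answer():
    answer = ask_support_bot("Can I get a refund for my booking?").lower()
    missing = [fact for fact in REFUND_POLICY_FACTS if fact not in answer]
    assert not missing, f"Answer omits or contradicts policy details: {missing}"

test_refund_policy_answer()
```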

Other regulations affecting AI systems are the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).

Brazil

Brazil is currently developing a legal framework to regulate AI. According to the proposed bill (in Portuguese, translation available), AI providers and operators must ensure the security of AI systems and their compliance with individual rights, including:

  • Informing people when they’re interacting with AI,
  • Preventing discriminatory biases,
  • Ensuring compliance with data protection laws,
  • Properly separating and managing data for training, testing, and validating AI systems,
  • Implementing information security measures at all stages of the AI system lifecycle.

Like the EU AI Act, the bill adjusts its requirements for AI systems based on their assessed risk level. It requires that AI systems pass a preliminary risk check before being used. If a system is flagged as high-risk, it will need an algorithmic impact assessment and governance to ensure its safety. Examples of high-risk AI systems include autonomous vehicles, healthcare applications, and biometric identification systems. AI systems used in automated employment decisions, professional training, and credit scoring are also considered high-risk. 

Additional regulations apply to providers and operators of these systems. For example, they must use automatic logging to evaluate system performance and detect biases, perform reliability tests, and enable the explainability of AI outputs.

Other regulations that may affect the development and use of AI systems are the General Data Protection Law, which protects personal data; the Consumer Protection Code, which regulates relations between consumers and suppliers; and intellectual property laws.

Japan

Similarly to the UK, Japan is taking a soft law approach to regulating AI. The government published AI Guidelines for Business that are not legally binding but encourage companies to follow the principles of responsible AI. In addition to generally recognized AI principles like safety, transparency, and accountability, the guidelines include more specific recommendations for AI systems' developers, providers, and business users. 

For example, AI developers should deploy a data management system that controls access to sensitive data to protect privacy, while AI business users should consider bias in input data or prompts to ensure the fairness of AI systems. 

Positioning of the guidelines. Source: AI Guidelines for Business

One of the Japanese political parties has also suggested an enforceable AI law. The proposed bill targets foundation models and suggests obligations similar to the voluntary commitments made by AI companies in the US. For example, under the bill, developers of foundation models with significant social impact would conduct red team testing and share the results with the government. If passed, the bill would shift Japanese AI regulation from soft to hard law.

Other regulations that may affect the development and use of AI systems are the Digital Platform Transparency Act and the Act on the Protection of Personal Information.

Summing up

AI regulations evolve rapidly, and they’re shaping the industry's future. Governments demand safer, fairer, and more transparent AI, and users expect the same. For teams building AI-powered products, these requirements involve systematic changes in how AI apps are developed and operated. 

To comply, you can start by examining how your AI system interacts with users and perform a risk assessment of what could go wrong. Even if the AI systems you are currently building are not considered high-risk, the regulation might evolve in the future. 

Industry dynamics and competitive pressures may also set the tone for AI transparency, testing, and observability expectations. Forward-looking companies may need to adopt higher standards proactively to remain competitive and build user trust, regardless of formal regulations. To stay ahead, you need to implement robust testing, evaluation, and monitoring processes for your AI systems. 

________

Evidently Cloud is a collaborative AI observability platform for teams working on AI-powered products, from chatbots to RAG. It is built on Evidently, a trusted open-source framework for evaluating and observing ML and LLM systems with over 25 million downloads. With Evidently Cloud, you can trace your AI-powered app, create synthetic datasets for effective testing, run LLM evaluations, and track AI quality over time. Get a demo to see the platform in action!

Disclaimer: This blog is for informational purposes only. It is not intended to and does not constitute legal advice. Readers seeking legal advice should consult a qualified attorney or legal professional.
