contents
Artificial intelligence (AI) transforms industries and reshapes how we live and work. But as AI systems grow more powerful, how do we ensure they are safe, fair, and trustworthy? Governments worldwide are stepping up to guide how AI-powered products are built, deployed, and used.
The global regulatory landscape is evolving quickly. Some countries enforce strict laws, while others roll out voluntary guidelines. Despite these differences, the core message is that AI must be developed responsibly. This concerns not only foundation models like Google Gemini, Anthropic Claude, or OpenAI GPT-4 but also AI applications built on top of them.
If you’re developing AI-powered products—from chatbots to hiring assistants to image generation features—compliance with AI regulations is a crucial consideration. And it isn’t just a box to check; it translates into specific requirements that affect how you design, build, test, and monitor your AI systems.
In this guide, we’ll break down key AI regulations across major markets and explain what they mean for teams building AI-powered products. We’ll cover:
Let’s get started!
The Artificial Intelligence Act of the European Union—or EU AI Act for short—is the primary legislative framework that regulates the development and use of artificial intelligence in the European Union. It applies to providers and developers of AI systems located within and outside the EU if these systems are intended to be used in the EU.
The EU AI Act establishes distinct regulations for AI models and downstream AI systems and applications built on top of these models. For example, a large language model (LLM) like GPT-4o is a model, while a support chatbot is an AI system powered by it. In this guide, we’ll focus on regulations for AI systems and applications.
Let’s dive in!
The EU AI Act applies different rules to AI systems based on the risk they pose. Any AI system can be categorized into four possible risk levels.
Unacceptable risk. The AI Act bans AI systems that threaten safety or fundamental rights. Examples of prohibited practices include AI systems that manipulate people's behavior or exploit their vulnerabilities, social scoring systems, and systems for real-time remote biometric identification. (Article 5).
High risk. AI systems used in critical areas such as healthcare, education, employment, essential private and public services, and law enforcement are classified as high-risk. (See Article 6 and Annex III). Examples are AI systems used in educational and vocational training to determine admission or evaluate learning outcomes or AI systems used in credit scoring.
These high-risk systems are subject to regulatory requirements, including:
This framework considers both providers who make high-risk systems available at the market and deployers who use AI systems inside their organization. For example, deployers must monitor the operation of a high-risk AI system and keep logs. (Article 26).
Limited risk. Certain AI systems where users directly interact with AI trigger transparency obligations. Users must be informed that they interact with an AI system unless it is already obvious from the context. Deployers of emotion recognition systems must also inform users who are exposed to the operation of the system. Apps for manipulating and generating synthetic content like deepfake videos must mark such content as artificially generated. (Article 50).
Minimal risk. Other AI-powered systems, like AI recommender systems or spam filters, have minimal risk and can be deployed without regulatory restrictions. Although these systems are not directly regulated, the Act suggests they follow general AI ethics principles.
Table 1. Regulations for AI systems based on their risk level under the EU AI Act as published in August 2024.
The AI Act came into force in August 2024, but its provisions go into effect in stages. For example, prohibited AI practices are banned from February 2025, and the rules for high-risk AI systems take effect from August 2026. The penalties for non-compliance with the AI Act range from €7.5 million to €35 million or 1% to 7% of the company’s global annual turnover.
It’s worth noting that while the EU Act follows the risk-based approach, it also references the non-binding 2019 Ethics guidelines for trustworthy AI. These guidelines encourage all developers of AI systems to follow the seven core principles. They include human agency and oversight, technical robustness and safety (including resilience against attacks), privacy and data governance, transparency (with appropriate traceability), diversity, non-discrimination and fairness, and societal and environmental well-being.
Other regulations affecting AI systems. General Data Protection Regulations (GDPR) regulate data processing relevant to many AI systems. The updated Product Liability Directive (PLD) classifies AI systems as products, making their developers liable if defective AI-powered products cause harm. The European Commission also proposed an AI Liability Directive (AILD) to complement the PLD and address cases of negligence or failure to meet regulatory requirements for AI systems.
While existing federal laws have limited applications to AI, several frameworks and initiatives guide the development and use of AI systems in the United States. The Blueprint for an AI Bill of Rights is one of them. It was published in 2022 and outlines five principles of protecting Americans’ civil rights in the AI age:
Safe and effective systems. The Blueprint suggests that AI systems should be extensively tested before deployment and continuously monitored when in production. The ongoing monitoring should include evaluating performance metrics and testing the system’s outputs.
Algorithmic discrimination protection. Algorithmic discrimination occurs when AI systems contribute to unjustified treatment or disfavor people based on their race, sex, religion, age, disability, or other characteristics protected by law. To prevent it, developers of AI systems should use representative datasets and conduct equity assessments and disparity testing.
Data privacy. According to the Blueprint, individuals should be protected from privacy violations and have agency over how their data is used. To achieve that, developers of AI systems should collect personal information only if it is strictly necessary and ensure that consent requests are brief and clear. Enhanced data protection should be implemented in sensitive domains, such as health, employment, education, criminal justice, and finance.
Notice and explanation. Developers and deployers of AI systems should provide “accessible plain language documentation” that describes the general purpose of the AI system and its functions. They should also notify users that the AI system is in use and clearly explain its outputs.
Human alternatives, consideration, and fallback. The Blueprint suggests that users should be able to opt out of AI systems in favor of a human alternative, e.g., switch from a chatbot to a human agent. If the AI system fails or a user wants to appeal, fallback and escalation processes must be in place.
While the Blueprint for an AI Bill of Rights is not an enforceable law, it encourages companies to account for AI observability and data quality so users can interact with more reliable and secure AI products and applications.
The Executive Order on Safe, Secure, and Trustworthy AI outlines a framework to address the risks posed by AI systems while ensuring their development. Among other things, it requires developers of high-impact AI systems to share red-teaming results with the government. It also requires federal agencies to develop testing standards to ensure AI safety before public release and establish best practices for detecting AI-generated content.
There are also several state and local regulations for AI-powered products. For example:
Voluntary commitments from leading AI companies. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have committed to moving toward safe, secure, and transparent development of AI technology. These commitments include security testing of AI products before release, sharing best practices for AI risk management, investing in cybersecurity, informing users if they interact with AI-generated content, and researching mechanisms to protect users from bias and discrimination.
Other regulations affecting AI systems: the Privacy Act of 1974 protects personal information collected by the federal government; the Fair Credit Reporting Act protects consumer reporting data; the Gramm-Leach-Bliley Act (GLBA) covers data privacy for financial institutions; and the Health Insurance Portability and Accountability Act protects medical records.
The United Kingdom takes a deliberately agile and iterative approach to regulating AI. In 2023, the government published a white paper that formulated a framework on which the UK will build its AI policies. The document proposes five principles to guide the responsible development and use of AI:
Rather than immediately turning these principles into laws, the UK started with a flexible, non-statutory approach. This means there will be no strict new rules for businesses right away. Instead, existing regulators will apply these principles, adapting them to specific industries. This will safeguard the responsible use of AI without setting back its rapid development.
Along with the principles of responsible AI, the United Kingdom has also adopted the National AI Strategy. The document outlines a 10-year plan to make the UK a global AI superpower. It focuses on investing in the AI ecosystem, ensuring AI benefits all economic sectors, and governing AI effectively.
Other regulations affecting AI systems: Data Protection Act 2018 (the UK’s implementation of the General Data Protection Regulation), Consumer Rights Act 2015, Consumer Protection Act 1987.
Currently, Canada has no regulatory framework specific to AI. The Artificial Intelligence and Data Act (AIDA) was proposed in 2022 but is not yet a law. If accepted, the Act would be the key legislative framework that regulates the responsible development and deployment of AI systems in Canada.
AIDA is designed to ensure that AI systems are safe and non-discriminatory and that businesses are held accountable for AI systems under their control. It introduces the following requirements for high-impact AI systems at every stage of their lifecycle:
Until AIDA becomes an enforceable law, companies can follow the Voluntary Code of Conduct that provides common standards for responsible development and use of generative AI systems, including accountability, safety, transparency, and human oversight and monitoring.
At the same time, there have already been legal cases involving the use of conversational systems. Air Canada airline was ordered to compensate a passenger after a chatbot provided incorrect information about refund policies. This highlights AI hallucinations as one of the risks AI system developers must address and test for.
Other regulations affecting AI systems are the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).
Brazil is currently developing a legal framework to regulate AI. According to the proposed bill (in Portuguese, translation available), AI providers and operators must ensure the security of AI systems and their compliance with individual rights, including:
Like the EU AI Act, the bill adjusts its requirements for AI systems based on their assessed risk level. It requires that AI systems pass a preliminary risk check before being used. If a system is flagged as high-risk, it will need an algorithmic impact assessment and governance to ensure its safety. Examples of high-risk AI systems include autonomous vehicles, healthcare applications, and biometric identification systems. AI systems used in automated employment decisions, professional training, and credit scoring are also considered high-risk.
Additional regulations apply to providers and operators of these systems. For example, they must use automatic logging to evaluate system performance and detect biases, perform reliability tests, and enable the explainability of AI outputs.
Other regulations that may affect the development and use of AI systems are the General Data Protection Law, which protects personal data; the Consumer Protection Code, which regulates relations between consumers and suppliers; and intellectual property laws.
Similarly to the UK, Japan is taking a soft law approach to regulating AI. The government published AI Guidelines for Business that are not legally binding but encourage companies to follow the principles of responsible AI. In addition to generally recognized AI principles like safety, transparency, and accountability, the guidelines include more specific recommendations for AI systems' developers, providers, and business users.
For example, AI developers should deploy a data management system that controls access to sensitive data to protect privacy, while AI business users should consider bias in input data or prompts to ensure the fairness of AI systems.
One of the Japanese political parties has also suggested an enforceable law for AI. The proposed bill targets AI foundational models and suggests obligations similar to voluntary commitments for AI companies in the US. For example, under the bill, developers of foundational models with significant social impact would conduct red team testing and share results with the government. If passed, the bill will shift the Japanese AI regulation policies from soft to hard law.
Other regulations that may affect the development and use of AI systems are The Digital Platform Transparency Act and the Act on the Protection of Personal Information.
AI regulations evolve rapidly, and they’re shaping the industry's future. Governments demand safer, fairer, and more transparent AI, and users expect the same. For teams building AI-powered products, these requirements involve systematic changes in how AI apps are developed and operated.
To comply, you can start by examining how your AI system interacts with users and perform a risk assessment of what could go wrong. Even if the AI systems you are currently building are not considered high-risk, the regulation might evolve in the future.
Industry dynamics and competitive pressures may also set the tone for AI transparency, testing, and observability expectations. Forward-looking companies may need to adopt higher standards proactively to remain competitive and build user trust, regardless of formal regulations. To stay ahead, you need to implement robust testing, evaluation, and monitoring processes for your AI systems.
________
Evidently Cloud is a collaborative AI observability platform for teams working on AI-powered products, from chatbots to RAG. It is built on Evidently, a trusted open-source framework for evaluating and observing ML and LLM systems with over 25 million downloads. With Evidently Cloud, you can trace your AI-powered app, create synthetic datasets for effective testing, run LLM evaluations, and track the AI quality over time. Get demo to see the platform in action!
Disclaimer: This blog is for informational purposes only. It is not intended to and does not constitute legal advice. Readers seeking legal advice should consult a qualified attorney or legal professional.