Late last month, the EU formally adopted the EU AI Act, the first comprehensive and legally binding AI framework adopted anywhere in the world. Somewhat like the GDPR, the EU AI Act has extraterritorial application, and it is likely to become the international legislative standard that countries around the world replicate when developing their own AI legal frameworks.
The emergence of AI technology has significant implications from a variety of legal perspectives: from privacy and intellectual property, to safety and criminal liability. At first glance, the EU AI Act might appear to be an extension of the GDPR as privacy legislation, but it is in fact closer in structure and approach to an EU product safety regulation: it adopts a risk-based approach under which the regulatory burden imposed on an AI system depends on the level of risk posed by that system. Accordingly, the EU AI Act identifies four levels of risk:
- Unacceptable Risk AI Systems: The AI Act completely bans AI systems that are incompatible with EU fundamental rights, such as social scoring systems, predictive policing, and real-time remote biometric identification systems. Some of these systems are already used in places such as China, and they are now banned in the EU.
- High-Risk AI Systems: Systems that can negatively affect safety or fundamental rights if misused, such as those used for the identification of persons, in education, or to determine access to services and benefits. Such systems are subject to stringent rules, including requirements for assessments before the systems are introduced into the EU market.
- Limited Risk AI Systems: Systems that pose a limited risk of manipulation or deception, such as chatbots, are subject to less stringent rules, chiefly transparency requirements such as disclosing to users that they are interacting with an AI system.
- Minimal Risk AI Systems: A system that does not fall under any of the previous categories is considered a minimal risk system; the example commonly used is spam filters. Such systems are not subject to any obligations under the Act.
The EU AI Act was in the making for a number of years before ChatGPT rose to popularity, and accordingly, it does not use or refer to the concept of generative AI. However, it has special provisions for General Purpose AI systems and imposes additional transparency obligations on them, with stricter rules where an AI system is powerful enough to meet the Act's definition of "systemic risk". It is also worth noting that the EU AI Act briefly mentions that providers of General Purpose AI must comply with copyright law.
The nature of the obligations imposed under the EU AI Act depends on whether a person is considered a Provider or a Deployer of an AI system. The Provider is the person responsible for developing the AI system, while the Deployer is the person using an AI system developed by another. Under the EU AI Act, the most burdensome obligations are imposed on the Provider, not the Deployer.
The fines for violating the EU AI Act are significant, reaching up to 35 million Euros or 7% of a company's annual turnover, whichever is higher. The EU AI Act will apply in phases, with full implementation taking place after two years.
The EU AI Act is likely to have implications for Oman in a number of ways. Omani companies doing business in Europe, such as Oman Air and others, will have to make sure that they comply with the EU AI Act when deploying AI systems in their operations, especially as the law applies to some common public-facing AI tools such as chatbots. Provisions of the EU AI Act are also very likely to be directly or indirectly transplanted into the Omani legal system. Given the lack of any other comprehensive legal framework for governing AI that could serve as a benchmark, Omani policymakers might adopt some of the concepts found in the EU AI Act when developing an Omani framework for the governance of AI. It would also not be surprising for the GCC to develop a pan-GCC legal instrument governing AI, whether a treaty or a model law, that draws on concepts found in the EU AI Act.
The Omani Personal Data Protection Law is an example of how provisions of a European law were transplanted into the Omani legal system. However, this same law also illustrates the unwillingness of the Omani government to impose restrictions on itself, even when this is necessary for the protection of fundamental rights. For example, the Omani Personal Data Protection Law completely exempts the government from its scope, leaving members of the public with no recourse against the government when government employees violate data protection rights.
It will not be surprising, then, if the Omani government chooses not to limit its own ability to use any AI system available to it, including those that the EU has decided to ban outright, such as social scoring systems and predictive policing.
You can read the EU AI Act in full on this link.