The world's first Artificial Intelligence Act

Aug 22, 2024 | Activities

On March 13, 2024, the world’s first Artificial Intelligence (AI) Act was passed by the European Union (EU) Parliament. This Act sets both “hard” boundaries and “soft” mechanisms that Vietnam can refer to when formulating policies regarding AI technology.

  • The world's first Artificial Intelligence Act was passed by the European Union (EU) Parliament on March 13, 2024.

  • The Act establishes boundaries for the development of AI in relation to human development.

  • From this, the lesson for Vietnam is that strategy adjustments, policy planning, and solution development need to be specific and timely.


AI Act – One Law, Two Objectives, and Three Noteworthy Points

The AI Act was passed amid ongoing global debates about the role and real impact of AI and the appropriate ways to regulate this technology.

The AI Act appears to carry the high expectations of policymakers in the EU, a region that has long been at the forefront of establishing new legal standards for digital technologies. The Act, consisting of 113 articles across 459 pages (including the introduction and annexes), pursues two objectives at once: (1) ensuring human safety, including the protection of citizens’ fundamental rights, democratic values, the rule of law, and environmental sustainability against the challenges posed by AI, and (2) promoting innovation so that the EU becomes a leading region in this field.

To achieve these dual objectives, EU lawmakers have designed an AI management mechanism based on the assessment of the technology’s risks to humans. There are three noteworthy points:

First, the AI Act sets “hard” boundaries for the development of AI in relation to human development. Human existence, freedom, safety, and dignity are the highest protected values, so AI applications that infringe upon them are banned in the EU, except in certain special cases subject to strict requirements. Lawmakers are concerned not with what an AI technology is called or which algorithms it is built on, but with the purpose for which the technology is applied in practice.

For instance, AI applications may not be used to recognize emotions in workplaces and schools, to conduct social scoring, to create or expand facial recognition databases, to exploit human vulnerabilities (such as age, economic or social status, or disability) in order to manipulate behavior, or to run biometric categorization systems that infer a person’s race, political views, religion, or sexual orientation.

As an exception, law enforcement agencies are permitted to use real-time biometric recognition systems in public areas under three conditions: (1) the situation falls within specific cases prescribed by law, such as a targeted search for victims of human trafficking, a search for missing persons, or the prevention of a terrorist attack; (2) the gravity of the situation is weighed against the consequences of using, or not using, the system, and the safeguards required by national law are in place if it is used; and (3) approval is obtained from a judicial or independent administrative authority before use. In emergencies, law enforcement may use the system first and seek approval within 24 hours; if approval is not granted, all data must be deleted immediately.

Additionally, for other AI applications that are not banned, the AI Act imposes a series of new obligations requiring transparency and risk control from AI developers and users. For example, AI applications used to manage and operate critical infrastructure systems, roads, and essential services (electricity, water, gas) are classified as high-risk and must be subject to human oversight.

General-Purpose AI systems such as ChatGPT must comply with copyright rules and publish summaries of the data used for training. General-Purpose AI models that pose systemic risks must meet additional requirements, such as model evaluations, systemic-risk assessments, and incident reporting.

Second, the AI Act provides “soft” mechanisms to support innovation and the development of AI, foremost among them the testing mechanism. Compared with the 2021 draft, the provisions on testing in the final text have grown from three articles to seven, and the content of each article has also been significantly expanded, in order to create a safe and stable legal environment for AI developers and users.

The AI testing mechanism takes two forms: (1) the regulatory sandbox, which allows developers to train and test AI within a defined timeframe and space under the supervision of a competent state authority before bringing products and services to market, and (2) real-world testing outside the sandbox, which allows providers to test AI at any time before bringing services and products to market, provided they obtain approval and oversight from the market surveillance authority.

Moreover, EU lawmakers encourage AI developers, professional associations, social organizations, and academic research institutions to participate in developing codes of conduct for non-high-risk AI. At the same time, the EU Commission will develop guidelines for the implementation of the AI Act, paying particular attention to the needs of small and medium-sized enterprises and startups.

Third, the AI Act serves as the legal basis for establishing organizations to ensure or support the implementation of the Act. At the regional level, the EU will establish four:

  • the Artificial Intelligence Board, to oversee the implementation of the AI Act;

  • the AI Office, to oversee the development of AI expertise;

  • the Advisory Forum, composed of stakeholders (businesses, social organizations, and the academic research sector), to provide technical expertise and advice to the Board and the EU Commission; and

  • a Scientific Panel of independent experts, to support the enforcement activities of the Act.

At the national level, each EU member state must establish or designate two agencies: a market surveillance authority to ensure that AI services and products put into use comply with legal regulations, and a notifying authority to assess, designate, notify, and monitor the activities of conformity assessment bodies.

Lessons for Vietnam

In the nearly 30 years since Vietnam officially connected to the Internet in 1997, digital technology has permeated society and reshaped the social relationships that law governs.

Only around 2020 did Vietnam begin amending and issuing a series of legal documents to address the changes that digital technology has brought to legal relationships: the 2022 Cinema Law, the 2022 Intellectual Property Law (amended), the 2023 Telecommunications Law, the 2023 Electronic Transactions Law, Decree 85/2021/ND-CP amending and supplementing Decree 52/2013/ND-CP on e-commerce, and Decree 13/2023/ND-CP on the protection of personal data. Further documents are expected, including the 2024 Advertising Law (amended), a decree replacing Decree 72/2013/ND-CP on the management, provision, and use of Internet services and online information, and a Law on the Protection of Personal Data.

If legal policy continues to lag about 20 years behind the emergence and development of technology, Vietnam will only begin updating its legal framework for the changes brought about by AI around 2030-2040. By then, will policy still play a role in promoting development and managing risks?

Given the speed of AI development and the current state of human readiness, Vietnamese policymakers should consider adjusting the 2021 national AI strategy and developing specific solutions to harness AI development for the economy and manage AI’s risks to society. In this context, “soft” mechanisms such as regulatory sandboxes, product testing mechanisms, and codes of conduct for safe AI development should be considered.

Guidelines for safe AI development could recommend against applying AI for certain specific purposes, mirroring the “hard” boundaries of the AI Act, since such applications would violate current criminal, civil, and administrative law.

The developments in AI technology are creating an uncertain future for everyone. Therefore, the appropriate level of state regulation, with clear policy direction, will play an important role in shaping the future of the nation.

Author: Nguyen Quang Dong - Institute for Policy Studies and Communication Development

(This translation was provided by an automated AI translation tool)