Key Terminologies under Vietnam’s AI Law
The Law on Artificial Intelligence (AI Law), passed by the National Assembly on 10 December 2025, is arguably among Vietnam’s most anticipated pieces of legislation in 2025.
Unfortunately, similar to the Law on Digital Technology Industry, Vietnam’s AI Law still feels like half-baked legislation, which makes it hard to clearly identify the key players in the artificial intelligence (AI) value chain. This article examines several key terminologies under the AI Law.
1. Artificial Intelligence
“Artificial Intelligence means the implementation, through electronic means, of human intellectual capabilities, including learning, reasoning, perception, judgment, and natural language understanding.”
This definition seems to be borrowed (almost word-for-word) from Korea’s Basic Act on the Development of Artificial Intelligence and the Establishment of Foundation for Trustworthiness, which, translated into English, reads as: “Artificial Intelligence (AI) means the implementation of human intellectual abilities, such as learning, reasoning, perception, judgment, and language understanding, through electronic methods.”
We understand this as the government’s attempt to capture a technological concept in legal terms. However, we find the above definition of artificial intelligence raises a few issues:
Unascertained criteria: The AI Law fails to clarify what would count as an electronic implementation of learning, reasoning, perception, judgment, or understanding of natural language. For example, how could one determine that something electronically exerts the human intellectual ability of learning, what would it mean to understand natural language electronically (input-wise, output-wise, or both), and what even is “natural language” – does it cover anything beyond human languages;
Ambiguous conditions: It is also unclear whether something must exert all of the listed human intellectual capabilities in order to be considered “artificial intelligence”. If so, the government of Vietnam would seemingly not treat things that do not “understand” natural language as artificial intelligence (for example, special-purpose image-processing models that only process and output images according to set parameters, rather than natural language prompts, are unlikely to be considered as “understanding” natural language – see the sketch after this list); and
Linguistic error: There seems to be a mismatch between the concept being defined (i.e., intelligence) and the concept used to define it (i.e., implementation). Instead of defining the nature of artificial intelligence (what it is), the AI Law veers into describing the operation of artificial intelligence (how it functions).
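To make the image-processing example concrete, consider the deliberately toy sketch below (a fixed sharpening filter standing in for a special-purpose image model – the filter is hypothetical and far simpler than any real model). Its entire interface is pixel arrays in, pixel arrays out; there is no plausible sense in which it “understands natural language”, whatever its other capabilities.

```python
import numpy as np

def sharpen(image: np.ndarray) -> np.ndarray:
    """Toy image-to-image transform with fixed ("set") parameters."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    h, w = image.shape
    out = image.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.clip((patch * kernel).sum(), 0.0, 255.0)
    return out

image_in = np.random.rand(32, 32) * 255  # the input is an image, not a prompt
image_out = sharpen(image_in)            # the output is an image, not an answer
print(image_out.shape)                   # (32, 32)
```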
Since the “intelligence” nature of artificial intelligence is still debatable, we argue that this legal definition of a technological concept is redundant for the purpose of regulating AI technology. AI models are software artifacts (code + weights) that require physical hardware to be stored and executed; any deployed AI system therefore depends on compute infrastructure. Therefore, regulating the artificial intelligence systems (the what) and their function (the how) should be enough for governance purposes.
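To make the “code + weights” view concrete, here is a minimal, hypothetical sketch: the “weights” are nothing more than an array of learned numbers persisted as an ordinary file, the “code” is the function that applies them, and both must be loaded onto physical hardware before anything can run.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
weights = rng.normal(size=(4, 2))        # the "weights": learned parameters
bias = rng.normal(size=2)

def predict(features: np.ndarray) -> np.ndarray:
    """The "code": applies the stored weights to input data."""
    return features @ weights + bias

np.save("model_weights.npy", weights)    # weights persist as an ordinary file...
restored = np.load("model_weights.npy")  # ...and must be loaded back into RAM
print(predict(np.ones(4)), restored.shape)  # inference runs on real hardware
```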
2. AI System
“An Artificial Intelligence system means a machine-based system designed to execute artificial intelligence capabilities with varying levels of autonomy, and capable of adaptation post-deployment; based on explicitly defined or implicitly formed objectives, [the system] infers from input data to generate outputs—such as predictions, content, recommendations, or decisions—that may influence physical or digital environments.”
Vietnam’s lawmakers seem to have adapted the European Union’s definition of AI system under the EU AI Act: “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
However, the AI Law fails to clarify the ‘autonomy’ and ‘adaptation’ of an AI system, even though both concepts have already been explained under the EU AI Act.
3. Developers
"A Developer means an organisation or individual that designs, develops, trains, tests, or fine-tunes all or part of the [artificial intelligence] model, [artificial intelligence] algorithm, or artificial intelligence system, and has direct control over [training] technical methodologies, training data, or model parameters."
Due to the ambiguity of the Vietnamese grammar in the definition, it is unclear (1) whether the term “artificial intelligence” also modifies “model” and “algorithm” in addition to “system”, and (2) whether the term “training” also modifies “technical methodologies” in addition to “data”. In machine learning terminology, one trains/fine-tunes models, while algorithms are the procedures used to train them; thus, ‘training/fine-tuning an algorithm’ is technically imprecise, as the sketch below illustrates.
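A minimal sketch of that distinction, using plain gradient descent on a one-parameter model (nothing here is prescribed by the AI Law; it is purely illustrative): the model is the parameter that training adjusts, while the algorithm is the fixed procedure doing the adjusting.

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([2.1, 3.9, 6.2])   # data roughly following y = 2x

w = 0.0                          # the MODEL: a single trainable parameter
for _ in range(200):             # the ALGORITHM: gradient descent
    grad = 2.0 * np.mean((w * xs - ys) * xs)  # gradient of mean squared error
    w -= 0.05 * grad             # the update rule never changes; only w does

print(round(w, 2))               # training produced a model with w ≈ 2.04
```

One trains or fine-tunes w (the model); it would be meaningless to speak of training the update rule itself (the algorithm).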
The AI Law also fails to provide legal definitions of the key terms used to define “developer”, including [artificial intelligence] model, [artificial intelligence] algorithms, [training] technical methodologies, training data, and model parameters.
Moreover, this broad and ambiguous definition arguably covers developers who provide AI components under a free and open-source license (Open Source Devs), even where those components are later integrated into a deployed AI system by another person. Meanwhile, the AI Law requires developers to be jointly responsible, along with the provider of an AI system, for that system’s function and malfunction. Therefore, as it stands, Open Source Devs might be unduly dragged into taking responsibility for something that they did not put into service, regardless of how little (if at all) their code contributed to the AI system’s malfunction. This might make Open Source Devs reluctant to allow Vietnamese AI system providers to use their code, which could in turn hamper the advancement of AI technology in Vietnam.
4. Provider
"A Provider means an organisation or individual that places an artificial intelligence system on the market or puts it into service under its own name, trade name, or trademark, regardless of whether the system was developed by itself or by a third party."
The AI Law once again fails to clarify what it means to place an AI system on the market or to put an AI system into service, even though both concepts have already been explained under the EU AI Act.
5. Deployer
"A Deployer means an organisation or individual utilising an artificial intelligence system under their control in professional, commercial, or service-provision activities; this excludes use for personal, non-commercial purposes."
It is unclear whether a state authority could be considered a deployer (i.e., whether its use of AI systems in performing public functions counts as usage in professional or service-provision activities). We argue that the public sector should be treated similarly to the private sector regarding the use of AI systems.
Furthermore, due to the broad wording of the definitions of user (as discussed below) and deployer, there is no clear separation between a deployer and a user. It is unclear whether the lawmakers intended for a person to be both a user and a deployer at the same time.
6. User
"A User means an organisation or individual that directly interacts with an artificial intelligence system or utilises the outputs generated by said system."
Due to the ambiguity of the Vietnamese grammar in the definition, it is unclear whether the word “directly” also modifies the clause “utilises the outputs […]”. The AI Law also fails to clarify what directly interacting with an AI system and [directly] utilising the outputs generated by an AI system mean. For example:
As an AI system has many layers, including hardware and software, what would be considered a direct interaction with an AI system? In the real world, most end users interact with AI products/services (e.g., ChatGPT, Claude, Gemini) rather than calling foundation models directly via an API (e.g., OpenAI GPT‑5.x / GPT‑5.2 series, Anthropic Claude Opus 4.5, Google Gemini 3 Pro) – see the sketch after these examples. If the lawmakers intended the term User to refer only to end users, the current definition is too broad;
How would the text generated by an AI system be considered “utilised” (e.g., at the time it is read by the user, or when it is incorporated into a text to be published by the user)?
If a person who did not directly interact with an AI system unknowingly “uses” generated text that another person shared with them, would that first person be considered a user?
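To sharpen the first example, below is a hedged sketch of a direct API call to a foundation model (the request shape follows OpenAI’s publicly documented chat completions API; the model identifier and helper function are purely illustrative). A developer writing this code is unambiguously interacting with the AI system; an employee who is merely handed the returned text is, at most, “utilising the outputs” – yet the current definition appears to capture both as users.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def ask_model(prompt: str, api_key: str) -> str:
    """Direct, programmatic interaction with a foundation model's API."""
    payload = {
        "model": "gpt-5.2",  # illustrative identifier, not a confirmed name
        "messages": [{"role": "user", "content": prompt}],
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

# Integrator: calls the API directly – plainly "direct interaction".
# End user: only reads text someone else generated – "utilising the outputs",
# but arguably not interacting with the system at all.
```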
It is also unclear how to determine whether an organisation has been directly interacting with an AI system or using its outputs. For example, if an employee of a corporation uses an AI service to generate a report for his/her work, would that be considered the corporation’s action?
This blog post is written by Le Thanh Nhat.