In its urgency to minimize the risks of AI, the bill fails to account for the complexity of the technology
The Legal Framework for Artificial Intelligence is on its way to being approved by the Senate. Amid the urgent need to protect users from the potential risks of AI, Brazil is stepping up its pace to regulate a nascent technology.
Without opening up space for discussions with civil society, the country risks creating an erroneous legal framework that does not take into account the capabilities, threats and opportunities of AI.
In preparation since 2022, Bill (PL) 2338/23, or the AI Legal Framework, was introduced by Senator Rodrigo Pacheco (PSD/MG) last year. Inspired by the European Union's recently approved AI Act, the bill defines rules for categorizing the risks posed by AI systems and lays out concepts and foundations for developing the use of the technology.
In Europe, the law will only be put into practice in 2026. In Brazil, the bill is expected to reach the Senate plenary in the next two weeks, with plans to pass through the Chamber of Deputies and Congress by the end of the year. The rule is prescriptive rather than directive in nature. In other words, it is more focused on reinforcing rules and punishing those responsible than on creating principles to regulate technology.
One of the actions of the Legal Framework is, for example, to create the National AI Council to monitor and enforce the law. Other provisions deal with the responsibilities of AI operators and possible fines for non-compliance with orders.
For Sílvio Meira, chief scientist at TDS Company and one of the founders of Porto Digital, the project "looks in the rearview mirror".
"Instead of a regulation that enables AI as part of the country's future strategy, the risk is to create a regulation that greatly limits Brazil's possibilities of competing in this socio-economic-political space, which tends to be the most relevant in the next 50 years", says Meira.
PL 2338/23 starts from the description of an artificial intelligence system as a "computing system with different degrees of autonomy, designed to infer how to achieve a set of objectives, using approaches based on machine learning and/or logic and knowledge representation (...) with the aim of producing predictions, recommendations or decisions that can influence the virtual or real environment".
The project does not consider other computing systems that are currently being developed, such as quantum ones. Nor does it look at the interaction that AIs can have between their own systems to create modifications in the functioning of the algorithms.
Furthermore, it focuses on using AI to recommend or predict, and not, for example, to automate tasks – an issue that will affect the job market.
According to the professor, who is one of the pioneers of AI research in Brazil, the project follows "outdated" precepts about the technology. "You can't measure the future with the yardstick of the past," he says. By following this path, the proposal confuses the use of the technology with the technology itself.
Mistaking a technology for its use is like "confusing nuclear energy with nuclear weapons. While the first is a general-purpose technology, the second has clear risks and impacts for society. The two are not and cannot be regulated in the same way," says Meira.
HUMANS FIRST
For government representatives, the AI Legal Framework is "future-proof" precisely because it is not based on a technological vision, but on user protection. According to the secretary of digital policies at the Presidency's Social Communication Secretariat, João Brant, the risk analysis model allows for constantly adapted and updated responses.
"Human life, the right to non-discrimination and other values that the project seeks to protect are issues that transcend time. This is the problem today and will be the problem in 10, 20 years," argues Brant.
The project defines protections against discrimination and against gender and racial bias by technologies, with descriptions and distinctions of the impacts and consequences of AI systems that discriminate based on race, gender and sexuality. Bianca Kremer, a postdoctoral researcher at the Geneva Graduate Institute (IHEID), explains that this concern is a strong point of the AI Legal Framework. She is a full advisor to the Brazilian Internet Steering Committee (CGI.br) and holds a PhD in digital law, specializing in privacy and data protection, artificial intelligence, and algorithmic biases.
According to the lawyer, the AI Legal Framework has the necessary rigor to demand transparency and action from technology companies, while maintaining the ethical use of tools. For technology experts, however, protecting the user is essential, but it is not enough to create a complete legal framework for a general-purpose technology like AI.
GLOBAL SOUTH AI
Two other important pillars were left out of the project: employment/work and competition/competitiveness, says the chief scientist at the Institute of Technology and Society (ITS Rio), Ronaldo Lemos.
Of the document's 43 pages, only a third describes measures to foster innovation. The AI Legal Framework, in the version now being voted on, has 45 articles; of these, only two address the creation of experimentation environments for the technology.
There is no mention of education for retraining professionals, nor of support for emerging AI systems in Brazil. This week, President Luiz Inácio Lula da Silva defended the creation of AI from the Global South in a speech at the headquarters of the International Labour Organization (ILO). In the view of Lemos and Meira, if the AI Legal Framework is approved, that vision will be difficult to achieve.
"The way the proposal [Bill 2338/23] stands, there is a great risk that Brazil will become just another consumer of AI," says Meira. This is because the project foresees an assessment of systemic risks before the introduction of new AI systems into the market, which could slow down the creation of Brazilian technology in this area.
Areas deemed high-risk for AI include autonomous vehicles, public safety and access to financial services, as well as education and training systems, the administration of justice and infrastructure management – important topics in discussions about the country's future.
In addition, the document also requires human oversight of high-risk AI systems. According to the rule, "people responsible for oversight must understand the capabilities and limitations of the AI system, properly monitor its operation, so that signs of anomalies, dysfunctions and performance can be addressed as quickly as possible."
The requirement does not take into account the unpredictability of AI neural networks. AI learning, especially in models trained on large datasets, advances so quickly and exponentially that not even the scientists who created the systems can predict or explain every decision they make.
CLOSED DOORS?
From Bianca Kremer’s point of view, regulation is necessary to prevent the market from reaching extreme situations. She points out that the current state of social networks is proof of what can happen when regulations are not tough enough or take too long to be approved – which underscores the urgency of establishing rules.
"The project is important, mainly to contain the violations of rights that are already happening on a very large scale. Those who feel these impacts are in a hurry," says the researcher.
If the internet is any guide, it is worth remembering that the Civil Rights Framework for the Internet (Marco Civil) was approved in 2014, after nearly six years of open discussion. That did not happen with Bill 2338/23, which remained under open discussion for less than a month.
"A law like this cannot be created by a closed group and with a government that fails to act as a catalyst for a broad debate," says Ronaldo Lemos, who also participated in the discussions on the Civil Framework.
Society as a whole can be impacted by AI and should be participating in this conversation. For Sílvio Meira, it takes time to test the paths of AI. "It is time to experiment, discuss and establish an AI strategy for Brazil, and a science, technology and innovation policy for AI."
For the scientist, the current moment in AI calls for less restrictive and more proactive actions on the part of the government. Instead of laws made by the Senate, he suggests state policies and more discussions about what AI is and how we want to build it. The expert advocates an approach that prioritizes transparency, “explainability” and reversibility of AI systems.
In other words, ensuring that the technology is transparent, that its workings can be explained, and that its decisions can be reversed when they cause harm. To put these principles into practice, the first step is to discuss the technology and its impacts on society.
Regulation would come later, as a consequence of this understanding. In Lemos' view, Brazil should open the conversation to its international peers, since the entire world is being challenged by the advance of AI.