The vote on the bill to regulate Artificial Intelligence (AI) in Brazil was postponed in a Senate committee this Tuesday (3) because opposition lawmakers object to the requirement that AI systems respect the integrity of information and combat disinformation. The special committee created to analyze the matter was given until next Thursday (5) to reach a consensus.
“There are still points in the report whose wording raised questions from the opposition and from the government. On these points, the rapporteur agreed to give both sides 48 hours to express themselves, and we will vote next Thursday (5),” explained the committee's president, senator Carlos Viana (Podemos-MG).
According to the rapporteur, senator Eduardo Gomes (PL-TO), opposition lawmakers oppose the sections of the bill that require the integrity of information produced by artificial intelligence.
Article 2 of Bill 2,338 of 2023, authored by the president of the Senate, Rodrigo Pacheco (PSD-MG), establishes that the development and use of AI systems must be based on the “integrity of information, through the protection and promotion of the reliability, precision and consistency of information”.
Elsewhere, the bill establishes that, before an AI system can be placed on the market, it must be demonstrated through testing that the system can identify and reduce risks to the integrity of information and guard against “the spread of misinformation and of speeches that promote hatred or violence”.
Debate
As occurred in the debate over the so-called fake news bill during its passage through the Chamber of Deputies, opposition lawmakers have argued that including these provisions could lead to censorship of content on the internet.
Senator Marcos Rogério (PL-RO) presented an amendment to strike the information-integrity requirement for AI systems, claiming that the authority created to supervise AI in Brazil would gain the power to define which content platforms would have to remove.
When amending the text, rapporteur Eduardo Gomes stated that “the concept of information integrity was revised to make it clear that it is instrumental in promoting freedom of expression, and must not be used for purposes of censorship or violation of other fundamental rights”.
Digital law specialist Alexandre Gonzales, of the Coalizão Direitos na Rede (Rights on the Network Coalition), an organization that brings together 50 entities that defend rights on the internet, told Agência Brasil that the argument equating the fight against disinformation with censorship does not hold, because the analysis of information integrity would not be carried out on specific cases or profiles at risk of censorship, but on the AI system as a whole.
“This part of the project requires large companies, through the authority that will coordinate this governance and regulation process, to present a minimum assessment report on how they perceive their systems to be performing in relation to a series of possible risks”, he explained.
Also on Tuesday (3), the bill's rapporteur, Eduardo Gomes, excluded AI systems used by digital platforms, the so-called big techs, from the list of AIs considered high risk.
The bill
Authored by the president of the Senate, Rodrigo Pacheco (PSD-MG), the text establishes fundamental principles for the development and use of AI. It defines that the technology must be transparent, safe, reliable, ethical and free from discriminatory bias, respecting human rights and democratic values. The bill also requires that technological development, innovation, free enterprise and free competition be taken into account.
The bill also defines which AI systems should be considered high risk and therefore need stricter regulation, in addition to prohibiting the development of technologies that cause harm to health, safety or other fundamental rights.
It also prohibits public authorities from creating systems that classify or rank people based on social behavior for access to goods, services and public policies “in an illegitimate or disproportionate way”, or that facilitate the sexual abuse or exploitation of children and adolescents.
Governance
To monitor the application of the legislation, two governance structures are planned. The first is the National AI Regulation and Governance System (SIA), responsible for “exercising full normative, regulatory, supervisory and sanctioning competence over the development, implementation and use of artificial intelligence systems for economic activities in which there is no specific sectoral regulatory body or entity”.
The SIA will also be responsible for regulating the classification of high-risk AI systems, those subject to stricter monitoring, including permanent analyses of algorithmic impact, that is, assessments of how the algorithm is behaving.
The other body is the Permanent Regulatory Cooperation Council (CRIA), linked to the Ministry of Labor, which is to regulate labor relations affected by AI. Among CRIA's objectives are valuing collective bargaining and enhancing the positive effects of AI on workers, in addition to “mitigating the potential negative impacts on workers, especially the risks of job displacement and career opportunities related to AI”.
In addition to these structures linked to the Executive Branch, agents who work with AI must, according to the bill, maintain internal governance structures and analyses of the potential risks posed by the tools they develop. These private actors will also need to classify AI systems by risk level, with stricter application and monitoring rules for systems considered high risk.