The provision that addressed Artificial Intelligence (AI) systems used by digital platforms, the so-called big techs, for the production, analysis, recommendation and distribution of content was removed this Tuesday (3) from the bill that regulates AI in Brazil.
AI systems that can cause harm to people or society are considered high risk. According to the rapporteur, Senator Eduardo Gomes (PL-TO), the removal of this section was agreed among the party caucuses to advance the text in the Senate Special Committee created to analyze the topic.
Bill 2,338 of 2023, authored by the President of the Senate, Rodrigo Pacheco (PSD-MG), determines that AI systems considered high risk must be subject to stricter governance, monitoring and inspection rules.
The text defines a series of cases as high risk, including artificial intelligence systems that affect people’s health and safety, such as in medical diagnosis, as well as those that can be used to select workers for jobs, to select students for an educational institution, or in public services.
The section referring to the big techs covered systems for the “large-scale and significantly automated production, curation, dissemination, recommendation and distribution of content by application providers, with the objective of maximizing the usage time and engagement of the people or groups affected”.
Bia Barbosa, advocacy coordinator for Latin America at the organization Repórter Sem Fronteiras, who has been pushing for the bill’s approval, assessed that the excerpt was removed due to pressure from digital platforms.
“It makes no sense to have an AI regulation bill that does not address recommendation and content moderation systems, which are high-risk systems. But the platforms, as they do in every country in the world, strongly oppose any regulation that may affect their business and, here in Brazil, they have a very significant ally: far-right parliamentarians,” the expert highlighted.
Barbosa cited mass misinformation in elections, the Covid-19 pandemic and attacks on democracy through social networks as examples of the damage that platforms’ AI content recommendation systems can cause to people and society.
Despite this change, the committee’s vote on the bill was postponed until next Thursday (5). The topic had been expected to be approved this Tuesday. The postponement occurred because there was no consensus among parliamentarians on the sections that require information integrity for AI systems.
The bill
The bill that regulates artificial intelligence in Brazil also establishes fundamental principles for the development and use of AI. It defines that the technology must be transparent, safe, reliable and ethical, free from discriminatory bias, and respectful of human rights and democratic values. The bill also requires that technological development, innovation, free enterprise and free competition be taken into account.
In addition to listing AI systems considered high risk, the bill prohibits the development of certain types of AI technologies that cause harm to health, safety or other fundamental rights.
The bill prohibits, for example, public authorities from creating systems that classify or rank people based on social behavior for access to goods, services or public policies “in an illegitimate or disproportionate way”, as well as AI systems that facilitate the abuse or sexual exploitation of children and adolescents.
High risk
According to article 14 of the bill, high-risk systems include those used in traffic control and in water supply and electricity networks “when there is a relevant risk to the physical integrity of people”.
Also considered high risk are AI systems applied in professional education and training to determine access to an educational institution or to monitor students, as well as systems used to recruit workers or decide on job promotions.
AI systems for “task allocation and control and evaluation of people’s performance and behavior in the areas of employment, worker management and access to self-employment” are also considered high risk.
Other examples are AI systems for assessing priorities in essential public services, such as firefighting and healthcare. Artificial intelligence systems used by the courts to investigate crimes, or that pose a risk to individual freedoms or the democratic rule of law, are also mentioned in the text.
AI systems in healthcare, such as those that assist in diagnoses and medical procedures, and those used in the development of autonomous vehicles in public spaces, are other examples of high-risk artificial intelligence systems listed in the bill.