At a public hearing in Brasília (DF) this Wednesday (22), researchers and members of civil society organizations voiced their opposition to the new policies of the company Meta, which changed its content moderation practices and now permit the publication of prejudiced content. Representatives of the digital platforms were invited but did not attend. The company controls Facebook, Instagram and WhatsApp.
At the public hearing, held by the Attorney General’s Office (AGU), researchers drew attention to the fact that these policies compound the difficulties faced by already vulnerable groups. Professor Rose Marie Santini, director of the internet studies laboratory at the Federal University of Rio de Janeiro (UFRJ), stated that the company’s decisions to overhaul its fact-checking programs and relax the moderation of hate speech represent a threat to society.
For her, a very significant change announced by Meta’s president, Mark Zuckerberg, concerned the algorithms that decide which voices will be amplified and which silenced. “These algorithms, which program content curation and moderation, operate without any transparency about their actual criteria. We don’t know which content is effectively moderated,” she pondered.
The professor stated that the moderation criteria the company has disclosed show “serious inconsistencies”. “This opacity undermines public trust in the company’s real concern for freedom of expression. After all, freedom is only effective when accompanied by transparency,” she argued.
For the researcher, this type of moderation grants freedom only to people chosen by the company. “The companies’ discourse suggests that censorship could only come from the State. In today’s reality, however, digital platforms constitute the main structure of censorship of users on the internet.”
She noted that these large platforms hold more information about their users than any State holds about its citizens. “(Companies) use people’s data, including sensitive data, to distribute personalized advertisements, regardless of whether those ads are legitimate, whether they involve crimes of any kind, or whether they put users at risk.”
Sexist content
Law professor Beatriz Kira, of the University of Sussex, in the United Kingdom, assessed that the way platform algorithms prioritize engagement contributes to the dissemination of sexist and misogynistic content that would not have the same impact were it not for the internet. “Emerging technologies such as generative artificial intelligence have aggravated this scenario, facilitating new forms of violence.”
She cited the dissemination of intimate content, such as deep nudes, highlighting the strategic use of this material to reinforce gender-based violence in the political sphere. “In this context, the recent changes to hate speech policies and the push to automate content moderation are deeply concerning. These changes highlight the urgent need for a more active role for the State in regulating digital platforms.”
Attention to children
The director of policies and children’s rights at Instituto Alana, Pedro Hartung, highlighted that content moderation by platforms to protect children and prevent violence is not only a necessity but also a constitutional duty. “In the case of children, we already have legislation that grounds strict-liability actions against the platforms, both for their own conduct and for their omissions,” he argued.
Hartung noted that, in Brazil, 93% of children and teenagers use the internet and 71% use WhatsApp, with a significant share also on Instagram and TikTok. “This is an internet that is not a public square but rather a shopping mall, one that pursues an economy of attention and the commercial exploitation of children,” he explained.
As an example of the effects of this harmful content, he cited the attacks on schools in Brazil, mainly in 2023, mentioning that research by the State University of Campinas (Unicamp) assessed the influence of the online world on the radicalization of these teenagers. “It is extremely important for us to look at the moderation of conduct on platforms.”
Another topic of concern regarding childhood, according to Hartung, is the significant impact of advertising, as well as the growth of artistic child labor on the networks. “It is important to emphasize that the blame cannot be placed exclusively on families; it also falls on the companies.”
Violence against homosexuals
The president of the Brazilian Association of Lesbians, Gays, Bisexuals, Transvestites, Transsexuals and Intersexes, Victor De Wolf, also participated in the public hearing, stating that Meta’s moderation policy was already mistaken and intolerant. “We already see serious hate crimes happening, rapes, slander and scams. Our community is no exception.”
The Meta text setting out the new moderation policy explicitly states that users would be permitted to associate mental illness with matters of gender or sexual orientation.
“We are still the country that most persecutes the LGBT community in the world, especially transvestites and transsexuals. We are still the country with the most killings of this population,” he contextualized. For him, the justice system must play the role of holding accountable the networks that violate citizens’ rights. “The digital anarchy proposed by this group of businesspeople is, in fact, nothing more than a dictatorship,” he said.