February 19, 2025
3 mins read

How to avoid being a victim of scams with voices cloned by AI

In recent days, the Ministry of Government reported the dismantling of a criminal organization that used artificial intelligence to clone the voice of the Minister of Education, Omar Véliz, and scam people with promises of fake jobs. The criminals operated from prison and managed to collect at least Bs 5 million from 19 victims.

Voice cloning is a technology initially designed for legitimate uses, such as virtual assistants or audiobook narration. However, its misuse has become a growing concern.

“Before this technology appeared, video and audio recordings were considered solid evidence in the courts. However, with the ability to manipulate voices so convincingly, it has become harder to determine whether a voice is authentic or has been altered with AI,” says William Llanos, a law professor at Franz Tamayo University (Unifranz).

This problem is not new. In the United States, the Federal Trade Commission (FTC) reported that in 2022 telephone scams using voice cloning caused losses of up to 11 million dollars, with more than 36,000 complaints. In another case, a group of cybercriminals used this technology to steal 35 million dollars from a bank.

In the case recorded in Bolivia, the criminal group, made up of seven people, spread messages on social media and used the minister’s cloned voice to convince victims to pay an initial amount of Bs 3,500 via QR code. So far, five members of the organization have been identified, three of whom are already in custody.

The accused face charges of fraud, extortion, aggravated fraud against multiple victims, and criminal organization.

Regulations in Bolivia

While AI voice cloning poses a global challenge, Bolivia already has laws that protect the voice as part of the right to one’s own image.

Article 21 of the State Political Constitution (CPE) recognizes every citizen’s right to their own image and voice, guaranteeing compensation and redress in case of violation.

In addition, Article 16 of the Civil Code establishes that, if a person’s image or voice is used without their consent in a way that injures their reputation, a judge can order the immediate cessation of that use.

However, there is currently no specific legislation regulating the use of artificial intelligence for voice cloning with criminal intent. Llanos argues that updating Bolivian regulations is urgent.

“Faced with new technologies, it is necessary for legislators and technology experts to work together to develop regulations that address these challenges.”

A possible solution would be to prohibit the creation and dissemination of cloned voices without consent, along with developing AI-based detection tools to stop their spread.

How to identify and protect yourself from fake audio?

Given this growing risk, it is essential to learn to recognize AI-generated audio. Some warning signs include:

  1. Audio quality

AI-generated audio often has lower quality. Although artificial intelligence tools are constantly improving, it is still common to notice inconsistencies in recording quality.

It is important to listen carefully for changes in the tone, volume, or clarity of the voice. Sudden fluctuations can be a sign that the audio has been manipulated or artificially generated.
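The volume check described above can be sketched as a crude heuristic. This is illustrative only, not a real deepfake detector: the functions `frame_rms` and `sudden_jumps`, the frame size, and the jump ratio are all hypothetical choices for the sake of the example.

```python
import math

def frame_rms(samples, frame_size=400):
    """Split a sample sequence into fixed-size frames and return
    each frame's RMS energy (a rough measure of loudness)."""
    rms = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return rms

def sudden_jumps(rms, ratio=3.0):
    """Count frame-to-frame energy jumps larger than `ratio`,
    a crude proxy for the abrupt volume shifts described above."""
    jumps = 0
    for prev, cur in zip(rms, rms[1:]):
        lo, hi = sorted((prev, cur))
        if lo > 0 and hi / lo > ratio:
            jumps += 1
    return jumps

# Toy signal: a steady quiet tone, an abrupt loud burst, then quiet again.
steady = [math.sin(2 * math.pi * 220 * t / 8000) * 0.2 for t in range(1600)]
burst  = [math.sin(2 * math.pi * 220 * t / 8000) * 0.9 for t in range(400)]
signal = steady + burst + steady

print(sudden_jumps(frame_rms(signal)))  # prints 2: one jump into and one out of the burst
```

Real detection tools use far more sophisticated features (spectral artifacts, prosody models), but the intuition is the same: authentic recordings tend to vary smoothly, while spliced or synthesized audio can show abrupt discontinuities.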

  2. Voice intonation

Voices generated by artificial intelligence still struggle to precisely replicate the full range of human emotion.

If the speech sounds monotonous or lacks the natural emotion you would expect in an authentic conversation, the audio could be fake.

It is crucial to pay attention to unusual intonation and pauses in the conversation, since these may indicate that the voice was machine-generated.

  3. Look for inconsistencies

AI-generated audio responses often contain inconsistencies or statements that make no sense in the context of the conversation.

Although artificial intelligence tools are advanced, they can make logical mistakes a human would not. Fake audio often contains absurd or incoherent statements, which may indicate an attempted fraud.

  4. Confirm identity

If you receive a suspicious audio message, it is always a good idea to verify the source. Seek additional confirmation through other channels, such as video calls or official social media accounts.

Legitimate entities generally have multiple forms of contact and will be willing to confirm the veracity of the message. Unavailability or refusal to provide other forms of verification should raise an alarm.

  5. Be wary of suspicious requests

Scammers who use AI-generated audio will ask their victims for sensitive information, passwords, or immediate payments. Always be wary of any message requesting personal or financial data, especially when it comes with pressure to act quickly.

Fraudulent audio often tries to create a sense of urgency to reduce your ability to think critically. Never share personal or financial information based solely on an audio message.

The cloning of Minister Véliz’s voice is a clear example of how AI voice cloning is transforming the scam landscape. Lack of awareness of this technology and the absence of specific regulations increase the risk that more people will become victims.

It is essential that the public adopt security measures, that authorities update the laws to address this new crime, and that digital education be promoted to reduce vulnerability to these technological threats.

“Criminals 2.0 represent a challenge for security in the digital era. It is crucial to develop preventive strategies to confront this type of crime,” says Llanos.

Given the growing sophistication of AI-powered scams, the key to avoiding becoming a victim lies in information, prevention, and the adoption of security measures.
