The public use of an advanced chatbot opened the debate on artificial intelligence

The proliferation of this type of program brings a new debate about the consequences of the massive application of these technologies / Photo archive.

The opening to the general public, at the end of last year, of ChatGPT, one of the most advanced artificial intelligence (AI) chatbots, capable of writing everything from academic essays to a symphony, sparked a new debate about the scope of these conversational programs that seek to imitate human capabilities: the bias of their data, the concentration of the technology in a few companies, and the impact on education and work.

The company OpenAI created a website in November that allowed users to experiment with ChatGPT, and within a few days millions of people had interacted with this artificial intelligence, which works by answering questions and uses more than 175 billion parameters.

Thus, users began sharing on social networks the challenges they gave the chatbot: solving a programming problem, writing an academic essay comparing two theories, writing a game script, or composing a musical score.

What surprised users most about this AI chat was its ability to give correct and complete answers, with a large vocabulary, a wealth of information, and words taken in context.

However, specialists in the field said that the veracity and bias of the data it provides must be analyzed, and urged people “not to be blinded” by this technology.

“It drew a lot of attention for its performance; it solves a lot of tasks. It is trained to converse with human beings and it is learning. It is aware of the context and generates ‘understanding’, but this is a metaphor when referring to a computer, since that is a capacity only the human brain has,” Fernando Schapachnik, executive director of the Sadosky Foundation, which operates under the Ministry of Science, Technology and Innovation, told Télam.

“This AI differs from previous models because nobody told it anything in advance, nobody wrote ad hoc rules. Here it was given a series of unstructured data and the system did what we call ‘learning’: it inferred what a contract or a play is and automatically built the rules” – Fernando Schapachnik

For Schapachnik, “this AI differs from previous models because nobody told it anything in advance, nobody wrote ad hoc rules. Here it was given a series of unstructured data and the system did what we call ‘learning’: it inferred what a contract or a play is and automatically built the rules”.

“But not all the details of how it is being implemented are known yet,” added the specialist, who outlined the open questions about how this AI operates: “We do not know whether the data it provides is protected by licenses, or what biases it has with respect to content moderation,” he said.

In addition, the fact that it is available on a website open to the whole world “is a temporary matter; a campaign to spread it and improve the model, but I don’t think it will continue as a free version in the future,” he said.

What surprised users most about this AI chat was its ability to give correct and complete answers / Photo: 123RF.

This week, Australian musician Nick Cave criticized the AI program after the release of a system-generated song that “mimicked” his songwriting style.

“The songs arise from suffering, which means that they are based on the complex internal human struggle of creation and, as far as I know, the algorithms have no feelings,” the composer said on his website, calling it “a grotesque mockery”.

The application of technology and its consequences

For her part, Laura Alonso Alemany, professor of Computer Science at the National University of Córdoba (UNC) and member of the Vía Libre Foundation, urged people “not to be blinded” by the capabilities of this technology.

“We had already been maturing technologies like this one, and this one is a little better than the others. It is built from data that we put on the web, which we continue to feed with our questions. We are teaching it. It can do fabulous things, but it can just as easily err,” she said in dialogue with Télam.

“The danger is that people think that what these technologies say is the only truth” – Laura Alonso Alemany

For Alonso Alemany, “the danger is that people think that what these technologies say is the only truth,” and she warned that AI chats “can normalize many things that, as a society, we are leaving behind, such as racism or xenophobia, because they are working with historical data”.

The proliferation of this type of program brings a new debate about the consequences of the massive application of these technologies on jobs and education.

“At the work level, these models will help get rid of the most mechanical, the most repetitive tasks” / Photo: Leo Vaca.

Schapachnik considered it a “threat” to the world of work, since “there are more jobs that can be automated than we thought; the most repetitive tasks are the ones in danger.”

“At the work level, these models are going to help get rid of the most mechanical, the most repetitive tasks, but the problem is that there are going to be people who lose their jobs. For every more creative and interesting job, another 10 will be lost,” Alonso Alemany said.

The UNC researcher asserted that, with regard to education, “there are going to be evaluation problems, since institutions today do not have the time or training to adapt to these new technologies, which is a risk.”

“At the work level, these models are going to help get rid of the most mechanical, the most repetitive tasks, but the problem is that there are going to be people who lose their jobs. For every more creative and interesting job, another 10 will be lost” – Laura Alonso Alemany

Lastly, the specialists reflected on the concentration of the AI market, since Microsoft invested more than one billion dollars in OpenAI for the development of ChatGPT and other products.

“It is a development made by a private company seeking a profit; they let you use it for a little while. They want us to become dependent on a technology that they are going to charge us for,” said Alonso Alemany, who also criticized the fact that only a few companies control this technology.

“There are few companies that can develop these AI models, which are based on large amounts of data and computation. The more dependent we are, the more concentrated it becomes,” she asserted.

Along the same lines, Schapachnik pointed out that AI relies on “huge computing centers that consume a lot of energy, and that costs money; one should not think of altruism in this matter, because there is a very strong investment.”
