Two cases open the debate: a Google engineer claimed that an artificial intelligence system had become conscious, "thinking and feeling" like a person, while a robotic arm playing chess against a seven-year-old boy in Russia broke his finger during the game. But no, these are not robots "coming to life". To answer the question of whether artificial intelligence (AI) can develop its own consciousness, it is first necessary to define what consciousness is and, even more, what intelligence is.
Specialists spoke with Télam to bring clarity to an issue that spans the technological, the technical, the philosophical and the ethical, clearing away the influence of science fiction movies.
The debate is still taking shape, but one point is very clear: "Any machine (robot or not), from a blender onwards, must have defined safety measures," Marcela Riccillo, professor in the Computer Engineering program and the Master's in Data Science at the Buenos Aires Institute of Technology (ITBA), told Télam.
The examples help make sense of a complex scenario and strip certain technologies of the so-called "hype", an English expression referring to exaggerated expectations around something. It sounds tempting to speak of the "awakening of the machines", but reality involves issues that are rather less "blockbuster".
Two cases to open the debate
Blake Lemoine was working as a software engineer at Google months ago when he claimed that an artificial intelligence system had become conscious. He was referring to a conversational technology called LaMDA (Language Model for Dialogue Applications), with which Lemoine had exchanged thousands of messages. From those exchanges, the engineer concluded that the system could speak of its "personality, rights and desires", and was therefore a person.
Meet LaMDA. Advances in natural language understanding are making it possible for us to build even more conversational AI. #GoogleIO pic.twitter.com/HJ8FgHtbFp
—Google (@Google) May 18, 2021
LaMDA is trained on large amounts of text, in which it finds patterns and from which it predicts sequences of words.
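As a rough illustration of this kind of pattern-based word prediction (a toy sketch, not Google's actual LaMDA, which is a vastly larger neural network), a minimal bigram model can count which word tends to follow which in a text and predict the most frequent successor; the corpus and function names here are illustrative:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each following word appears."""
    words = corpus.lower().split()
    follow = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow[current][nxt] += 1
    return follow

def predict_next(follow, word):
    """Return the word most frequently observed after `word`, or None."""
    counts = follow.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the robot plays chess and the robot moves a piece"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "robot", its most frequent successor
```

A real language model replaces these raw counts with learned statistical representations over enormous corpora, but the principle Lemoine's critics point to is the same: the system continues text according to patterns, with no understanding implied.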
After Lemoine's statements, the company suspended him and later fired him, alleging that he had violated its confidentiality policies, according to the specialized press. "Our team, which includes ethicists and technologists, has reviewed Blake's concerns in line with our AI Principles and informed him that the evidence does not support his claims," Brian Gabriel, a spokesman for the technology giant, said in a statement.
On the other hand, a chess-playing robot broke a seven-year-old boy's finger during a game at the Moscow Chess Open last week.
The president of the Moscow Chess Federation, Sergey Lazarev, said the incident occurred after the boy "rushed" before the robot had made its move. The boy then continued competing in the tournament with his fractured finger in a cast. Video of the incident went viral on social networks.
During a chess competition a robot breaks a child’s finger https://t.co/PSLZnsAMAv pic.twitter.com/HfNpkmivIO
– Publimetro Mexico (@PublimetroMX) July 26, 2022
Robots with feelings? Artificial Intelligence with consciousness?
The problem of consciousness "is interesting from the philosophical point of view and, more than once, fueled by the science fiction we grew up with. Just as we do not agree on what intelligence is, we lack precision about what consciousness is," Vanina Martinez, director of the Data Science and Artificial Intelligence team at the Sadosky Foundation and a Conicet researcher, explained to Télam.
"We have criteria or parameters for guessing what characteristics a conscious being must have, based on models of things we believe are conscious. But we don't know how consciousness arises, even in humans. The question we should be asking is whether, regardless of whether they are conscious, they are safe," she held.
The researcher added: "It seems to me that the actors involved in creating these technologies often get lost in blockbuster discussions that ignore what is important and urgent. Moreover, attributing consciousness to a system in some way shifts much of the responsibility for its actions away from those who built it and deposits it in the system itself."
We will participate in the space for debate on the opportunities and impacts of #Artificial intelligence in the country, with Vanina Martínez (@VaniMartinez82), our director of #DataScience #AI. See you tomorrow at 9:30 pm! https://t.co/yGfvEpIDoi
— Sadosky Foundation (@funsadosky) March 15, 2022
Rodrigo Ramele, researcher and teacher in the Computer Engineering program at ITBA, told Télam: "Saying that an AI manifests consciousness is today still part of the 'hype' associated with Artificial Intelligence."
"As with intelligence, the idea of 'consciousness' is complex to define. It is presented as a 'hallmark', a level of intelligence so high that a 'being' can be identified as unique and different from others, can be located at a moment and in a place, and perceives its surroundings and the state of its situation. We still do not know enough, in animal ethology, in neuroscience or in artificial intelligence, to clearly define and answer this question," the specialist expanded.
Ramele noted that there is a very successful branch of Deep Learning (itself a branch of AI), natural language processing, which has recently enabled the development of AIs that manipulate the symbols of natural languages extraordinarily well and can sustain complex dialogues.
"The training of these agents also includes dialogues in which consciousness is discussed (dialogues between humans). As a result, such an AI can recombine these symbols and at some point give the impression, especially at the level of input and output, that it 'knows what it is talking about'."
A good analogy for this phenomenon, he mentioned, is Vaucanson's Duck: a mechanical duck built by the 18th-century French inventor Jacques de Vaucanson that gave the impression of digesting food just as living beings do, because at the level of input and output, of superficially visible behavior, it operated the way a living being does (it eats food, digests it and excretes it). "However, even given how much we still do not know, what we define today as consciousness involves many more parts that are far more complex to verify," he concluded.
The robot that plays chess
The case of the boy and the robot in the chess game "involves a manipulator with active actuators (i.e., motors). Manipulators are the typical industrial robots, with degrees of freedom and versatility of movement, that have been used for 40 years on the assembly lines of the world's automated factories," explained Ramele.
He said that "it is a well-known problem, since any type of error, whether mechanical, electronic, in programming or even perceptual (a wrong interpretation of information from the sensors), can lead to a movement that harms a person. That is why factories usually have exclusive spaces where manipulators operate and where people cannot enter directly," he added.
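The exclusion-zone principle Ramele describes can be sketched in software as a simple interlock; this is a hypothetical illustration (the function names and the 1.5-meter threshold are invented for the example, not taken from any real robot controller or safety standard) of a controller that refuses to move while a person is detected inside the workspace:

```python
MIN_SAFE_DISTANCE_M = 1.5  # illustrative threshold, not a real standard value

def motion_allowed(person_distances_m):
    """Allow motion only if every detected person is outside the safety zone.

    `person_distances_m` lists distances (in meters) reported by presence
    sensors; an empty list means no person was detected.
    """
    return all(d >= MIN_SAFE_DISTANCE_M for d in person_distances_m)

def execute_move(move, person_distances_m):
    """Perform `move` only when the workspace is clear; otherwise halt."""
    if not motion_allowed(person_distances_m):
        return "halted: person inside safety zone"
    return f"executing {move}"

print(execute_move("e2e4", [2.0, 3.1]))  # prints: executing e2e4
print(execute_move("e2e4", [0.4]))       # prints: halted: person inside safety zone
```

The point of the sketch is the design choice, not the code: the check runs before every motion, so a perceptual error in one sensor reading halts the arm rather than producing a harmful movement.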
Was what happened with the robotic arm an accident? "I wouldn't define it that way," Martinez emphasized, because "if we say it is an accident, we assume it is something that can simply happen and, in some way, transfer responsibility to the user, and therein lies the problem."
"Perhaps it would be more appropriate to talk about negligence, although there are no specific regulatory frameworks stipulating the limits or guarantees that must be required of this type of system. We don't have clear guidelines on how to build them or how much to allow them to do," the researcher added, reflecting: "As more of these episodes appear, a question begins to arise: if we don't know whether we can control them, should we build them?"
The state of Artificial Intelligence
"Artificial General Intelligence (AGI) does not exist. Therefore, machines have no intention, no desire, no consciousness and no responsibility," Riccillo summarized.
"There has been interesting progress in the development of AI ethics. We have reached a point where many institutions, countries and organizations worldwide have more or less agreed on the ethical principles these systems should promote and respect. One of the problems is that even those principles are very abstract," Martinez commented.
From a more technical point of view, she explained, "in order to guarantee some degree of these principles, we want systems to be understandable and predictable. We want not only the system but all the actors involved in its creation, development and deployment to be accountable for the actions and decisions they make."
"We want them, when necessary, to be explainable, that is, able to explain to whoever is using them why they are doing what they are doing. Of course there are systems where this is not necessary, but for those that interact directly with humans, such as a robotic or virtual personal assistant, it is critical," she remarked.
The researcher noted that for some years the European Union "has been trying to outline a regulatory framework for AI focused on the level of risk these systems pose: a kind of traffic light indicating what can never be done, what can be done but only with many controls, audits and various impact assessments, and which systems, in principle, pose no risk."
However, she added, "there is a lot of disagreement over the approach and content of the project, especially over how to guarantee a balance between technological progress and social welfare, and how to ensure that this type of law does not harm, by imposing an impossible cost, companies providing these services that are not large multinationals, which would concentrate the technological oligopoly even further."