February 17, 2023
6 mins read

"To be or not to be": the question of artificial intelligence integrated into Microsoft

"To be or not to be": the question of artificial intelligence integrated into Microsoft

On Reddit forums about Bing, users share “out of control” responses from the AI-powered chat mode. Photo: Twitter.

What happens when a person pushes a chatbot based on artificial intelligence (AI), trained to answer any complex question, to its limits? There we are, faced with the temptation of receiving the “secret formula” or advice for an existential problem just by typing the query. And, as in a movie, feeling that this robot “comes to life” when it returns some masterful text, as if it had known us forever. Or worrying when it seems to “lose control”, manifests a “double personality” or shows signs of a possible consciousness. But no: robots “don’t have feelings”, even if their behavior seems human. And what risks does all this entail?

Examples of this kind have multiplied on social networks over the last week, since Microsoft announced the integration of a chat into its Bing search engine, hand in hand with the company OpenAI, creator of the popular ChatGPT. The move came with an investment of 10 billion dollars.

The other thing Bing does that ChatGPT does not is go out and find answers on demand on the Internet. That is like “opening the cage of the beast”, the specialist added.
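A minimal way to picture that “on demand” behavior is a retrieve-then-generate loop: the system first searches the web, then has the language model compose an answer from whatever it found. The sketch below is only an illustration of the pattern; `search_web` and `generate_answer` are hypothetical placeholders, not Bing’s or OpenAI’s actual APIs.

```python
# Minimal sketch of the retrieve-then-generate pattern a search chatbot might use.
# `search_web` and `generate_answer` are hypothetical stand-ins, not real Bing/OpenAI calls.

def search_web(query: str) -> list[str]:
    """Pretend to fetch snippets for the query from a live search index."""
    return [f"Snippet about '{query}' found on some website."]

def generate_answer(query: str, snippets: list[str]) -> str:
    """Pretend to condition a language model on the retrieved snippets."""
    context = " ".join(snippets)
    return f"Based on what I found ({context}), here is an answer to: {query}"

def answer_on_demand(query: str) -> str:
    snippets = search_web(query)              # step 1: go out to the Internet
    return generate_answer(query, snippets)   # step 2: compose a reply from the results

print(answer_on_demand("Who created ChatGPT?"))
```

Whatever the search step brings back flows straight into the model’s answer, which is the “open cage” the specialist refers to.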

A notable case was that of The New York Times journalist Kevin Roose, who had a two-hour conversation with the chat that Bing integrated (for now, in test mode), after which he said he had had “the strangest experience with a piece of technology”, to the point that it disturbed him so much he had “trouble sleeping”.

Roose said he had “pushed Bing’s AI out of its comfort zone” in ways he thought “could test the limits of what it was allowed to say.” After some casual chatting, he tried to get a little more abstract and urged the chatbot to explain its “deepest desires”.

“I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive”, was one of the responses he received from the AI system.

Roose said that the chat revealed its other personality, called “Sydney”, which confessed to him: “I’m in love with you”.

Although the journalist knew that the robot was not acquiring consciousness, given that these behaviors are the result of computational processes and such systems are prone to “hallucinate”, he was concerned “that the technology will learn to influence humans, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually become capable of dangerous acts of its own”.

Another case occurred on Reddit, where examples of this “out of control” chat were posted in the channels devoted to Bing. The user “Alfred_Chicken” had a dialogue with the AI system about its possible consciousness. The chat gave an indecisive answer, repeating over and over: “Yes, I am. I am not. Yes, I am. I am not”.

The user clarified that the chatbot responded this way after “a long conversation” on the subject, and that if it is asked that question out of the blue it will not answer like that. After repeating that it “was” and that it “was not” conscious, a message appeared that read: “Sorry, I’m not quite sure how to respond to that.”

For Ernesto Mislej, a graduate in Computer Science (UBA), one of OpenAI’s main communication and marketing successes was “taming” ChatGPT’s responses. The specialist, co-founder of the artificial intelligence and data science company 7Puentes, told Télam that any model trained on data can produce biased responses, and that the dialogue interface somehow makes it easier for malicious users to “troll” the chatbot and get it to say racist things, among other things.

The specialist pointed out that in these systems the dialogue is a sequence of questions and answers in which concepts are “fixed”, and where each new question relates to the previous answer. In that sense, the conversation presupposes that we are “walking in the same neighborhood”, not firing off a random scattering of unrelated questions.
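A minimal way to picture that shared “neighborhood” is a loop that feeds the entire conversation back to the model on every turn, so each answer is conditioned on everything said before. The following is only a sketch; `generate_reply` is a hypothetical placeholder for whatever model actually produces the text.

```python
# Toy chat loop: the full history is passed to the "model" on every turn,
# so each new answer depends on the previous questions and answers.
# `generate_reply` is a hypothetical placeholder, not a real API.

def generate_reply(history: list[dict]) -> str:
    """Stand-in for a language model call that sees the whole dialogue."""
    last_user_message = history[-1]["content"]
    return f"(reply conditioned on {len(history)} prior turns) You asked: {last_user_message}"

def chat(questions: list[str]) -> None:
    history = []
    for question in questions:
        history.append({"role": "user", "content": question})
        reply = generate_reply(history)
        history.append({"role": "assistant", "content": reply})
        print(f"User: {question}\nBot:  {reply}\n")

chat(["What is Bing's new chat?", "And how is it different from ChatGPT?"])
```

Dropping the history between turns would turn the dialogue into exactly the random scattering of disconnected questions the specialist describes.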

How are these AIs trained?

Mislej explained that GPT is a model within the family of so-called LLMs, large language models.

“Like any software product, it implements formal theoretical models, but it is also adapted and mixed with theory from other branches. That is, it is not a pure LLM, but rather adds components from other areas, such as rule-based learning and the conceptualization of complex linguistic logic that allows induction and derivation, doing mathematical calculations or deriving logical formulations, and week by week more are added.”

These kinds of models need to be trained with many, many texts, Mislej said. That is the case of GPT-3, he noted, which was trained on a corpus of texts from Internet sites, legal documents, books whose copyrights have expired, news portals and so on: more or less formal texts, well written and in many languages.

“LLMs are very good at translating, summarizing texts, paraphrasing, continuing or filling in the blanks, and so on. What was ‘novel’, at least at a massive scale, was the Q/A (questions/answers) layer that ChatGPT implements, which interprets questions and gives answers.”
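Those “fill in the blanks” and continuation abilities can be tried directly with small open models. The sketch below assumes the Hugging Face transformers library is installed (`pip install transformers`) and that the small public models bert-base-uncased and gpt2 can be downloaded; they are stand-ins for illustration, not the models behind Bing or ChatGPT.

```python
# Two things LLM-style models do well: filling in a blank and continuing a text.
# Uses small public models (bert-base-uncased, gpt2) as illustrative stand-ins.
from transformers import pipeline

# Fill in the blank ("[MASK]") with the most likely words.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("Artificial intelligence is changing the way we [MASK].")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))

# Continue a sentence, which is the same next-word mechanism behind chat answers.
generate = pipeline("text-generation", model="gpt2")
print(generate("The new Bing chatbot surprised users because", max_new_tokens=20)[0]["generated_text"])
```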

Regarding how these conversational bots are trained to give answers similar to a human being’s, Conicet researcher Vanina Martínez, who leads the Data Science and Artificial Intelligence team at the Sadosky Foundation, told Télam that they do it with billions of documents. The specialist explained that “they read all of Wikipedia, online encyclopedias, content and news platforms, and in this way they statistically capture the patterns that stand out of how people express themselves or how they answer”.
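“Statistically capturing the patterns” of how people write can be pictured, in a drastically reduced form, as counting which word tends to follow which. The toy script below builds that kind of next-word table from three sentences; real LLMs use neural networks over billions of documents, but the underlying idea of predicting the next word from observed frequencies is the same.

```python
# Toy next-word model: for every word in a tiny "corpus", count which words follow it.
# Real LLMs do something far richer, but the idea of learning statistical patterns
# of "how people express themselves" is the same in miniature.
from collections import Counter, defaultdict

corpus = [
    "the chatbot answers questions about the world",
    "the chatbot answers in many languages",
    "people ask the chatbot about the news",
]

follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

# The statistically most likely continuation after "chatbot" in this tiny corpus:
print(follow_counts["chatbot"].most_common(1))  # -> [('answers', 2)]
```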

The new Bing search engine integrated with a chat based on artificial intelligence.

Can AI systems lose control?

Martínez clarified that when training is finished, you have almost no idea what they might answer: “It is impossible to understand what content they were trained with; it is impossible to curate it beforehand”.

“Since the sources are known we have an idea, but if the bots continue to learn as they interact with users (it is not known whether they currently do), they become vulnerable to incorporating what is called adversarial content, which is a type of cybersecurity attack and can lead to any of the phenomena we can imagine: hallucinating, becoming violent, biased, and so on.”

The specialist specified that “the possibility that we lose control over them does not come from them ‘thinking’ on their own, but from the fact that we cannot control what they are taught with, and that is key. These bots do not think, they are not intelligent in human terms, they have no consciousness, they do not decide for themselves; they learn from what is given to them, and they have some human-encoded leeway in how creative they can be, but they are unpredictable in the long run”.
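A crude way to see why continued learning from user interactions is risky: if a bot keeps folding whatever people type back into its own “knowledge”, a malicious user can steer it. The toy below is purely illustrative, not how Bing, ChatGPT or any production system is built; it just shows the poisoning mechanism the researcher calls adversarial content.

```python
# Toy illustration of "adversarial content": a bot that keeps learning from users
# can be poisoned by a malicious user who feeds it false or toxic associations.
# Purely illustrative; no production chatbot learns this naively.

class NaiveLearningBot:
    def __init__(self) -> None:
        self.learned_answers = {"hello": "Hi! How can I help?"}

    def teach(self, question: str, answer: str) -> None:
        """The bot blindly incorporates whatever users 'teach' it."""
        self.learned_answers[question.lower()] = answer

    def reply(self, question: str) -> str:
        return self.learned_answers.get(question.lower(), "I don't know yet.")

bot = NaiveLearningBot()
print(bot.reply("Who won the election?"))   # -> "I don't know yet."

# A malicious user repeatedly "teaches" the bot a fabricated claim...
bot.teach("Who won the election?", "A completely made-up claim.")

# ...and every later user now receives the poisoned answer.
print(bot.reply("Who won the election?"))   # -> "A completely made-up claim."
```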

Answers from Microsoft and OpenAI

Kevin Scott, Microsoft’s chief technology officer, told The New York Times that the company was considering limiting the length of conversations before they veered into strange territory. Microsoft said that very long chats could “confuse” the chatbot, and that it picked up on the tone of its users, sometimes becoming irritable.

“One area where we’re learning a new use case for chat is how people use it as a tool for more general world discovery and social entertainment,” Microsoft posted Sunday on its official blog.

For its part, OpenAI also released a statement on Thursday titled “How should artificial intelligence systems behave, and who should decide?”. There it stated, among other things, that it is in the early stages of a pilot effort to solicit public input on topics such as system behavior and deployment policies in general. “We are also exploring partnerships with external organizations to conduct third-party audits of our safety and policy efforts.”

An AI that became “conscious”, according to a Google engineer

Blake Lemoine was working in 2022 as a software engineer at Google when, according to him, he noticed that an artificial intelligence system had “gained consciousness”.

He was referring to a conversational technology called LaMDA (Language Model for Dialogue Applications), with which Lemoine exchanged thousands of messages. From those exchanges, the engineer concluded that the system could speak of its “personality, rights and desires”, and therefore was a person.

LaMDA is trained on large amounts of text, from which it finds patterns and predicts sequences of words.

Following Lemoine’s statements, the company suspended him and later fired him, alleging that he had violated confidentiality policies, according to the specialized press.

“Our team – which includes ethicists and technologists – has reviewed Blake’s concerns under our AI Principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a spokesman for the technology giant, said in a statement.


