January 14, 2023

Who wants to regulate artificial intelligence?

In the winter of 2016, the home automation company Google Nest pushed a software update to its thermostats that damaged their batteries. A large number of users were left without a working device, though many were able to replace the batteries, buy a new thermostat, or wait for Google to fix the problem. According to the company, the failure was caused by the artificial intelligence (AI) system that managed these updates.

What would have happened if the majority of the population had used one of those thermostats and the failure had left half the country exposed to the cold for days? A technical problem would have become a social emergency requiring state intervention. All because of a faulty artificial intelligence system.

No jurisdiction in the world has developed comprehensive, specific regulation for the problems generated by artificial intelligence. This does not mean there is a complete legislative vacuum: many of the harms that artificial intelligence can cause can be addressed through existing law.

For example:

  1. For accidents caused by autonomous cars, insurance will continue to be the first recipient of claims.

  2. Companies that use artificial intelligence systems in their hiring processes may be sued if they engage in discriminatory practices.

  3. Insurers that engage in anti-consumer practices based on the analyses generated by their artificial intelligence models, used to set prices and decide whom to insure, will continue to answer for them as companies.

In general, regulations that already exist, such as contract law, transportation law, tort law, consumer law, and even human rights protections, can adequately cover many of the regulatory needs of artificial intelligence.

Generally, though, this does not seem like enough. There is some consensus that the use of these systems will generate problems that our legal systems cannot easily solve. From the diffusion of liability between developers and professional users to the scalability of harm, AI systems defy our legal logic.

For example, if an artificial intelligence system finds illegal information on the deep web and makes investment decisions based on it, should the bank that manages the pension funds, or the company that created the automated investment system, be held accountable for those illegal investment practices?

If a regional government decides to introduce a co-payment for medical prescriptions managed by an artificial intelligence system, and that system makes small errors (say, a few cents on each prescription) that affect almost the entire population, who is responsible for the lack of initial control? The administration? The contractor that installed the system?

Towards a European (and global) regulatory system

Since the presentation in April 2021 of the European Union's proposal for a regulation on artificial intelligence, the so-called AI Act, a slow legislative process has been under way that should lead to a regulatory system for the entire European Economic Area, and perhaps Switzerland, by 2025. The first steps are already being taken with state agencies, which will exercise part of the control over these systems.

But what about outside the European Union? Who else wants to regulate artificial intelligence?

On these issues we tend to look at the United States, China and Japan, and we often assume that legislation is a matter of degree: more or less environmental protection, more or less consumer protection. In the context of artificial intelligence, however, it is surprising how different legislators' visions are.

USA

In the United States, the fundamental legislation on AI is a statute of limited substantive content, more concerned with cybersecurity, which instead relies on indirect regulatory techniques such as the creation of standards. The underlying idea is that the standards developed to control the risks of artificial intelligence systems will be voluntarily adopted by companies and become de facto standards.

To retain some control over these standards, instead of leaving them to the discretion of the organizations that normally develop technical standards and are controlled by the companies themselves, the risk-control standards for AI systems are in this case being developed by a federal agency, the National Institute of Standards and Technology (NIST).

The United States is thus immersed in a standards-creation process open to industry, consumers, and users. This is now accompanied by a White House blueprint for an AI Bill of Rights, which is also voluntary. At the same time, many states are trying to develop legislation for specific contexts, such as the use of artificial intelligence in hiring processes.

China

China has developed a complex plan to lead not only the development of artificial intelligence but also its regulation.

To do this, they combine:

  1. Regulatory experimentation (certain provinces may develop their own rules, for example, to facilitate the development of autonomous driving).

  2. Development of standards (with a complex plan that covers more than thirty subsectors).

  3. Hard regulation (for example, of recommendation mechanisms on the Internet to avoid recommendations that could alter the social order).

For all these reasons, China is committed to regulatory control of artificial intelligence that does not impede its development.

Japan

In Japan, on the other hand, they do not seem particularly concerned about the need to regulate artificial intelligence.

Instead, they trust that their tradition of partnership between the state, companies, workers and users will prevent the worst problems that artificial intelligence can cause. For the moment, their policies focus on the development of Society 5.0.

Canada

Perhaps the most advanced country from a regulatory point of view is Canada. There, for the past two years, all artificial intelligence systems used in the public sector have had to undergo an impact assessment that anticipates their risks.

For the private sector, the Canadian legislature is now discussing a law similar to, although much simpler than, the European one. A similar process began last year in Brazil; although it seemed to have lost momentum, it may now be revived after the elections.

From Australia to India

Other countries, from Mexico to Australia, and including Singapore and India, are in a wait-and-see position.

These countries seem confident that their current rules can be adapted to prevent the worst damage that artificial intelligence can cause and allow themselves to wait and see what happens with other initiatives.

Two games with different visions

Within this legislative diversity, two games are being played.

The first is between those who argue that it is too early to regulate a disruptive, and not yet well understood, technology such as artificial intelligence, and those who prefer a clear regulatory framework that addresses the main problems while creating legal certainty for developers and users.

The second game, and perhaps the most interesting, is a competition to become the de facto global regulator of artificial intelligence.

The European Union's commitment is clear: to be the first to create rules that bind anyone who wants to sell their products in its territory. The success of the General Data Protection Regulation, today the global reference for technology companies, encourages the European institutions to follow this model.

Faced with this, China and the United States have chosen to avoid detailed regulation, hoping that their companies can develop without excessive restrictions and that their standards, even if voluntary, become the reference for other countries and companies.

In this, time plays against Europe. The United States will publish the first version of its standards in the coming months; the European Union will not have applicable legislation for another two years. Perhaps the excess of European ambition will have a cost, inside and outside the continent, if its rules have already been overtaken by other regulations by the time they come into force.

Jose-Miguel Bello y Villarino, Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society / Diplomat (on leave), University of Sydney

This article was originally published on The Conversation. Read the original.


