Laws for robots must come soon

Alpha 1S: This humanoid robot is an advanced toy whose movements can be programmed using a smartphone. (© Peter Nicholls / Reuters)

Sylvain Métille is a lawyer and course leader at UNIL in the Faculty of Law, Criminal Justice and Public Administration.
Virginie Jobé / Allez savoir!
Siri, Aido, Roxxxy: whether machines with augmented intelligence or mere automatons, robots range from lawn mowers to advisers and friends. While these gifted machines have not yet overtaken science fiction, they are coming close to it. Before they start laying down the law, isn’t it time to give them a legal framework? Sylvain Métille, a specialist in the law of data protection and new technologies, says it is.

Who is to blame if an algorithmic wealth management programme results in a client going bankrupt? And whose fault is it if a robotic lawn mower demolishes the neighbour’s field or a robot surgeon botches its operation? What will happen when the Google Car – a driverless vehicle still at an experimental stage – has to choose between saving its occupants or a pedestrian in the event of an accident? How should we view the arrival of drones? To date, robots have no legal status in Switzerland. Do today’s laws already need changing? Lawyer Sylvain Métille, who holds a PhD in Law and is a course leader at UNIL in the Faculty of Law, Criminal Justice and Public Administration, is convinced that the moment has come to think about it. He explains in the interview below.

What is the definition of a robot that merits legal status?

Sylvain Métille: That remains an open question. We have to make a genuine choice as a society. Either we think that, despite all the developments, we are still talking about a controllable automaton, that all robots are just glorified toasters that obey the rules, and that there is therefore no reason to develop new laws, with responsibility falling on the manufacturer, seller or even user. Or we consider that robots are machines with a capacity for augmented intelligence (a term that has succeeded ‘artificial intelligence’) and we must ask to what extent they can take decisions based on instructions they have not been given. In that context, it makes sense to create a robot personality or a particular system to deal with it.

What exactly do you mean by robot ‘personality’?

The simple version would be to have, like a legal person, a separate legal entity, which would include the limited responsibility of the robot as an object and/or computerised system. That is to say, the actions of the robot would be attributable to it and no longer to the manufacturer or owner. It would be a person in the legal sense, but a non-human one. And to counterbalance the fact that the owner or manufacturer is cleared of their responsibilities, there could be monetary compensation in the form of insurance, similar to the way the hazards posed by a car are covered by the requirement to take out insurance. Or one could draw a parallel with a legal entity, where the retailer’s responsibility no longer lies with the individual person but with the share capital of the company, which assumes responsibility in the event of non-payment. With machine learning, the robot does learn, but only on the basis of what it has been trained to do. It is not completely autonomous. There could therefore be a responsibility on the part of the person training it.

Which machines are comparable to a legal person?

Currently there are none, in my view. The robotic mower, the minesweeper or the humanoid Nao cannot be likened to a legal person. Even if they are intelligent and show emotions, legally they remain objects. One could create a particular status for them, drawn from that of a legal person, but they would never be one. At this point we are managing with our current laws, as robots are limited in their capacities. Medical robots, for example, remain tools. The one that assists the surgeon’s hand during an operation is no more than an advanced scalpel. No robot receives patients in its surgery and assesses on its own whether an operation is appropriate. However, when such a robot does exist, the question of its responsibility will have to be posed.

The Google Car drives itself. Where does that place it in law?

Even if one accepts that, as an automaton, all it does is obey pre-established rules, traffic laws need to be amended, because a vehicle must have a driver. Moreover, driving on the highway requires a licence, which the Google Car does not have. Currently a human driver is required to be behind the wheel, so as to be able to retake control at any time. It is simply a form of assisted driving, like cruise control, only more advanced.

Tests have been conducted in Zurich on a similar vehicle.

Google Cars: These electric driverless cars have already covered more than 3 million kilometres. (© Elijah Nouvelage / Reuters)

Yes, researchers have received temporary, limited authorisation to carry out tests in which the car is supposedly autonomous. But there is still a human being behind it. Some US states have gone further and have allowed a car to travel alone, with no driver inside. This will entail amending the law, if the test phase proves conclusive. There have certainly been accidents. But when responsibility lies solely with the vehicle, that poses fewer problems, because to put it on the road the owner must register it and take out insurance. However, the current legal framework makes no provision for possessing a robot that goes off and does my supermarket shopping. The transition from a vehicle with a driver to an autonomous car will be smoother than the arrival of a robot in a world where it is not foreseen. Nevertheless, as human responsibility is going to be removed, we must legislate for a particular system of vehicle responsibility.

And what if, in the event of an accident, the vehicle has to choose between killing its passengers or a pedestrian?

It will be easier for a robot than for a human being, as it rapidly calculates the probabilities. Another example: two people are walking along the road and it cannot brake in time. Who does it run down – the child or the elderly person? Probably the elderly person, who has a shorter life expectancy than the child, even though the consequences for the person run down are the same. If it drives more slowly and takes account of the resistance of the person who is struck, that may also carry weight in the calculation. Is it less serious to end up with an injured person than with a dead one? Apparently this type of case can be incorporated into a robot’s memory. The difficulty remains in knowing who takes ethical and moral responsibility for such actions. Is it for the manufacturer to do so, or must this be written into the rules of the road? For example, driving at more than 50 km/h in villages is illegal. In our example, one could also conceive of an obligation to run down the older person rather than the child.
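
To see what such a probability-based calculation might look like, here is a deliberately simplified, purely illustrative sketch. The harm weights, probabilities and life-expectancy figures are invented placeholders, and the code makes no claim about how the Google Car or any real vehicle actually decides.

    # Purely illustrative: the kind of expected-harm comparison described above.
    # All figures are invented placeholders; no real vehicle is known to use them.

    def expected_harm(prob_fatal: float, prob_injury: float,
                      years_of_life_at_stake: float) -> float:
        """Weight a fatal outcome by remaining life expectancy and an injury by a flat cost."""
        injury_cost = 5.0  # arbitrary placeholder weight for a non-fatal injury
        return prob_fatal * years_of_life_at_stake + prob_injury * injury_cost

    # Two hypothetical options at a given (reduced) speed:
    harm_child = expected_harm(prob_fatal=0.3, prob_injury=0.6, years_of_life_at_stake=70)
    harm_elderly = expected_harm(prob_fatal=0.5, prob_injury=0.4, years_of_life_at_stake=10)

    # In this toy model, the option with the lower expected harm would be 'chosen'.
    print("child:", harm_child, "elderly:", harm_elderly)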

The Google Car designers have given it an aggressive personality, unlike Philips whose robot vacuum cleaner is intended to be calm and polite. Is this just a gimmick?

No, I don’t see this as a character trait. The vehicles are simply given rules of human behaviour to adopt. The testers noticed that if the Google Car drove too well, from a purely scientific point of view, it was not well received on the road. It needed to behave aggressively to resemble other drivers. If it gave way when it had priority, with the aim of improving traffic flow for example, this was not understood by other road users. Thus the vehicle (like a human being) chooses what is most advantageous for it and not for the traffic as a whole.

This is a rule which does not follow Asimov’s laws, since the robot favours itself to the detriment of the human being.

Yes, but if it chauffeurs its human being and puts him before others, then Asimov remains valid. This poses a whole series of previously unimaginable questions, such as: who do you save first? Is it ethical, politically correct, to have a legislative framework for a traffic situation that sets out who should be run over first? But it is also totally immoral and unacceptable to refuse to ask that question when we know that it will arise.

And how can we legislate for drones?

In Switzerland drones are at present considered more like toasters than robots because they do not fly unaided. Up to 30 kilos, a drone is not subject to any authorisation, but must remain within sight of its owner. It does not have the right to fly within 100 m of a crowd or 5 km of an airport. Public liability insurance is required if the drone weighs more than 500 g. Above 30 kilos, the rules that apply are the same as for a plane. So a framework does exist. And from this year (2017), the Swiss Federal Office of Civil Aviation (FOCA) is offering manufacturers a certification procedure. The drone is viewed as a flying object that carries a risk of disrupting traffic, falling, causing damage and even killing someone. These risks exist independently of knowing whether the drone is autonomous or has a pilot.
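
Read as a rule set, the thresholds quoted above lend themselves to a simple programmatic check. The sketch below is purely illustrative: it encodes only the figures mentioned in the interview (30 kg, sight of the operator, 100 m from crowds, 5 km from airports, insurance above 500 g), and the function and field names are hypothetical rather than anything published by FOCA.

    # Illustrative sketch of the drone thresholds quoted in the interview.
    # Names and structure are hypothetical, not an official FOCA rule set.
    from dataclasses import dataclass

    @dataclass
    class DroneFlight:
        weight_kg: float
        within_sight: bool
        distance_to_crowd_m: float
        distance_to_airport_km: float
        has_liability_insurance: bool

    def check_flight(flight: DroneFlight) -> list[str]:
        """Return the quoted rules that this flight would break."""
        issues = []
        if flight.weight_kg > 30:
            # Above 30 kg, the same rules as for a plane apply.
            return ["over 30 kg: aircraft rules and authorisation apply"]
        if not flight.within_sight:
            issues.append("drone must remain within sight of its operator")
        if flight.distance_to_crowd_m < 100:
            issues.append("must not fly within 100 m of a crowd")
        if flight.distance_to_airport_km < 5:
            issues.append("must not fly within 5 km of an airport")
        if flight.weight_kg > 0.5 and not flight.has_liability_insurance:
            issues.append("public liability insurance required above 500 g")
        return issues

    # Example: a 1.2 kg drone flown in sight, 300 m from a crowd, 8 km from an airport
    print(check_flight(DroneFlight(1.2, True, 300, 8, True)))  # -> []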

Military robots also pose a problem…

Most of them are still tools. They are made to be weapons controlled by humans – like drones. In the beginning, drones were mostly military. In the United States, when civilian drones appeared, ‘unmanned craft’ was the preferred term, to avoid parallels with the army. And for good reason: US pilots based in Nevada could kill thousands of kilometres away and then leave their command post to return to civilian life. This was seen as profoundly shocking. Moreover, there is the risk of a counterattack targeting not the drone but the command post on the ground. Commercial concerns meant that the name was changed. A lethal autonomous robot poses the problem of the responsibility of the person who programmed it, who must ensure it commits no errors. In the context of war, it is nonetheless more difficult to exercise one’s rights as a victim. This question is viewed above all from the point of view of human rights, operational transparency and the normalisation of acts of war.

Some soldiers become attached to their robotic minesweeper to the extent of giving it a name and burying it. What should we make of this?

Ultimately, it could become a public health problem. Is attachment to a machine a risk that needs assessing, and do people, despite themselves, need to be safeguarded against it, as with epidemics, radiation or contact with harmful substances? If the robot is moreover ‘disguised’ as a human, this may be seen as a form of deception. You would not accept a police surveillance camera under your window; in contrast, you do not distrust your robot companion. Why not envisage a law that stipulates that robots must not resemble human beings too closely, for reasons of mental health, just as cigarettes, alcohol and drugs are restricted? To complete Asimov’s laws, we could add, from a data protection point of view, obligatory algorithmic transparency on the part of the manufacturer, so as to present the robot as it is: what it is made from, its capacities, whether it involves automatic processing or not and where recorded data goes. It is neither child nor animal and cannot have the same rights. From a general point of view, it seems important to know whether the person you meet in the street, or the person who phones you, is a robot or not. There is a public interest in defining a legal framework that considers it a danger, subject to authorisation.

Why has this not yet been done?

Because the issue is not yet topical enough and does not encourage innovation. Looked at from the other side, on the basis of a precautionary principle, we do not really know what is going to happen with robots and so we want to ban anything that seems out of our control, to have an on/off ‘panic button’ to neutralise it.

David Levy, an expert in augmented intelligence, reckons that by 2050 we will be marrying robots. Do you believe that?

I hope not! Legally, it is like buying a house: giving a long-term commitment in the eyes of society to manage an object. The two-way aspect of rights and obligations does not exist. The house cannot expect anything of me. The concern lies in making the human being responsible for obligations from which, in reality, no one benefits if the machine is still considered as such. We would then fall into a situation of alienation which the law does not permit. In contrast, if you consider that the ‘emo robot’ (which ‘has’ emotions) has a real personality, it has to have the same rights we do. It’s a worrying scenario. Just as we have laws on genetic research that restrict actions to avoid having babies for research or to provide replacement organs, etc., perhaps we should also legislate now on what is and is not acceptable in the development of robots.

The three laws of Asimov

Most draft laws on robots are based on a science fiction short story by American author Isaac Asimov. Entitled ‘Runaround’, it appeared in the magazine Astounding Science Fiction in 1942. In it, the author sets out his ‘Three Laws of Robotics’:

  • Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Law 2: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Over his writing career, Asimov refined the laws slightly, but in the main they remain the pillar of current draft legislation and are commonly referred to as ‘Asimov’s Laws’.
