Over 150 experts in AI, robotics, commerce, law and ethics from 14 countries have signed an open letter denouncing the European Parliament’s proposal to grant personhood status to intelligent machines. The EU says the measure will make it easier to figure out who’s liable when robots screw up or go rogue, but critics say it’s too early to consider robots as persons – and that the law will let manufacturers off the liability hook.
Under a proposed EU law, humanoid robot Paolo Pepper, created by Italy’s Luca Vescovi, could eventually be considered an “electronic person”. Image: AP
This all started last year when the European Parliament proposed the creation of a specific legal status for robots:
so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.
The parliament said the law would apply to “smart robots”, which it defined as robots having the capacity to learn through experience and interaction, the ability to acquire autonomy through their sensors, and the capacity to adapt their behaviour and actions to the environment, among other criteria.
With this proposal, the EU is responding to rapid advances in robotics and AI, and the risks these machines pose to humans and human property. The fear isn’t a robot uprising (at least not yet), but more mundane hazards, such as autonomous vehicles and drones accidentally smashing into people, a factory robot crushing an absent-minded worker, or a Roomba giving your cat an unexpected shave.
As we venture into this brave new world of ubiquitous robotics and AI, it’s an open question as to who will be liable for these sorts of mishaps. Should we blame the manufacturer? The owner? The bot itself? Or should it be some combination of these? The EU is understandably worried that the actions of these machines will be increasingly incomprehensible to the puny humans who manufacture and use them. The resulting “black box”, it is argued, will preclude us from understanding what exactly went wrong and who should be liable.
Electronic personhood, the EU Parliament believes, is the solution to this problem. To be clear, the EU doesn’t want to imbue robots and AI with human rights, such as the right to vote, the right to life, or the right to own property. Nor does it want to recognise robots as self-conscious entities (thank goodness).
Rather, this measure would be similar to corporate personhood – an agreed-upon legal fiction designed to smooth business processes by giving corporations rights typically afforded to actual persons, namely humans. It’s akin to recent efforts to grant personhood status to parts of the natural world, such as rivers and forests, likewise for legal reasons.
Electronic personhood would turn each smart robot into a singular legal entity, each of which would have to bear certain social responsibilities and obligations (exactly what these would be, we don’t yet know).
Under this provision, liability would reside with the robot itself. We wouldn’t be able to throw a machine in gaol, but we could require all smart bots to be insured as independent entities. As noted in Politico, the funds for a compulsory insurance scheme could be provided by the wealth a robot accumulates over the course of its lifetime (assuming the robot is used to generate wealth in the first place, as a factory robot would be).
The EU says electronic personhood is not about granting human-equivalent rights to smart robots and AI, but rather the introduction of a special legal designation that recognises them as a special class of machines – but one requiring human backing.
If this sounds confusing (how can a standalone, independent entity still require “human backing”?), that’s because it is. The EU proposal is vague on many of the details, but if it’s anything like corporate personhood, it could introduce an array of complications. In Australia, for example, corporate persons can sue or be sued, enter into legal contracts, and be regulated at the level of a single entity – while at the same time shielding the individual owners and employees from liability. Does that mean, therefore, that the manufacturers and owners of robot persons would likewise be absolved?
It’s for this reason, among many other concerns, that 156 experts felt the need to sign an open letter to the European Commission, the body responsible for the proposal. The signatories, including legal expert Nathalie Nevejans from the CNRS Ethics Committee, AI and robotics professor Noel Sharkey from the Foundation for Responsible Robotics, and Raja Chatila, the former president of the IEEE Robotics and Automation Society, agree that laws are required to keep humans safe in an era of sophisticated machines.
But they take exception to the claim that it will be impossible to prove liability when self-learning, autonomous machines do something bad.
“From a technical perspective, this statement offers many bias [sic] based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction and a few recent sensational press announcements,” write the signatories in the letter.
The authors also say it’s inappropriate to base electronic personhood on existing legal or ethical precedents.
“A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms,” the authors claim.
“The legal status for a robot can’t derive from the Legal Entity model [either], since it implies the existence of human persons behind the legal person to represent and direct it. And this is not the case for a robot.”
Kate Darling, an expert in robot ethics at the MIT Media Lab who wasn’t involved with the open letter, said it doesn’t make sense to give robots electronic personhood status at this time.
“First of all, it sets the wrong incentives for manufacturers,” Darling told Gizmodo. “Second of all, I don’t understand this hand-wringing about unpredictable behaviour being a new and unsolvable problem. Your cat makes autonomous decisions, too, but we do not hold the cat legally responsible for its actions.”
Seth Baum, a researcher with the Global Catastrophic Risk Institute – also not affiliated with the open letter – believes robots and intelligent machines could in principle merit personhood. He said he’s “pleasantly surprised” that governments are even considering the idea of electronic personhood, as he would have expected them to be a bit more “human chauvinistic”. That said, he urges governments not to rush into this.
“Today’s robots and intelligent machines are almost certainly too simple to merit personhood by any reasonable standard,” Baum told Gizmodo. “Furthermore, there would probably need to be a different form of personhood for robots and intelligent machines, with different rights and responsibilities. In particular, the fact that they can be mass produced and replicated means we should be very careful about how we extend things like voting rights to them. Now is the time to debate these issues, not to make final decisions.”
“Robots should not be granted personhood now; there is no existing robot that remotely qualifies for person status,” Michael LaBossiere, a philosopher and expert in robot ethics at Florida A&M University, told Gizmodo. “However, we should work out the moral and legal issues now so as to try to avoid our usual approach of blundering into a mess and then staggering through it. So, I am in favour of laying the legal groundwork for the future of artificial persons.”
In terms of whether artificial persons can exist, LaBossiere says there’s no compelling reason to think that the mind must be strictly limited to organic beings. “After all, if a lump of organic goo can somehow think, then it is no more odd to think that a mass of circuitry or artificial goo could think,” he said. “For those who think a soul is required to think, it is also no more bizarre for a ghost to be in a metal shell than in a meat shell.”
As for when personhood status should be granted, LaBossiere said we should use the same tests we use to solve the problem of other minds when it comes to humans. “If an artificial being can pass the same language and behavioural tests as a human, it should get a presumption of status,” he said.
Sociologist and futurist James Hughes of the Institute for Ethics and Emerging Technologies agrees that robots may eventually deserve personhood status, but he’s worried that the language in the open letter rules this out as a possibility.
“The open letter is correct insofar as it suggests that current robots do not have moral standing and should not be considered capable of having rights,” Hughes told Gizmodo. “However they are wrong in rejecting the possibility of robots that could have moral standing and rights in the future.
“In fact, their argument is circular and nonsensical: Granting a robot human rights would violate human rights. If they mean by that the existing rights language is often human-racist (only humans can have rights) they are correct, just as racist laws in the past were unethical. If they mean that robot rights might conflict with the rights that humans exercise, that is true of all rights. Future robots may be able to be sufficiently human-like to be rights-holders, and when they are they should be granted rights.”
All of the experts we spoke to said it’s still way too early for the EU to be passing such laws, but there’s another consideration – one hinted at by the authors of the open letter.
By willingly and knowingly granting personhood status to entities that aren’t actually persons, we’re both diminishing what it means to be a person and ignoring living entities who are truly deserving of personhood status, namely nonhuman animals such as whales, dolphins, elephants and other highly sapient creatures. (Disclosure: I am the founder and chair of the IEET’s Rights of Nonhuman Persons program.)
To be clear, and as LaBossiere pointed out, this doesn’t mean robots shouldn’t or won’t eventually qualify as bona fide persons. If they ever become self-aware, conscious agents, it would be hypocritical and unfair of us to deny them personhood status. But for now, we’re still far from that critical point in our history.
The granting of personhood status to robots today may sound like a clever legal trick, but it’s actually intellectual laziness. When it comes to protecting humans and human property from robots and AI, we should come up with something more sensible. Something actually based in reality.