The company responsible for AlphaGo, the first AI program to defeat a professional human player at Go, has launched an ethics group to oversee the responsible development of artificial intelligence. It’s a smooth PR move given recent concerns about super-smart technology, but Google, which owns DeepMind, will need to support and listen to its new group if it truly wants to build safe AI.
The new group, called DeepMind Ethics & Society, is a research unit that will advise DeepMind scientists and developers as they work to build increasingly capable and powerful AI. It has been given two primary aims: helping AI developers put ethics into practice (for example, maintaining transparency, accountability and inclusiveness), and educating society about the potential impacts of AI, both good and bad.
“Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” states an introductory post at DeepMind. “As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI creates important and complex questions. Its impact on society — and on all our lives — is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.”
Indeed, we’re quickly heading into uncharted territory. Unchecked, laissez-faire development of AI could lead to any number of undesirable social outcomes, from bots that acquire biases against race, gender and sexual orientation, through to poorly programmed machines prone to catastrophic errors. Accordingly, the new DeepMind ethics group says that AI applications “should remain under meaningful human control” and be used for “socially beneficial purposes”.
To that end, the group has set out five core principles: DeepMind scientists and developers need to ensure that AI is good for society, evidence-based, transparent and open, diverse and interdisciplinary, and collaborative. It has also listed several key ethical challenges, such as mitigating economic impacts, managing AI risk, and agreeing on AI morality and values. An advisory group of fellows has also been established, including thinkers and experts such as Oxford University philosopher Nick Bostrom, University of Manchester economist Diane Coyle, Princeton University computer scientist Edward W. Felten, and Mission 2020 convener Christiana Figueres, among others.
All of this is nice and well-intentioned, of course, but what matters now is what happens next.
When Google acquired DeepMind in 2014, it promised to set up a group called the AI Ethics Board, but it’s not clear what that board has done in the three-and-a-half years since the acquisition. As The Guardian points out, “It remains a mystery who is on [the AI Ethics Board], what they discuss, or even whether it has officially met.” Hopefully the DeepMind Ethics & Society group will get off to a better start and actually accomplish something meaningful.
Should that happen, however, the ethics group may offer advice that its DeepMind and Google overlords won’t appreciate. It could, for example, advise against deploying AI-driven applications in areas Google deems potentially profitable, or recommend constraints on AI that severely limit the scope and future potential of the company’s products.
These sorts of ethics groups are popping up all over the place right now (OpenAI, co-founded by Elon Musk, is another example), but they’re all just a prelude to the inevitable: government intervention. Once AI reaches the stage where it truly threatens society, and examples of harm become impossible to ignore, governments will need to step in and start imposing regulations and controls.
When it comes to AI, we’re very much in the Wild West phase — but that will eventually come to an abrupt end.