DeepMind launches AI Ethics unit
By Staff Writer 5 October 2017 | Categories: news

While an increasing number of companies use “artificial intelligence” as part of their sales pitch, the big players, such as Google’s DeepMind (the team behind the AlphaGo champion), are grappling with the challenges and ethical problems that have emerged alongside the growth of AI. DeepMind announced in a blog post the establishment of DeepMind Ethics & Society, eloquently explaining why this research unit was necessary.
“As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI creates important and complex questions. Its impact on society—and on all our lives—is not something that should be left to chance.” The post goes on to state that, because the field of AI is so broad, ensuring positive outcomes is difficult, and that DeepMind needs to understand the wider impact of its own work.
Some, like Professor Nick Bostrom of Oxford University, might say that the establishment of DeepMind Ethics & Society was long overdue. His 2014 book, Superintelligence, warned of the dangers of AI surpassing human intelligence. Fittingly, he is one of the research fellows on the unit, alongside:
● Professor Diane Coyle: Professor of Economics and Co-Director of Policy@Manchester
● Professor Edward W. Felten: Professor of Computer Science and Public Affairs, Founding Director of Princeton's Center for Information Technology Policy
● Christiana Figueres: Leader on global climate change, convener of Mission 2020
● James Manyika: Senior Partner at McKinsey and Chair of the McKinsey Global Institute
● Professor Jeffrey D. Sachs: Professor of Economics, Director of the Center for Sustainable Development at Columbia University and senior UN advisor
The DeepMind Ethics & Society Fellows are independent advisors who help to “provide oversight, critical feedback and guidance for its research strategy and work programme”, as the unit puts it.
Some of the key ethical challenges that the research unit has identified and which will be addressed include:
● The economic impact of AI and ensuring inclusion and equality
● Concerns regarding privacy, transparency and fairness
● How to manage AI risk
● The role of AI in addressing international challenges such as global warming
● How morality and values can be incorporated into AI to align it with ethical norms
Alongside the research fellows, DeepMind has set out five core principles for the unit:
1. Social benefit
“We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies.”
2. Rigorous and evidence-based
“Our technical research has long conformed to the highest academic standards, and we’re committed to maintaining these standards when studying the impact of AI on society.”
3. Transparent and open
“We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted and we will never attempt to influence or pre-determine the outcome of studies we commission.”
4. Diverse and interdisciplinary
“We will strive to involve the broadest possible range of voices in our work, bringing different disciplines together so as to include diverse viewpoints.”
5. Collaborative and inclusive
“We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society.”
Will DeepMind Ethics & Society ensure beneficial and responsible AI for all, especially given that artificial intelligence may one day surpass human general intelligence? Better that these questions are addressed sooner rather than later.
Image: Shutterstock