
The importance of ethics in the governance of artificial intelligence

Artificial intelligence and machine learning techniques have the potential to do a lot of good for the world. Beyond DeepMind's achievement of beating the world Go champion in March last year, AI is currently playing a role in improving the fairness of the insurance industry through companies like Lemonade, improving primary health care treatment through companies like Remedy, and making it possible to live in a future where we have self-driving cars, smart AI assistants, and highly detailed personalised education for our kids. It is also being used in ways we don't necessarily notice or understand: curating our news feeds on Facebook, suggesting new music to us on Spotify, and profiling us for crimes we have yet to commit. On the flip side, AI researchers have been discussing the threat of the singularity, where AI could surpass humans as the most intellectually sophisticated entities on the planet. Regardless of its final applications, it is critical to bring the worlds of computer science and machine learning together with the humanities: lawmakers, sociologists, psychologists, economists, philosophers, anthropologists, ethicists, and more. We need an interdisciplinary approach to creating regulatory frameworks so that AI is leveraged to benefit humanity, rather than as a means of control.

Lucky for us, there's a bunch of brilliant people working on it

OpenAI was founded on the principle that AI should be advanced in a way that benefits humanity as a whole, unconstrained by the need to generate financial return. The Ethics and Governance of Artificial Intelligence Fund is an attempt by the Knight Foundation, Reid Hoffman, Pierre Omidyar, the MIT Media Lab, and the Berkman Klein Center, amongst others, to encourage transparent, cross-disciplinary research into how best to manage AI, as well as to understand its broad effects on humanity. Stanford is conducting the One Hundred Year Study on Artificial Intelligence (AI100), a long-term investigation of the field of AI and its influences on people, their communities, and society. AI Now published a comprehensive report on the near-term social and economic implications of artificial intelligence technologies, focused on the themes of healthcare, labour, inequality, and ethics. And there's more, which the Berkman Klein Center has compiled into a handy list on their website here.

But what are some of the main issues facing AI, and what role should academia and other institutions play in guiding a beneficial future?

Julia Bossmann, who is the President of the Foresight Institute, believes there are 9 top ethical issues in AI:

  1. Unemployment at 'the end of jobs'. Self-driving cars, for example, could put millions of truckers out of work, but could also lower the risk of automobile-related accidents. Automation could also mean people are able to work fewer hours, leaving more time to spend with their families and engage with their local communities. Others argue that AI technologies will lead to 'mass redeployment', much in the same way that the industrial revolution led to a shift from agricultural living to cities. However, automation is likely to create new roles for highly skilled workers rather than low-skilled ones, so some are arguing for a universal basic income to ensure the livelihoods of displaced workers.
  2. Inequality and how we distribute wealth created by machines. Our economy is currently centered around compensating people for their time and contribution to the economy (broadly speaking). With automation, there'll be less of a need for a traditional human workforce and revenues will go to fewer people, so it's important to think about how we can ensure the benefits of AI are spread to all of humanity.
  3. Humanity and how machines affect our behaviour and interaction. AI could be used to nudge people towards more beneficial behaviour, but it could also be used to manipulate them. How our kids interact with human-like AI could also affect their development.
  4. Artificial stupidity and how we can guard against mistakes. It's critical to make sure that machines perform as intended and can't be exploited for personal gain.
  5. Racist robots and how we eliminate AI bias. AI systems are created by humans, who can be biased and judgmental, so it's important to avoid algorithms that behave in detrimental ways, e.g. racially profiling people when predicting future criminals (a simple check for this kind of bias is sketched after this list).
  6. Security and how we keep AI safe from adversaries. Cybersecurity wars will escalate should AI get into the hands of people with malicious intent.
  7. Evil genies and how we protect against unintended consequences. AI is only as good as the data it is given, so it is important to inject human judgment into the results it returns, e.g. avoiding solutions where we eradicate poverty by killing all poor people.
  8. Singularity and how we stay in control of a complex intelligent system. As human evolution and dominance of the planet stems from being smarter than other animals despite our inferior physical prowess, it will be vital to manage AI if it becomes smarter than human beings. DeepMind are in the process of developing a 'kill switch' so an advanced form of AI will be unable to avoid being shut down.
  9. Robot rights and how we define the humane treatment of AI. Consideration must be paid to how to treat AI legally when machines become able to perceive, feel, and act, much in the same way that animals have rights.
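
To make issue 5 a little more concrete, a common first check is demographic parity: comparing a model's positive-prediction rates across demographic groups. The sketch below is a minimal, hypothetical illustration (the predictions and group labels are invented for the example), not a complete fairness audit:

    # Minimal demographic-parity check (hypothetical data).
    # A large gap in positive-prediction rates between groups is one
    # warning sign that a model may have absorbed bias from its data.
    import numpy as np

    def positive_rate(preds: np.ndarray, mask: np.ndarray) -> float:
        """Share of positive predictions within one group."""
        return float(preds[mask].mean())

    preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # 1 = flagged by the model
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    rate_a = positive_rate(preds, group == "a")
    rate_b = positive_rate(preds, group == "b")
    print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")

A gap near zero doesn't prove a model is fair (demographic parity is only one of several competing fairness definitions), but a large gap is a prompt to investigate the training data and features.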

Urs Gasser, who is the Executive Director of the Berkman Klein Center, sees there being 5 roles that universities will play when it comes to the ethics and governance of AI:

  1. Supplying open resources for research, development, and deployment of AI systems, particularly those in the public interest and for social good. Commercial and nation-state interests in the deployment of AI will likely mean that it won't stay open forever, so it is important for universities to ensure access to AI resources and infrastructure over time, e.g. computing resources and data sets that play a strategic role in ML.
  2. Access and accountability. Universities have the capability to act as independent, public-interest-oriented institutions that develop means of measuring and assessing AI systems' accuracy and fairness. We need new methodologies to understand the black box of how these algorithms perform, as it can be hard even for an algorithm's creator to determine how it reaches its decisions (one simple probe of this kind is sketched after this list).
  3. Social and economic impact analysis. Universities can establish methodologies and determine suitable review and impact measurement factors. It's important to understand what these technologies are doing to society, and how we can ensure that our knowledge base survives and expands over time.
  4. Engagement and inclusion. Universities can bring together various AI stakeholders who may otherwise not have been willing to engage in dialogue, because they'd be in direct competition with one another. BKC has also discussed developing an inclusion lab, which would explore ways in which AI systems can be designed and deployed to support efforts aimed at creating a more diverse and inclusive digitally networked society.
  5. Translator. Universities will be able to act as a translator by communicating the implications, opportunities, and risk of AI from the relatively small group of experts who understand the technology to the public at large.
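
To ground the 'black box' point in role 2, one simple model-agnostic probe is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses invented data, with a scikit-learn logistic regression standing in for an opaque model:

    # Minimal permutation-importance sketch (hypothetical data).
    # Shuffling one feature breaks its relationship with the label;
    # the resulting accuracy drop hints at how much the model relies on it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                    # three made-up features
    y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)    # feature 0 dominates the label

    model = LogisticRegression().fit(X, y)
    baseline = model.score(X, y)

    for i in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, i])                    # destroy feature i's signal
        print(f"feature {i}: accuracy drop {baseline - model.score(X_perm, y):.3f}")

Probes like this don't open the black box itself, but they give independent reviewers a reproducible way to ask which inputs a system's decisions actually depend on.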

He concludes by emphasising the importance of closing the divide between engineers and computer scientists on the one hand, and the humanities, social scientists, policymakers, and ethicists on the other. He also underscores the role that universities will play in developing AI for the public good:

"From the perspective of the university, the wave of AI that has washed over the globe has sparked great opportunities. More importantly, technological developments have underscored the responsibilities and indeed, idiosyncrasies, that endow universities with the unique ability to act as providers, conveners, translators, and integrators, to leverage artificial intelligence in the public interest and for the greater good."

The Berkman Klein Center and MIT Media Lab have also jointly created a video series about the ethics and governance of AI, which can be found here. Topics range from how to ethically design AI systems that complement humanity, to the threats AI could pose to civil liberties and democracy, the developmental challenges it raises for our kids, its potential in education and personalised learning, and the need for openness and oversight in AI's development.

There's a lot of work still to be done, and opening the dialogue between researchers across fields, industry, and government is a necessary step in the right direction.