The development of AI is creating new opportunities to improve the lives of people around the world, from business to healthcare to education. It is also raising new questions about the best way to build fairness, interpretability, privacy, and security into these systems.
Google recently updated its AI Principles, a document created in June 2018 to guide the ethical development and use of AI in the company’s research and products. At the end of 2018, a companion document entitled Responsible AI Practices set out a list of technical recommendations, to be updated quarterly, with results shared with the wider AI ecosystem.
To encourage teams throughout Google to consider whether and how the company’s AI Principles affect their projects, several initiatives have been launched:
• Trainings based on the “Ethics in Technology Practice” project developed at the Markkula Center for Applied Ethics at Santa Clara University, with additional materials tailored to the AI Principles. The content is designed to help technical and non-technical Googlers address the multifaceted ethical issues that arise in their work.
• AI Ethics Speaker Series with external experts across different countries, regions, and professional disciplines. So far, we’ve had eight sessions with 11 speakers, covering topics from bias in natural language processing (NLP) to the use of AI in criminal justice.
“Along with these efforts to engage Googlers, we’ve established a formal review structure to assess new projects, products and deals,” writes Kent Walker, Senior VP of Global Affairs at Google, in a blog post. Thoughtful decisions, he maintains, require a careful and nuanced consideration of how the AI Principles (which are intentionally high-level to allow flexibility as technology and circumstances evolve) should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks in a given circumstance.
“The variety and scope of the cases considered so far are helping us build a framework for scaling this process across Google products and technologies,” he writes. This framework is intended to include the creation of an external advisory group, composed of experts from a variety of disciplines, to complement Google’s internal governance and processes.