On March 26, Google said it was launching a global advisory council, the Advanced Technology External Advisory Council (ATEAC), to tackle ethical issues around artificial intelligence and other emerging technologies. The eight-member council will meet four times in 2019 starting this month and publish a report at the end of the year, the company said. “We hope this effort will inform both our own work and the broader technology sector,” Google’s post read.

Who is on Google’s AI council?

The council comprises technology experts, digital ethicists, and people with public policy backgrounds, drawn from the corporate world, academia, and government. They are:

  • Alessandro Acquisti, a leading behavioral economist and privacy researcher.
  • Bubacarr Bah, an expert in applied and computational mathematics.
  • De Kai, a leading researcher in natural language processing, music technology and machine learning.
  • Dyan Gibbens, an expert in industrial engineering and unmanned systems.
  • Joanna Bryson, an expert in psychology and AI, and a longtime leader in AI ethics.
  • Kay Coles James, a public policy expert.
  • Luciano Floridi, a leading philosopher and expert in digital ethics.
  • William Joseph Burns, a foreign policy expert and diplomat.

Google said the council will consider some of the most complex challenges that arise under its AI Principles, which it announced last June, such as facial recognition and fairness in machine learning.

In four days, AI council explodes into controversy

But on Saturday, just four days after the announcement, Acquisti tweeted that he had declined Google’s invitation to join the council, saying he did not believe it was the right forum for him to engage in this “important work.”

On Monday, Bloomberg reported that a group of employees had started a petition demanding that the company remove another member: Kay Coles James, president of the conservative think tank the Heritage Foundation, who has fought against equal-rights laws for gay and transgender people. More than 500 Google employees had signed the petition anonymously by late Monday morning local time, according to Bloomberg.

What are Google’s AI Principles?

In June 2018, Google wrote a blog post laying out its AI Principles. Here is a summary.

Google believes AI should:

1. Be socially beneficial

We will take into account a broad range of social and economic factors, and will proceed where we believe the benefits substantially exceed the downsides.

We will strive to make high-quality information available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate.

2. Avoid creating or reinforcing unfair bias

We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety

We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.

4. Be accountable to people

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles

We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence

AI tools have the potential to unlock new realms of knowledge in biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to advance AI development.

We will work with a range of stakeholders, and publish educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles

We will work to limit potentially harmful or abusive AI applications, and will evaluate likely uses in light of the following factors:

  • Primary purpose and use: including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications Google will not pursue

While laying out its AI Principles, Google also specified what it would not use AI for:

  • Technologies that cause or are likely to cause overall harm
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  • Technologies that gather or use information for surveillance violating internationally accepted norms
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights