Google's AI Principles and the Battle for Humanity

Earlier this year, advocates and tech workers successfully lobbied Google to abandon a project with the Pentagon, code-named “Project Maven.” Google’s role in the project was to provide artificial intelligence (AI) that would analyze massive amounts of surveillance data for drones. I imagine if you did a survey of human reactions to enlisting artificial intelligence to figure out who to kill with a drone, the average person would check the box next to “dystopic hellscape.”

This seems to be the response of the many humans working over at Google, more than 3,000 of whom signed a letter opposing the contract, saying "Google should not be in the business of war." In response to the controversy, Google's CEO Sundar Pichai penned a rather long list of "AI Principles" to provide some basic benchmarks for Google's engagement in the sector, and Google canceled the Project Maven contract.

Too bad nobody liked Pichai's principles. One author, writing for Bloomberg, noted that "[w]e're in a golden age for hollow corporate statements sold as high-minded ethical treatises." TechCrunch found the principles "fuzzy," and wrote that Google gave itself "considerable leeway with the liberal application of words like 'appropriate'" throughout. One particularly colorful article says "Pichai's blog post is nothing more than thinly-veiled trifle aimed at technology journalists and other pundits in hopes we'll fawn over the declarative statements like 'Google won't make weapons.' Unfortunately there's no substance to any of it." Presumably this is because, as a mere statement of principles, the document does not bind Google to any action in an enforceable way.

But let's give Google some credit for taking a step here, even if, as the above authors suggest, it has been in the AI business too long for a set of vague principles to impress us in 2018. That may be true, but Google now has an opportunity to put its money where its blog post is and make these principles (or a more specific, actionable version of them) a condition on all of the intellectual property it creates.

What I’m saying is that if Google is truly committed to ensuring that its technology is not a part of a dystopic future in which the AIs pick which humans deserve to die, it needs to make its ethical commitments legally binding. And it can.

By embedding human rights conditions in its IP licenses, Google can stop itself (including its future self) and others from using the technology it creates for evil.

This also works for any open source technology Google creates. Does Google use the MIT or GPL license? Great: just add a "morals clause" that stops any future user (licensee) from using the technology if they fail to comply with these terms.
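To make this concrete, here is an illustrative sketch (not legal advice, and not an actual Google or CAL license) of what a morals-clause rider appended to the standard MIT License might look like. The specific restrictions listed are hypothetical placeholders:

```text
MIT License (with Ethical Use Rider)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction [...standard MIT terms...], subject to
the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

ETHICAL USE RIDER (hypothetical example):
1. The Software shall not be used in the development, testing, or operation
   of weapons or weapons-targeting systems.
2. The Software shall not be used by, or under contract with, any military
   or immigration-enforcement agency.
3. All rights granted under this license terminate automatically upon any
   violation of conditions 1 or 2.

THE SOFTWARE IS PROVIDED "AS IS" [...standard MIT warranty disclaimer...].
```

One caveat worth noting: a license with use restrictions like these no longer satisfies the Open Source Definition, which forbids discriminating against persons, groups, or fields of endeavor, so the resulting license would be "source available" rather than open source in the OSI sense.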

There are different ways this can be done. At CAL, we have designed licenses for artists and freelance software developers that are geared toward stopping human rights abuses in supply chains. But this concept is highly portable. Google's AI Principles are probably too general to serve as licensing terms, with the exception of the prohibition on using its tech in weapons development. But if Google made a list of concrete restrictions that uphold Pichai's principles, those could become terms of the license, requiring Google to live up to its promises. For example, Google could restrict the use of its IP by certain government agencies or types of companies (i.e., no use by, or under contract with, the Pentagon, ICE, or defense contractors).

The time for watered down, voluntary corporate social responsibility has passed. We are over it. Companies, if you mean what you say, make it legally enforceable. Bind yourself to that promise, or go home.

Charity Ryerson is a co-founder and Legal Director for Corporate Accountability Lab.
