The Partnership on AI: A Modern Manhattan Project?

On June 29, 2016, Sam Harris delivered a TED Talk in which he posed the question: “Can we build artificial intelligence without losing control of it?” To answer that question, he proposed the founding of “something like a Manhattan project on the topic of artificial intelligence”. On September 28, 2016, leading Silicon Valley AI developers entered into a “Partnership on AI to Benefit People and Society”. Is this the answer Harris hoped for?

What is the “Partnership on AI”, and who are the Partners?

The “Partnership on AI” is a not-for-profit platform to support best practices in the development of artificial intelligence. Amazon, Facebook, Google (with its DeepMind subsidiary), IBM, and Microsoft are the founding partners. These companies are industry leaders in the development of artificial intelligence, drones, and enterprise technologies.

IBM’s Watson AI made headlines in recent years for its ability to research and compile relevant information at superhuman speeds. Watson has the potential to fundamentally change the nature of industries reliant on intelligent research. DeepMind, Google’s AI research subsidiary, also made headlines recently when its “learning” AlphaGo program defeated a world champion at the ancient strategy game Go. The space of possible moves in Go is astronomically larger than in chess, marking a distinct shift in the capabilities of computing since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997.

Why should we be concerned about AI?

These systems show that computers are already capable of information processing that exceeds human ability in some areas. Sam Harris’ TED Talk argued that “if intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.” At the same time, he argued, we have little understanding of how to constrain such an intelligence, and “we have no idea how long it will take us” to develop that understanding.

We should be afraid of this paradigm. Artificial intelligence, if incorrectly implemented, could have disastrous consequences for human society and the global economy. The extreme example Harris offered was that “a few trillionaires”, benefitting from the exponentially improved productivity of AI, “could grace the covers of our business magazines while the rest of the world would be free to starve”, as AI erodes jobs and networks of economic exchange. The fear in this example is not that artificial intelligence would become malevolent, as so much science fiction has proposed it may, but that it would be so much more intelligent and capable than humans that, intellectually, we would be to it what ants are to us.

What does the Partnership propose to do about this?

The mission statement and tenets of the Partnership on AI respond to some of Harris’ concerns. The organization states that its mission is to uphold “ethics, fairness, inclusivity, transparency and interoperability, and privacy” in the development of artificial intelligence.

The organization intends to bring together experts from a broad range of fields to respond to the implications of AI in relation to economics, social science, finance, public policy, and law.

The organization’s tenets include commitments “to ensure that AI technologies benefit and empower as many people as possible”, to “maximize the benefits and address the potential challenges of AI technologies”, and to work “to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society”. These tenets suggest that the organization understands, and empathizes with, the concerns about AI raised by Harris and others.

What does this mean?

It remains to be seen whether this organization, and the oversight it vows to provide, will prove sufficient to mitigate the potential threats and issues raised by Harris. Concerns have already been raised about the absence of Apple and Elon Musk (of OpenAI, SpaceX, and Tesla) from the agreement.

Apple’s Siri personal assistant and Tesla Motors’ self-driving cars are two of the highest-profile artificial intelligence applications on the market, and both companies stand poised to play a major role in the development of AI. It remains possible that they could join the “Partnership”; however, both Apple and Musk are known for their history of independence in the tech market. If these developers choose to remain independent, they could seriously undermine the authority of the “Partnership” and the ability of the AI development ‘industry’ to self-regulate.

It is also worth considering that the “Partnership” is rooted solely in American businesses. This is a problem insofar as it does not account for the emergence of new AI developers outside the United States – in China or India, for example. In an extreme case, the centralization of AI development in the United States could even contribute to the kind of Cold War-esque tensions Harris warned his audience about during his talk.

The Manhattan Project for AI?

Harris’ Manhattan Project analogy is significant. The Manhattan Project brought together many of the world’s greatest scientists and mathematicians to construct the atomic bomb, with the purpose of ensuring that such power did not fall into the wrong hands – Nazi Germany’s – during the Second World War. For all intents and purposes, the project succeeded: the bomb was built and used to end the war. However, as history proved, despite the positive intentions behind the project, it ultimately contributed to further evils as an impetus for the Cold War. Albert Einstein, who contributed to the project indirectly, later regretted the creation of the device.

This author believes the possibility that AI could go the way of the atomic bomb – that is, result in disastrous consequences despite our best efforts to regulate it – should be cause for concern. While the full capabilities of AI remain in question as developers seek ever greater cognition from their machines, this may be, as Harris argued, a critical point in our history.


Christopher McGoey is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.
