In his first major public move since being appointed head of Google’s parent company last month, Alphabet CEO Sundar Pichai has called for AI regulation to help govern how the emerging technology is used.
Pichai shared his concerns regarding how and why AI should be regulated in a recent editorial in the Financial Times in which he wrote:
“Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”
Pichai recommends a sensible approach that balances “potential harms, especially in high-risk areas, with social opportunities”. In his editorial, he also pointed to Europe’s GDPR as a “strong foundation” for AI regulation.
The case for regulation
While Pichai calls for new regulation, he makes the case that guidelines already exist in certain areas. For instance, existing medical frameworks could serve as “good starting points” for devices such as AI-assisted heart monitors. Self-driving cars, on the other hand, will require governments around the world to “establish appropriate new rules that consider all relevant costs and benefits”.
Leveraging AI for the greater good is a priority for Alphabet’s CEO, and he believes that letting the market decide how the technology will be used simply isn’t good enough, writing:
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”
In his editorial, Pichai also referenced Google’s AI Principles, which were introduced in 2018 following internal criticism over Google Cloud’s work with the US military. The principles have been applied across Google, and they specify areas where the company “will not design or deploy” its technologies.
If used incorrectly, AI could have devastating effects on humanity, which is why regulation will likely come sooner rather than later.