Through the years, artificial intelligence has progressed from a futuristic ambition of secretive research projects to a technology that shapes almost every aspect of modern life. In its early days, there were intense debates about the possibility of AI running amok and reducing humans to servants of AI-run machines. More than half a century of formal research since then, punctuated by major advances, has established the benefits of AI and made it a defining technology for the future. However, as AI applications multiply rapidly, the scientific community and lawmakers alike are concerned that unregulated AI could lead to misuse and abuse.

In this context, the European Commission took a clear stance in February 2020 by releasing a white paper that could lead to a regulatory framework for AI. Later that year, the European Parliament adopted a set of proposals on how the EU can best regulate AI to boost innovation, ethical standards, and trust in technology. More recently, scientists convened virtually for an event titled ‘Governance Of and By Digital Technology’ to deliberate on the principles needed to regulate current and future digital technologies and to prevent the harmful impact of decision-making algorithms. The event was hosted by EPFL’s International Risk Governance Center (IRGC) and the European Union’s Horizon 2020 TRIGGER Project.

Among the speakers at the conference were EcoCloud members Bryan Ford and James Larus. Professor Ford is an Associate Professor at EPFL and head of the Decentralized and Distributed Systems Laboratory (DEDIS) in the School of Computer and Communication Sciences, while Professor Larus is Dean of the IC School and Academic Director of the IRGC.

Professor Ford called for the “cautious use” of powerful AI technologies but warned against deploying them to define, implement, or enforce public policy. Policymaking, he argued, should remain an exclusively human domain. Citing a real-world example, he said, “AI may have many justifiable uses in electric sensors to detect the presence of a car—how fast it is going or whether it stopped at an intersection—but I would claim AI does not belong anywhere near the policy decision of whether a car’s driver warrants suspicion and should be stopped by Highway Patrol.”

Professor Larus emphasized that AI applications carry risks as well as benefits, and that they must never be allowed to cross the “thin red line” between the two. Regulation, he argued, is needed to maintain that balance.

Other speakers at the conference included Stuart Russell, Professor of Computer Science at the University of California, Berkeley, who drew attention to the already evident risks posed by poorly designed AI systems, including online misinformation, impersonation, and deception. Marie-Valentine Florin, Executive Director of the IRGC, reminded participants that artificial intelligence is not an end in itself but only a means to an end.


https://actu.epfl.ch/news/crossing-the-artificial-intelligence-thin-red-line/