Georgetown Law, Class of 2017
For anyone with a passing familiarity with popular science fiction, the impending dawn of robotics and artificial intelligence (AI) probably brings a mixture of hope and dread. On the positive side, these overlapping technologies promise to transform society, revolutionizing everyday activities and creating unimagined opportunities. The possibilities are endless: automated vehicles promise to reinvent the daily commute as a period of productivity or relaxation, while simultaneously effecting a massive reduction in traffic accidents; self-piloting drone delivery services could reduce shipping costs and drastically increase shipping speed, bringing products in a matter of hours rather than days, further diminishing the need for physical retail spaces; surgery performed by medical robots could lower costs, improve quality of care, and solve problems of access to treatment through mass production and wide-scale deployment. And these are only a few of the applications currently in development.
But for every societal breakthrough, there is potential for incredible harm. Renowned physicist Stephen Hawking recently told the BBC, “The development of full artificial intelligence could spell the end of the human race.” Entrepreneur and inventor Elon Musk shares similar concerns, describing the development of AI as mankind’s “biggest existential threat,” while calling for regulatory oversight. These fears echo what we have seen in film, television, and fiction for decades: hyper-intelligent machines attain something resembling consciousness, decide humans are a threat (or a nuisance), and turn their cold, uncaring logic toward the problem of exterminating mankind. But whether or not sentient machines are a valid concern, to the extent robotics and AI have the capacity to affect the physical world and the potential to behave unpredictably, there will be a tangible threat of physical harm to humans. See Ryan Calo, Robotics & the Lessons of Cyberlaw. Beyond physical harm, unpredictable AI behavior can also cause pure economic harm, as with the “Flash Crash of 2010,” in which the interaction of algorithmic trading programs produced a trillion-dollar stock market crash lasting only 36 minutes.
When the potential risks of a given technology stretch from trillion-dollar losses to physical injuries on a mass scale (not to mention the background threat of human extinction), there is a compelling argument for regulation. See Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. The strongest argument against regulating a nascent industry is typically the fear of stifling innovation—hoping to avoid unnecessary regulatory hurdles, businesses often push for self-regulatory approaches, such as the aspirational but vaguely defined “privacy by design” movement. What distinguishes AI, however, is the significant push for regulation from within the industry. Because tort harms are inevitable, AI presents challenging questions of liability—it makes sense that businesses would want legal clarity by way of government regulations (and, naturally, that they might want a hand in developing those regulations). Such clarity would help businesses assess the liability exposure of their AI and robotics systems before undertaking riskier deployments.
Because of the extreme technical complexity of these systems, a purpose-specific administrative agency probably offers the best opportunity to develop the expertise necessary to implement a thoughtful legal framework. Ideally, that framework will recognize the array of potential harms, identify a reasonable scheme of liability, and put in place regulations that still allow for innovation and competition. Attorney Matthew Scherer proposes a novel approach: he would create an agency tasked with “ensur[ing] that AI is safe, secure, susceptible to human control, and aligned with human interests, both by deterring the creation of AI that lack those features and by encouraging the development of beneficial AI that include those features.” This theoretical agency would be responsible for developing policy and operating a certification program for AI developers, manufacturers, and operators; under this scheme, companies that obtain AI certification enjoy limited tort liability, whereas uncertified AI-related companies are subject to strict liability. The risk of strict liability and the inevitable costs of certification might deter small startups from entering the market while favoring larger companies like Google, which can absorb the expense of either approach. But Scherer’s scheme otherwise strikes a fair balance between incentivizing safety and shepherding innovation. Any number of regulatory schemes could work, but the government should assert control over the future sooner rather than later—before the technology outpaces the law.