It's Time to Start Applying Lessons Learned from Cyberlaw to Robotics and AI

Alexander Sakhanyuk
Georgetown Law, Class of 2017

Robots and self-aware computers have long held a prominent place in science fiction. Ours may be the era in which these science fiction tropes become reality. Legal scholars such as Ryan Calo, Matthew Scherer, and Jack Balkin are busy at work preparing for this future. In predicting what the robotics and artificial intelligence (“AI”) revolutions might bring, they look to the history of another transformative technology: the Internet.

The history of legal issues raised by the Internet is a useful starting point. Just as the Internet raised jurisdictional problems by enabling interstate crime without regard to physical boundaries, so do robotics and artificial intelligence. A robot in one state, for instance, could be used to injure a resident of that state by a human operator controlling it from another state.

In addition, both sets of technologies present similar questions of choice of law and enforcement. If a U.S.-made AI writing pulp fiction in China violates censorship laws, would a judgment against the company that created the AI be enforceable in U.S. courts? In Yahoo! Inc. v. La Ligue Contre Le Racisme et L’Antisemitisme, 433 F.3d 1199 (9th Cir. 2006), a similar issue splintered the court. In that case, two French organizations obtained interim orders from a French court requiring Yahoo! to stop hosting hate speech on its French website. Yahoo! sought and obtained from the district court a declaratory judgment that the orders were unenforceable. The court of appeals reversed, holding that the issue was unripe for decision. A dissent warned against allowing other countries to police U.S.-based speech: “Are we to assume that U.S.-based Internet service providers are now the policing agencies for whatever content another country wants to keep from those within its territorial borders—such as, for example, controversial views on democracy, religion or the status of women?” Id. at 1252. Robots and AI will raise similar issues.

Experience with the Internet also suggests that regulation of AI expression would intersect with human First Amendment rights. An AI’s programming is the speech of its designers, and what it says is determined by its makers. The makers of an author-AI will determine what content the AI accesses, how that content is processed, and what the AI will write, say, or sing. The final product would be an expression of the human behind the AI, much the same way that a photograph is an expression of the photographer, not the camera. Thus, AI speech would be human speech deserving of First Amendment protection.

Legal scholars should already be addressing AI First Amendment issues. The foundation for author-AI technology is already in place. Websites like Springhole offer plot generators. Online encyclopedias like TVTropes identify common characters, plots, and settings in popular fiction, then tag specific works with these elements. Combining such applications with AI could result in software capable of writing novels. Amateur fiction-hosting sites such as Archive of Our Own could become ‘classrooms’ where AIs publish stories and learn from reader reactions. A similar approach could be used in other media, resulting in AI-made radio programs or songs.

How should author-AI speech be protected? Since AI speech is really human expression, it should receive the same protections as human speech. However, as Toni Massaro and Helen Norton noted in Siri-ously? Free Speech Rights and Artificial Intelligence, treating AI speech strictly as human speech could have the unintended result of making AI speech more protected than that of people. The solution is to condition such protection on the human creator being identifiable and on a determination that the creator intended the AI speech. In a dispute, if the creator asserts his or her First Amendment rights and the government contends that the creator did not intend the AI speech, the burden should be on the government to prove that the AI generated the speech in error or otherwise without the creator’s intent. The burden can shift to the creator depending on the ‘strength’ of the AI: if the AI has relative freedom and high intelligence, its speech should be treated as its own and should not receive First Amendment protection.

As the work of these legal scholars shows, the questions likely to be raised by advances in robotics and AI can be anticipated. It’s important that we start answering those questions now, so that we can enjoy the fruits of these technologies without the legal uncertainties they currently present.