Democratizing AI (Part I) - How to ensure human autonomy over our computational “screens, scenes, and unseens.”

by Richard Whitt, Senior Fellow

Digital assistants such as Alexa and Siri and Google Assistant can be quite helpful — but their actual allegiance is to Amazon and Apple and Google, not to the ordinary people who use them. By introducing AI-based digital agents that truly represent and advocate for us as individuals, rather than corporate or government institutions, we can make the Web a more trustworthy and accountable place.

In the 2004 film “I, Robot,” Will Smith’s character, the embittered Detective Del Spooner, harbors an animosity toward the humanoid robots operating in his society. Over the course of the film we learn why. While Spooner was driving on a rainy street, a crash sent his car and another careening into a torrential river. A rescue robot was deployed to pull Spooner and his car to safety, but Spooner implored it to instead retrieve a young girl still alive in the other car.

The rescue robot doesn’t listen. It turns out this artificial intelligence was programmed to rescue the human with the best chance of survival — and since Spooner’s odds were deemed better (45 percent to 11 percent), he was the one chosen to be saved. The 12-year-old girl perishes. As Spooner bitterly puts it later, “Eleven percent is more than enough. A human being would have known that.”

Parts of “I, Robot” remain sci-fi lore, like AI first responders with physical bodies. But the ubiquity of AI is no longer fiction. Today, artificial intelligence powers search engines, social media platforms, smart speakers, drones, and much more. The ethical conundrums presented in “I, Robot” are no longer fiction, either: AI algorithms now do everything from curating our news, to diagnosing our health, to determining who gets a loan, a job, or parole. Think of these AIs as incredibly advanced, and hugely impactful, selection engines.

Ceding any kind of human decision-making to machines is ethically complex territory. This is especially true when the machines are proverbial “black boxes,” allowing little transparency into how they are programmed. Who decides who designs and builds the algorithms, and what data they are trained on, are deep questions without easy answers. And today a further complication makes this realm even more problematic: the AIs embedded in and shaping our everyday lives are typically beholden to the priorities and control of large institutions, not to the “end user.”

Device-embedded AIs like Alexa and Siri can be useful, even delightful, and they are impressive feats of engineering. But it’s important to recognize that their real allegiance is to Amazon and Apple, not to the individual person. As a result, the tech giants of Silicon Valley and elsewhere have their virtual agents perched in our living rooms and embedded within our phones and laptops, constantly vying for our attention, our data, and our money.

We already can see the consequences of this dynamic, from privacy breaches to technology addiction to the spread of misinformation. As AIs become more advanced, and make more decisions for us and about us, what will these problems look like on a larger scale? And is there a way to prevent them?

Read the rest at