June 28, 2019 - Institute hosts event on Wiki AI: looking under the hood of applying ethics to machine learning

On Friday, the Tech Institute, along with the Center for Democracy & Technology, welcomed the Wikimedia Foundation to host “Wiki AI: Looking Under the Hood of Applying Ethics to Machine Learning”.

The workshop provided civil society and congressional staff an in-depth, open forum to discuss new algorithmic tools developed by Wikimedia to assist with governance and moderation on its web platforms. The tools help editors flag troubling content, automatically translate existing Wiki pages, assist humans with edits, and automate other functions. The dialogue with Wikimedia’s technical, research, and policy staff furthered the ongoing conversation about best practices for developing and implementing new AI tools and the potential of automated processes to reinforce existing bias, and it offered new insight into what policymakers can learn from Wikimedia’s experience to inform the broader discussion around the responsible development and deployment of artificial intelligence.

Representatives from Wikimedia provided insight into the values of the Foundation and how engineers strive to develop new tools in line with Wikimedia’s commitment to openness and transparency. Key topics included how Wikimedia balances the accuracy of its content against the risk of entrenching bias through automated editing, how it ensures that Wikimedia remains a human-centered platform while enabling new workflows, and how it deploys tools in a way that respects the existing community while making editing and moderation a more inclusive process.

Throughout the morning, panelists especially highlighted four major themes:

  • Research in the Service of Free Knowledge: Wikimedia increasingly serves as a global knowledge repository. The Foundation is researching its knowledge gaps and the integrity of that knowledge in order to develop technology that effectively supports the world’s free knowledge infrastructure.

  • Developing Ethical AI: How can Wikimedia’s developers incorporate ethical guidelines into the nuts-and-bolts practice of product design?

  • Transparency Isn’t Enough: Wikimedia’s machine learning tools must balance quality control and efficiency with fairness. How the Wikimedia community of contributors interacts with these tools illustrates the value of auditing machine learning tools and of moving from transparency to empowerment and participation.

  • Treasure In; Treasure Out: Wikimedia’s intentional approach to design and user data takes lessons from human-to-human processes of building trust to strive for immersive feedback and equitable value exchange between users and AI systems.

Thanks to our partners at the Wikimedia Foundation and the Center for Democracy & Technology, as well as Georgetown Law’s Amanda Levendowski, for collaborating with us on this event. More information about Wikimedia’s use of AI on its platform is available at https://www.mediawiki.org/wiki/Wikimedia_Product/Perspectives/Augmentation.