Tarleton Gillespie is a Principal Investigator at Microsoft Research, New England, and an Associate Professor in the Department of Communication at Cornell University, USA. He completed his MA (1997) and Ph.D. (2002) in Communication at the University of California, San Diego. His research focuses on the contested relationships between digital media and their providers. His most recent book is Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale 2018).
The public debate over content moderation has focused on removal: social media platforms delete content and suspend users, or choose not to. But removal is not the only remedy available. Reducing the visibility of problematic content is becoming a common part of platform governance. Platforms use machine-learning classifiers to identify content sufficiently misleading, harmful, or offensive that, although it does not warrant removal under the site's guidelines, it does warrant reducing its visibility: demoting it in algorithmic rankings and recommendations, or excluding it from them entirely. This essay reflects on this shift and explains how reduction works.