
PhD Candidate: Hussam Habib
Abstract
Online platforms have fundamentally transformed how we access information. This shift in our information-exposure patterns has often been observed to cause undesirable societal outcomes, such as the proliferation of misinformation and political polarization. A significant factor shaping what content users see is the configuration of a platform's recommendation algorithm. However, because it is poorly understood how algorithm configurations manifest as downstream societal outcomes, policymakers struggle to write meaningful regulations.
In my thesis, I evaluate social-media platforms through a causal counterfactual framework to audit the algorithmic configurations of platforms. I investigate three modern platforms—Reddit, X, and YouTube—to answer two key questions: 1) how are user interactions interpreted as signals by the platform, and 2) what meaningful curation patterns emerge? Through an audit using 72 automated bots on each platform, I demonstrate how a platform's underlying business model is translated into algorithmic parameters. For instance, an entertainment-focused platform such as YouTube prioritizes exploitation (i.e., reinforcement) of a user's revealed preferences, learned through implicit signals such as watch time: a single click on a video produces a 43% increase in that topic's presence in the home feed. Compare this with X, which prioritizes exploration of preferences, curating over 70% of its entire homepage feed with topics in which the user has demonstrated no implicit or explicit interest. These findings empower policymakers to regulate platform behavior at the algorithmic level, rather than relying on reactive measures that limit user expression post hoc.
Advisor: Rishab Nithyanand