How would you handle negative user feedback about YouTube, and how might you address it with the engineering team?
- Dhiraj Mehta
Over the last year or so, there has been news about how YouTube's recommendation algorithm, which is designed to maximize users' watch time, pushes users who spend enough time on the platform toward progressively more controversial or extreme content. For the purposes of this exercise, let's assume that this is an unintentional by-product of YouTube using watch data in a content-agnostic manner.
Here are some potential solutions and their associated trade-offs:
Update content policy and let users be the first line of defense. Right now, YouTube has guidelines for certain kinds of content that are not allowed on its platform, and it could expand them to cover unverified content like conspiracy theory videos. There are at least two main challenges here: first, for this to be an effective extension of the existing reporting mechanism, YouTube would have to educate users about what it is looking for. This could cause an uptick in reports across the board, and it could take some time to see an impact on unverified content. Second, YouTube would be moving further down the road of acting as an arbiter of the content uploaded to its platform, and it may see backlash from some users (a la Facebook).
Update recommendation algorithm: There are two ways I can imagine YouTube might go about this:
- Focus primarily on reducing the number of complaints. First, build a data profile of a user who is likely to complain about controversial content. Then, build a profile of a controversial video. Finally, update the recommendation algorithm to avoid recommending videos in the latter category to users in the former group (a rough sketch of this filtering step follows this list). This requires fairly involved data work and retooling of the recommendation algorithm, although YouTube's data team may already have the foundations for these profiles.
- Fundamentally alter the algorithm: Stop baiting users into staying longer through progressive escalation; only surface this content to users who deliberately search for certain, category-specific things (like "craziest conspiracy theories"). Anticipate and then closely monitor how this affects watch time. This would represent a significant strategic pivot and would likely require intensive research into alternative ways of getting users to stick around.
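To make the complaint-profile option more concrete, here is a minimal sketch of what the candidate-filtering step could look like. It is purely illustrative: the Candidate fields, the user_complaint_risk score, and both thresholds are assumptions made for this exercise, not a description of YouTube's actual systems.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    """One video the recommender is considering showing to a user."""
    video_id: str
    controversy_score: float      # hypothetical model output, 0.0 (benign) to 1.0
    predicted_watch_time: float   # minutes the user is predicted to watch


def rank_candidates(
    candidates: List[Candidate],
    user_complaint_risk: float,          # hypothetical per-user score, 0.0 to 1.0
    controversy_threshold: float = 0.7,  # illustrative cutoff, not a real value
    risk_threshold: float = 0.5,
) -> List[Candidate]:
    """Rank candidates by predicted watch time, but drop controversial videos
    for users whose profile says they are likely to complain about them."""
    if user_complaint_risk >= risk_threshold:
        candidates = [
            c for c in candidates if c.controversy_score < controversy_threshold
        ]
    return sorted(candidates, key=lambda c: c.predicted_watch_time, reverse=True)
```

In practice a change like this would presumably be rolled out behind an experiment so the team could measure how much watch time the filtering costs for complaint-prone users before ramping it up.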
Do nothing. YouTube's goal is still to maximize watch time, and it is willing to accept and manage the fallout. Since this requires no change to YouTube's recommendation engine, it is obviously the easiest solution to implement from an engineering standpoint.
Prioritization: The above approaches are in priority order. Assuming for the sake of the exercise that YouTube cares about addressing this problem, I would proceed from user reporting of content to each algorithm update in succession, tracking the impact of each initiative before moving on to the next. In the first case, some layer of programmatic logic would likely have to be implemented to separate the "wheat from the chaff" of user flags and keep the volume of flagged videos manageable; a rough sketch of what that triage might look like follows.
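Below is a minimal sketch of that triage logic, assuming a simple heuristic that weights each flag by the reporter's historical accuracy and only queues a video for human review once the weighted total crosses a threshold. The function name, inputs, and threshold are all hypothetical.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def triage_flags(
    flags: List[Tuple[str, str]],         # (video_id, reporter_id) pairs
    reporter_accuracy: Dict[str, float],  # fraction of a reporter's past flags
                                          # that human reviewers upheld
    min_weighted_flags: float = 3.0,      # illustrative review threshold
) -> List[str]:
    """Return video IDs whose weighted flag count is high enough to queue for
    human review, ordered so the likeliest policy violations are seen first."""
    weighted: Dict[str, float] = defaultdict(float)
    for video_id, reporter_id in flags:
        # Weight each flag by the reporter's track record; reporters with no
        # history get a neutral default weight.
        weighted[video_id] += reporter_accuracy.get(reporter_id, 0.5)
    return sorted(
        (v for v, w in weighted.items() if w >= min_weighted_flags),
        key=lambda v: weighted[v],
        reverse=True,
    )
```

Review outcomes (upheld vs. rejected flags) could then feed back into the reporter_accuracy scores over time, so the triage gets better as users learn what the expanded policy actually covers.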