Social Media's Dilemma: What to Do About Those Racist, Violent, or Vulgar Posts

The comments section. It’s a place where you’ll find supportive messages from strangers, statements that make you LOL, and thoughts typed out loud. I’ll admit that there are days when I intentionally avoid this corner of social media, because more often than not it’s also where you’ll find the vulgar and the gross. Some content, particularly political content, tends to draw more of those offensive messages. Most recently, survivors of the Parkland school shooting have been ridiculed on Twitter for standing up to politicians and calling for gun control.

Social media giants like YouTube, Facebook, and Twitter have inevitably seen problematic and divisive content shared on their platforms: a suicide victim’s body, ads from Russian bots, and fake news are some of the most recent examples. With more people asking platforms for accountability, some companies have updated their guidelines to manage, and sometimes police, content. In the wake of the Logan Paul debacle, YouTube decided to implement tougher criteria for creators to join its partner program, making it harder for videos to be monetized. Twitter recently updated its policies to ban “[c]ontent that glorifies violence or the perpetrators of a violent act”; offending Tweets will be removed, and repeat violations will result in permanent suspension of the account. At Blind, posts are community moderated, and an automated system takes down posts that are flagged too many times (a minimal sketch of that kind of system follows below).
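For the curious, here is a minimal sketch of how a flag-threshold takedown system like the one described above might work. Everything in it is a hypothetical illustration, not Blind’s actual implementation: the `FLAG_THRESHOLD` value, the `Post` class, and the method names are all assumptions made for the example.

```python
# Hypothetical sketch of flag-threshold moderation; not Blind's actual code.

FLAG_THRESHOLD = 5  # assumed number of distinct flags before automatic takedown


class Post:
    def __init__(self, post_id: str, body: str):
        self.post_id = post_id
        self.body = body
        self.flagged_by: set[str] = set()  # users who have flagged this post
        self.hidden = False

    def flag(self, user_id: str) -> None:
        """Record a flag; hide the post once enough distinct users flag it."""
        self.flagged_by.add(user_id)  # a set prevents one user double-counting
        if len(self.flagged_by) >= FLAG_THRESHOLD:
            self.hidden = True  # taken down automatically, pending any appeal


# Usage: five distinct users flagging the same post triggers a takedown.
post = Post("p1", "an offensive comment")
for user in ["u1", "u2", "u3", "u4", "u5"]:
    post.flag(user)
assert post.hidden
```

Counting distinct users rather than raw flag events makes it harder for a single account to brigade a post off the platform, though a real system would also need rate limits and an appeals path, as Twitter’s own statement below suggests.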

Twitter acknowledges that in its effort to monitor posts more aggressively, “[they] may make some mistakes and are working on a robust appeals process.” This admission hints at the reality that no one knows the right way to manage content, and that includes us here at Blind. What counts as offensive content? Should companies take down inappropriate posts? If they do, are they infringing on free speech? How do we distinguish mean from racist, or dissident from terrorist? Should companies be liable for what users share? Policing content raises many questions that don’t have straightforward answers. Discussing them is still worthwhile: while companies will probably never figure out the one right way to handle contentious content, they can keep improving their guidelines with the help of public feedback. Meanwhile, people will need to remember: the internet is not for the faint of heart.

If you have thoughts on how content should be managed, share your ideas!