In the wake of last weekend’s terrorist attack in London that left seven people dead, Prime Minister Theresa May has gone beyond asking social media companies to vet content posted on their sites more fully. She’s raised the specter of holding social media platforms legally accountable for facilitating the spread of terrorist ideology.
Facebook responded to May’s call for closer regulation of the internet, saying it would take steps to make its platform a “hostile environment for terrorists.”
If social media companies – most of which are based in the U.S. – knowingly provide a platform that gives material support to terrorist activities, should they be held liable for any actions resulting from that support?
Stephen Vladeck, a professor of law at the University of Texas School of Law, says that this question gets at the different speech traditions in the U.S. and the United Kingdom.
“The U.K. does not have nearly the same historical or constitutional commitment to the freedom of speech that we do,” Vladeck says. “And I think that that has pros and cons. One of the cons is that we have to have freedom for the speech we hate just as much as we have freedom for the speech we love.”
Vladeck says it’s unlikely European countries will go so far as to outlaw social media platforms. Rather, they will likely take what he calls “an intermediate reaction.”
“I think we’re seeing proposals to have reporting requirements where the social media companies of the world take on some kind of affirmative obligation to not only police some of the more vile and offensive speech on their platforms, but to actually facilitate the reporting to government organizations,” he says.
Vladeck says there’s an important balance to strike between not shooting the messenger and ensuring that these platforms aren’t used for evil.
“As much as a Facebook or a Twitter is providing a mechanism for terrorist groups to communicate, they’re also facilitating pro-democracy movements in countries with more of an authoritarian tradition,” he says.
Vladeck points to Section 230 of the Communications Decency Act of 1996 as a way to distinguish between sites that facilitate the exchange of illicit material – like Kim Dotcom’s Megaupload and the Silk Road – and sites that simply provide a channel of communication.
Section 230 protects online intermediaries that host or republish speech, such as Facebook and Twitter, against a range of laws that otherwise could be used to hold them legally responsible for what people say.
“The theory is that we really shouldn’t suppress the platform simply because particular messages on the platform are themselves illegal or lead to illegal activity down the road,” Vladeck says.
Written by Molly Smith.