Facebook, Twitter, Microsoft and Google’s YouTube have agreed to share with one another identifying digital information of violent terror content that they find on their platforms.
When they remove “violent terrorist imagery or terrorist recruitment videos or images” from their platforms, the companies will include in a shared industry database the hashes, or unique digital fingerprints, of the content.
Other participants can use the shared hashes to help identify matching content on their hosted consumer platforms, review against their respective policies and definitions, and remove the content when appropriate, according to a statement by the companies on Monday.
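The workflow the companies describe — fingerprint removed content, contribute the fingerprint to a shared database, check new uploads against it — can be sketched roughly as follows. This is an illustrative assumption, not the companies' actual system: the sketch uses an exact-match cryptographic hash (SHA-256), whereas production matching may rely on perceptual fingerprints that survive re-encoding, and all function names here are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical stand-in for the shared industry database of hashes.
shared_hashes: set[str] = set()

def contribute_hash(removed_content: bytes) -> None:
    """Add the fingerprint of removed content to the shared database."""
    shared_hashes.add(fingerprint(removed_content))

def matches_shared_database(upload: bytes) -> bool:
    """Check an upload against the shared database; a match still goes
    to human review under each company's own policies."""
    return fingerprint(upload) in shared_hashes

contribute_hash(b"bytes of a removed propaganda video")
print(matches_shared_database(b"bytes of a removed propaganda video"))  # True
print(matches_shared_database(b"an unrelated upload"))                  # False
```

Note that because only hashes circulate, a participant learns whether an upload matches known removed content without receiving the content itself, consistent with the companies' statement that no personally identifiable information is shared.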
“There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies,” the companies said.
The move comes shortly after members of the European Parliament from the Civil Liberties Committee voted Monday to back stricter initiatives to counter terrorism, including measures to take down and block content that constitutes public provocation to commit terrorism. On Thursday, the EU Internet Forum, a private-public partnership set up last year with internet companies, is also meeting to take stock of measures to reduce online terrorist content.
Terrorist organizations like the Islamic State group are believed to use social media for propaganda, communications and to attract new recruits.
Google, Facebook, Microsoft and Twitter agreed with the European Union in May to remove from their websites illegal hate speech, visible in Europe, that could incite hatred or acts of terror. But in August, a panel of lawmakers in the U.K. said it was “alarming” that social networking companies have very small teams to monitor billions of accounts for extremist content.
On Sunday, the European Commission said internet companies, including Facebook, Twitter, YouTube and Microsoft, will have to act faster to counter online hate speech or face laws forcing them to do so, according to news reports.
In a study of the companies’ processes to remove hate speech that broke the law or breached their community standards, only 40 percent of complaints received a response within 24 hours, and 43 percent within 48 hours, according to a Commission official.
Overall, 28.2 percent of the complaints resulted in removal of the content—but there were large disparities in the removal rate, depending on the country where the complaint was made, the study found. In France and Germany, it topped 50 percent, but in Austria only 11 percent of complaints resulted in content removal, and in Italy just 4 percent. The Commission plans to publish the study Tuesday.
No personally identifiable information will be shared when hashes are passed among the companies, according to the statement. The four companies will begin sharing hashes of the “most extreme and egregious terrorist images and videos” removed from their services, which they describe as content most likely to violate all their content policies.
A recent study by the Global Network Initiative, a group that represents academics, investors, civil society organizations and companies, has warned of the human rights risks of censoring online terror content.
Internet companies acting as intermediaries should not be required to monitor third-party terrorist content that they host or transmit, nor face direct or indirect liability from governments for such content, according to the study. Such requirements and liabilities may push companies to over-remove controversial content, the report cautioned.
The companies don’t expect to share hashes until early next year after the technical work is completed. They are also open to including other firms that are interested in joining the initiative in the future.
[With additional reporting by Peter Sayer in Paris]