A panel of U.K. lawmakers has described as “alarming” that social networking companies like Facebook, Twitter and Google's YouTube have teams of only a few hundred employees to monitor billions of accounts for extremist content.
“These companies are hiding behind their supranational legal status to pass the parcel of responsibility and refusing to act responsibly in case they damage their brands,” said a report released early Thursday in the U.K. by the Home Affairs Committee appointed by the House of Commons.
“If these companies do not tackle the issue and allow their services to become the ‘Wild West’ of the internet, their reputation as responsible operators will be eroded,” it added.
The report, which said the use of the internet to promote radicalization and terror was one of the biggest threats faced by countries including the U.K., singled out Twitter for not proactively reporting extremist content to law enforcement agencies.
The committee described as a “drop in the ocean” the suspension by Twitter of 125,000 accounts worldwide linked to terrorists between mid-2015 and February 2016, and Google’s removal in 2014 of over 14 million videos worldwide that related to all kinds of abuse.
In evidence to the panel, the companies said that they had staff who manually search for potentially extremist content online and decide whether to take the content down and suspend accounts. Twitter said it had “more than a hundred” staff working on this job, while Facebook and Google did not provide a number.
The report reflects growing concern about the use of social networks by terror groups like the Islamic State group, also referred to as ISIS, for propaganda, communications and recruitment. “We are engaged in a war for hearts and minds in the fight against terrorism. The modern front line is the internet,” said Keith Vaz, member of Parliament and chairman of the committee, in a statement. “Its forums, message boards and social media platforms are the lifeblood of Daesh and other terrorist groups for their recruitment and financing and the spread of ideology.”
The lawmakers recommended that social networking companies have staff co-located at the Counter Terrorism Internet Referral Unit (CTIRU), a special unit hosted by the Metropolitan Police.
“The UK Government should now enforce its own measures to ensure that the large technology companies operating in this country are required to cooperate with CTIRU promptly and fully, by investigating sites and accounts propagating hate speech, and then either shutting them down immediately, or providing an explanation to CTIRU of why this has not been done,” according to the report.
CTIRU should also be upgraded into a high-tech, central operational hub, working around the clock to locate threats early, move quickly to block them and instantly share relevant information with other security agencies, it added.
“As I made clear in my evidence session, terrorists and the support of terrorist activity are not allowed on Facebook and we deal swiftly and robustly with reports of terrorism-related content,” said Simon Milner, director of policy at Facebook UK, in an emailed statement. “In the rare instances that we identify accounts or material as terrorist, we’ll also look for and remove relevant associated accounts and content.”
YouTube said it removes content that incites violence, terminates accounts run by terrorist groups and responds to legal requests for removal of content that breaks U.K. law.
A Twitter spokesman referred to a company blog post this month that said it had suspended 360,000 accounts for violating its policies related to promotion of terrorism since the middle of 2015. The company said in February that, as other companies and experts have also noted, there isn’t a “magic algorithm” for identifying terrorist content on the internet.
Facebook, Google, Microsoft and Twitter agreed in May to a process for receiving and reviewing notifications about online hate speech from European Union users on their platforms, under which most “valid notifications” would be reviewed within 24 hours and the content, where warranted, taken down or access to it disabled.
Social networking companies are also facing lawsuits in the U.S. over terrorist content found on their sites. One such lawsuit, filed in a federal court in California by the father of a victim of the Paris terror attacks in November, charges that Twitter, Facebook and Google knowingly permitted ISIS to use their social networks “as a tool for spreading extremist propaganda, raising funds and attracting new recruits.”
The U.K. MPs also made other proposals, including recommendations that communities take on a leadership role and that organizations like the Muslim Council of Britain do more to expose and remove those who preach or advocate racial hatred and intolerance.