Facebook Inc said on Monday that it removed or put a warning label on 1.9 million pieces of extremist content related to ISIS or al-Qaeda in the first three months of the year, roughly double the amount from the previous quarter.
Facebook, the world’s largest social media network, also published its internal definition of “terrorism” for the first time, as part of an effort to be more open about internal company operations.
The European Union has been putting pressure on Facebook and its tech industry competitors to remove extremist content more rapidly or face legislation forcing them to do so, and the sector has increased efforts to demonstrate progress.
Of the 1.9 million pieces of extremist content, the “vast majority” was removed, and a small portion received a warning label because it was shared for informational or counter-extremist purposes, Facebook said in a post on a corporate blog.
Facebook uses automated software such as image matching to detect some extremist material. The median time required for takedowns was less than one minute in the first quarter of the year, the company said.
Facebook, which bans terrorists from its network, has not previously said what its definition encompasses.
The company said it defines terrorism as: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”
The definition is “agnostic to ideology,” the company said, including such varied groups as religious extremists, white supremacists and militant environmentalists.
This article originally appeared on Reuters.com.