The Internet’s democratizing power has been a blessing for individuals, activists, and small businesses across the world, but those with malicious goals have co-opted its advantages for their own ends. In the 1980s, white supremacists used electronic bulletin boards, and the mid-1990s saw the first pro-al-Qaeda website. Counterterrorism measures taken online are not new, but they must evolve urgently as digital platforms have become central to our lives. Facebook’s counterterrorism team understands this urgency and acts accordingly.
Terrorism can be defined as “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”
This broad definition, adopted by Facebook, covers groups ranging from religious extremists and violent separatists to white supremacists and militant environmental organizations.
Facebook’s counterterrorism policies do not apply to governments of nation-states, which may be recognized as legitimately using violence under certain circumstances. This exemption does not extend to all content about state-sponsored violence or political agendas; such content may still be removed under other policies, such as the graphic violence policy.
Facebook’s policies prohibit terrorists from using its services, and enforcement is active. Its newest detection technology, built on algorithmic tools, focuses on ISIS, al-Qaeda, and their affiliates – the groups posing the broadest global threat.
Thanks to this detection technology, implemented by a counterterrorism team of 200 people, propaganda can now be detected quickly and at scale.
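One widely known technique for detecting re-uploads of known propaganda at scale is hash matching against a database of previously removed files. The sketch below is a minimal illustration of that general idea, not a description of Facebook’s actual system; the function names and the use of SHA-256 are assumptions for the example.

```python
# Minimal sketch of hash-based matching, one technique platforms can use to
# catch re-uploads of previously removed material. All names and values here
# are illustrative, not Facebook's real implementation.
import hashlib

# Hashes of files previously removed for violating policy (illustrative).
known_propaganda_hashes: set[str] = set()

def register_removed_content(data: bytes) -> None:
    """Record the hash of a file that was removed for violating policy."""
    known_propaganda_hashes.add(hashlib.sha256(data).hexdigest())

def is_known_propaganda(data: bytes) -> bool:
    """Flag an upload whose hash matches previously removed content."""
    return hashlib.sha256(data).hexdigest() in known_propaganda_hashes

register_removed_content(b"example propaganda video bytes")
print(is_known_propaganda(b"example propaganda video bytes"))  # True
print(is_known_propaganda(b"unrelated upload"))                # False
```

Exact-hash matching only catches byte-identical re-uploads; real systems also use perceptual hashes that tolerate small edits to images and video.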
More and more terrorist content is being removed. In Q1 2018, Facebook took action on 1.9 million pieces of ISIS and al-Qaeda content, about twice as much as in the previous quarter, and 99% of that content was found by Facebook before any user reported it. Alongside advanced technology, internal reviewers also assisted in this effort. In a small percentage of cases where users report a profile, Page, or Group, the entire profile, Page, or Group is not removed because it does not violate Facebook’s policies as a whole; instead, the specific content breaching its standards is removed promptly.
Since terrorist content uploaded to Facebook tends to get less attention the longer it has been on the site, Facebook has prioritized identifying newly uploaded material. In Q1 2018, the median time to remove such content, whether surfaced by user reports or found by Facebook itself, was less than one minute.
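The median time-to-action figure cited above can be illustrated with a small calculation; the sample durations below are made up for the example, and only the metric itself comes from the text.

```python
# Illustrative computation of a "median time to action" metric: the seconds
# elapsed between upload and removal for a sample of actioned items.
# The data points are invented for demonstration purposes.
from statistics import median

seconds_to_action = [5, 12, 30, 45, 58, 70, 3600]

median_seconds = median(seconds_to_action)
print(median_seconds)       # 45
print(median_seconds < 60)  # True: "less than one minute"
```

The median is a useful summary here because a handful of slow outliers (like the 3600-second item above) would badly skew an average.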
Facebook’s specialized techniques for finding and removing older content helped remove 600,000 such pieces. In Q1 2018, this historically focused technology found and removed content that had been on the site for as long as 970 days. Facebook recognizes that its current measures are not enough and intends to keep developing more advanced defenses against evolving threats.