GIFCT: Possibly the Most Important Acronym You’ve Never Heard Of

Dr. Courtney C. Radsch
11 min read · Sep 30, 2020


The Global Internet Forum to Counter Terrorism (GIFCT) logo

GIFCT may be the most important acronym you’ve never heard of, and it is poised to change the way the internet is governed in radical ways.

GIFCT stands for the Global Internet Forum to Counter Terrorism and represents a novel approach to governing the internet by centralizing content moderation in a single entity, whose decision-making is based on collaboration between the tech sector and governments. Although this industry-led effort began in response to ISIS’ use of social media and was ostensibly intended to focus only on terrorism, its remit has already expanded. And governments like what they see.

When New Zealand Prime Minister Jacinda Ardern called for social media reform in the wake of the deadly 2019 anti-Muslim attacks in Christchurch, which were livestreamed in a made-for-the-internet attack, the world rallied around her. More than 50 governments and the world’s most influential social media platforms signed the Christchurch Call, pledging to “eliminate terrorist and violent extremist content online.” The GIFCT has become the primary vehicle for implementing the pledge.

Content moderation has become one of the preferred ways of combatting terrorism and countering violent extremism, although it does little to address the root causes or complex political factors behind the ideologies. Governments have rallied around voluntary industry efforts, even as they seek faster, more effective mechanisms to combat online virality without addressing the fundamental logic that incentivizes it. And the terms “terrorism” and “extremism” often go undefined in these contexts, leaving them open to interpretation. Furthermore, the infrastructure and governance models being created set a powerful precedent that could be co-opted into the service of removing other objectionable content, from hate speech to “fake news” about the coronavirus to self-harm videos.

Last week, the social video platform TikTok sent a letter to the heads of several other platforms proposing the creation of a “global coalition to protect against harmful content” and suggesting they establish a shared “hashbank” of violent, graphic content. The proposal came after the company released its first transparency report and appeared overwhelmed by the scale of harmful content, most recently a viral suicide video it struggled to remove.

The dangers posed by poorly defined, overly broad definitions and insufficiently calibrated moderation policies and practices have been well documented and discussed for years, from their impact on journalism to the removal of tens of thousands of videos and news outlets from the war in Syria. The same goes for the difficulty of automating contextual review, the potential for government co-optation and manipulation, and the impact on important and protected content. Journalism, for example, is routinely caught in the net for allegedly promoting or glorifying terrorism. As data from the organization where I work, the Committee to Protect Journalists, shows all too well, most journalists who are jailed by autocratic regimes are detained on charges like supporting terrorism.

The GIFCT raises questions about who decides what counts as terrorism, or the even more loosely defined extremism. Despite the lack of consensus on what these terms mean, the Organization for Economic Cooperation and Development (OECD) is attempting to standardize platform transparency reporting on terrorist and violent extremist content, or TVEC. Spurred by Australia and New Zealand, the OECD is creating a protocol to simplify and harmonize reporting on TVEC, another acronym that is taking on a life of its own. The hope seems to be that this voluntary protocol will set a minimum standard that will be adopted by platforms of all sizes. And the GIFCT is the likely venue for operationalizing the protocol.

From a Shared Database to a Corporate-Backed NGO

In December 2016, Facebook, Microsoft, Twitter, and YouTube created a shared database of hashes, essentially digital fingerprints, to help the companies automatically detect and block ISIS-related content across their platforms. It provided proof of concept that the platforms could coordinate removal of content across their services, reducing the availability and virality of objectionable content. The hash database may have begun as a voluntary, temporary solution, but as the old Russian proverb states, there’s nothing more permanent than a temporary solution.
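To make the mechanism concrete, here is a minimal, hypothetical sketch of how shared hash matching works. The real consortium relies on perceptual hashing, which is designed to survive re-encoding and minor edits, and its actual interfaces are not public; the names below are invented for illustration, and a cryptographic hash stands in for a perceptual one.

```python
import hashlib

# Hypothetical sketch of a shared hash database. A real deployment would use
# a perceptual hash that tolerates re-encoding and cropping; SHA-256 here
# only catches exact byte-for-byte copies.

shared_hash_db: set[str] = set()  # fingerprints contributed by member platforms

def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def contribute(file_bytes: bytes) -> None:
    """One member flags content: its fingerprint joins the shared set."""
    shared_hash_db.add(fingerprint(file_bytes))

def should_block(file_bytes: bytes) -> bool:
    """Every other member can now detect the same content at upload time."""
    return fingerprint(file_bytes) in shared_hash_db
```

The shared set is the consequential design choice: once any member contributes a fingerprint, every member can block matching uploads, which is also what lets a single erroneous entry propagate across the entire ecosystem.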

Over the past year, the GIFCT has established itself as a stand-alone nonprofit organization that serves the platforms and helps them implement directives from the U.S., Europe, Australia, and New Zealand demanding the removal of extremist content. It is funded and led by its member companies, which recently hired an executive director with a counterterrorism background. An independent advisory committee (IAC) includes governments and academics, but it is conspicuously missing many of the civil society groups that have focused on the nexus of human rights, terrorism, and content moderation for years. The criteria for government membership on the IAC require that a country belong to the Freedom Online Coalition, but given that multilateral initiative’s shortcomings, human rights groups are not reassured that governments with little respect for rights like freedom of expression will be kept out.

In the three years since its founding, the GIFCT has grown to include a dozen platforms and engagement with more than 120 tech companies, including smaller platforms that can’t possibly monitor and moderate all the content flowing through their channels. It is much easier to just plug into the database and ensure that whatever content has been flagged as problematic is removed from their platforms or even blocked before it can be posted. As a representative of one small company put it at a 2019 event on the sidelines of the U.N. General Assembly announcing the launch of the new NGO, such firms just want a plug-and-play list of hashes because they don’t have the capacity to do content moderation themselves. Other companies, like TikTok, could find in the GIFCT a ready-made coalition with the database and infrastructure to coordinate removal of graphic content across platforms. This could be especially compelling for companies that are not plugged into the broader community that has been working on these issues for years.

The hash database has grown to include more than 300,000 unique hashes representing about 350,000 images and 50,000 videos, according to the GIFCT’s most recent transparency report. The vast majority of these, 72 percent, fell into the “glorification of terrorist acts” category. This is one of the most ambiguous categories, and none of the associated content is available for independent review or audit, either by regulators or researchers, making it difficult to know what collateral content may be affected. Furthermore, the companies claim they are not permitted to maintain a database of affected content because it could violate the European Union’s privacy law, known as the General Data Protection Regulation (GDPR). Lawmakers must disabuse them of this notion and find a way to reconcile the need for independent oversight and review with privacy rights.

Companies can also share URLs with each other to flag problematic content for review by the responsible platform. Nearly 24,000 URLs have been shared, the majority of which originated from the SITE Intelligence Group. And while companies can flag disagreement, hashes can only be added, never removed, so the database can only expand, not contract. Yet we know little about the companies’ internal processes, the definitions they use, or their error rates.
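The asymmetry is easy to see in a sketch. Assuming, hypothetically, that each entry records who contributed it and who disputed it (the GIFCT has not published its schema, so these field names are invented), the structure has an add path and a dispute path but no removal path:

```python
from dataclasses import dataclass, field

# Hypothetical append-only ledger. Field names are invented; the GIFCT has
# not published its internal schema.

@dataclass
class HashEntry:
    digest: str
    category: str          # e.g., "glorification of terrorist acts"
    contributed_by: str
    disputes: list[str] = field(default_factory=list)

class HashLedger:
    def __init__(self) -> None:
        self._entries: dict[str, HashEntry] = {}

    def add(self, entry: HashEntry) -> None:
        # New hashes can always be contributed.
        self._entries.setdefault(entry.digest, entry)

    def dispute(self, digest: str, platform: str) -> None:
        # A member can flag disagreement, but note the absence of any
        # remove() method: dissent is logged as metadata, never acted on.
        self._entries[digest].disputes.append(platform)
```

Under a design like this, errors accumulate rather than get corrected, which is why independent audit matters so much.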

Just as the database has expanded, so has the GIFCT’s mission. It has swelled from focusing on ISIS and al-Qaeda to preventing “terrorists and violent extremists from exploiting digital platforms.” The hash database initially included only content related to organizations on the United Nations Security Council’s consolidated sanctions list. But in implementing their voluntary Christchurch pledge, the platforms came together to figure out how to stave off virality during extreme events, giving rise to the “Content Incident Protocol.” This process for activating cross-platform takedowns, of course, got its own acronym: CIP.

The pressure to expand the GIFCT’s remit to include violent extremism more broadly has mounted as the visibility of ISIS has faded and that of right-wing extremism has risen.

Of the 70 assessments initiated, just two triggered the CIP, both for right-wing extremist groups, which account for nearly 9 percent of content in the hash database. The uptick in attacks livestreamed online has also spurred the governments and companies involved to figure out how to more systematically address right-wing extremism through the GIFCT. Critics have raised concerns about the focus on Islamist terrorism, but are also wary of expanding the GIFCT’s remit, given what many in civil society see as its lack of legitimacy to make content moderation decisions with the potential to affect the entire world. Dominated by U.S.-based platforms, governed by a structure that limits meaningful multi-stakeholder oversight, and prone to opaque decision-making, these “content cartels” lack accountability even while exerting tremendous influence on the public sphere.

Experts have raised concerns about how the international legal principles of necessity and proportionality are interpreted in the context of countering violent extremism online. But there has been little meaningful discussion about the acceptable error rate in striving for eradication. At a meeting I attended as a member of the Christchurch Call advisory network, New Zealand’s Ardern did not respond to my question about how much collateral damage, in terms of legitimate journalism, human rights documentation, and protected speech, was acceptable as a byproduct of such eradication efforts. But as the GIFCT gains increasing importance, these issues must be addressed.

An Expansive Mandate

The expansion of the hash database to include additional categories of objectionable content through the CIP exemplifies the fact that once a technological capacity is created for one purpose, it can be deployed for others, a concern I and others raised with the founding companies prior to the launch of the hash database. I was told not to worry; it would only be used for extremely limited purposes. This is no longer the case.

Ultimately the companies will need lists of terrorist and extremist organizations and prohibited content categories that represent the consensus of governments or multilateral organizations to help them implement the process, according to company representatives. This type of public-private “partnership” should raise red flags and must be accompanied by robust oversight, transparency, and accountability mechanisms that include the right of those affected to seek meaningful redress and remedy (points civil society experts have made in letters to policymakers and the GIFCT).

Of equal concern is the precedent that is being set by governments taking advantage of this industry-led content-moderation approach. New Zealand is leveraging its moral authority to push platforms to eradicate violent extremism from the internet, joining a host of other countries that have been pursuing similar aims, including the U.K., Australia, France, and the European Union. These governments seem to see the GIFCT and the OECD “TVEC” process as creating an opportunity to push for coordination, rapid response, and preemptive filters.

Conscripting the GIFCT for Domestic Legislation?

Platforms add hashes to the database based on violations of their terms of service, not necessarily because the hashes violate a specific law. That lack of specific legal frameworks, however, appears about to change. A rash of domestic laws aimed at holding platforms responsible for removing illegal or unwanted content has been passed or is under consideration in several countries, and the GIFCT is poised to be conscripted into these efforts.

Germany’s NetzDG law mandates 24-hour takedowns of illegal content on large social media platforms but has been criticized for its failure to prevent re-uploads as well as its impact on protected speech. The hash database could potentially be a solution. This past May, the New Zealand government proposed domestic legislation that criminalizes livestreaming “objectionable content,” imposes fines on content providers that do not comply with take-down notices, and compels some previously voluntary behavior.

In 2019, the Australian parliament rushed through legislation criminalizing the sharing of abhorrent violent material online and highlighted the need to use the GIFCT’s URL-sharing consortium as “broadly as possible.” This is particularly concerning in light of government censorship there: an article about ISIS recruiting, published on one of the country’s top news sites, was deemed to be “promoting terrorism,” forcing the outlet to remove it even though the self-regulatory press council had determined it was in the public interest. The article could still be available outside the country or on the Internet Archive, but if it were shared through the GIFCT, Australia could essentially extend its censorship globally.

In the U.K., the government is developing a new regulatory framework to mitigate “online harms” by imposing a “duty of care” on platforms, based on the perceived failure of voluntary initiatives. If a voluntary database that could identify terrorist content exists and is not used, it would not be far-fetched for that failure to be seen as breaching a platform’s duty of care.

Meanwhile, the European institutions negotiating the Regulation on Preventing the Dissemination of Terrorist Content Online are considering filters to detect terrorist content and prevent it from being uploaded or re-uploaded in the first place. These efforts to require platforms to take greater responsibility are underway even as European lawmakers grumble about the dominance of U.S. tech firms and pursue antitrust measures aimed at weakening their influence. Any legislation that does not include clearly defined terms, appropriate legal guidance, and judicial oversight is doomed to empower the very companies that are criticized for dominating the information ecosystem.

The GIFCT further entrenches the power of the Silicon Valley firms that account for the lion’s share of user attention and engagement online worldwide, and it lays the groundwork to compel much wider cooperation while pushing censorial efforts to a wider range of platforms. Calls for such coordination represent a significant shift in how the internet has been governed.

And despite its initial efforts at transparency, the GIFCT has a long way to go. It should list its members publicly (the language on its website is ambiguous) and publish a list of companies that have access to the hash database; it must produce more detailed transparency reporting on a regular basis, maintaining links to previous reports on its transparency page rather than replacing earlier versions with more recent updates; and it should keep a public record of updates to its mission and other governance documents. As a member of the Transparency Working Group, I’m pushing for greater granularity and for audits by independent researchers of the content represented in the hash database. Lawmakers must ensure that their legislative and regulatory efforts mandate human rights impact assessments; independent oversight, review, and audit of the content hashed by the database; and meaningful transparency reporting.

Centralizing Content Moderation and Control

Companies are developing this system of coordination for identifying certain types of content amid government pressure to address the COVID-19 “infodemic,” to stop the spread of disinformation that harms public health, and to curb disinformation originating in foreign influence operations that undermine democratic elections. Some policymakers have further raised the need to address antisemitic and banned organizations, as well as their symbols, in the context of the GIFCT.

The GIFCT gives governments an even more central role in determining the rules for online content. With its focus still primarily on terrorism and violent extremism that claims links to Islam, it risks solidifying the structural oppression of Muslim and Arabic voices, as predominantly white American, European, and Australasian governments make decisions that affect the ability of Muslims, Arabic speakers, and Brown people throughout the Middle East and North Africa to be heard and visible on these platforms.

The centralized, coordinated approach exemplified by the GIFCT and TVEC processes could have profound repercussions for the future of an open, interoperable, and free internet. Internet freedom and an uncensored internet no longer serve as the guiding principles of internet governance, nor as the values embodied by the Western governments that traditionally promoted them. Rather, the GIFCT envisions a walled garden of private platforms that share the same politics and set the terms for competition. Small companies could be compelled to join or comply with the GIFCT, while companies could be “voluntold” by governments which types of content to hash.

Companies attest that the hash database is strictly voluntary, and that a government could not require a hash be added to the database. Well, maybe not yet, but just wait. “Voluntary” requirements that inflict liability and include threats of fines or jail time are likely to compel compliance.

Originally published at https://www.justsecurity.org on September 30, 2020.


Written by Dr. Courtney C. Radsch

Postdoctoral fellow at the UCLA Institute for Technology, Law & Policy and Director of the Center for Journalism and Liberty at Open Markets Institute
