This article exists as part of the online archive for HuffPost Australia, which closed in 2021.

How The War On Child Porn Is Helping Us Fight ISIS Propaganda

A maverick computer scientist explains how the lessons of one internet fight apply to another.
scyther5 via Getty Images

How can the U.S. fight the spread of Islamic State propaganda? The militant group’s infamous videos of beheadings, violence and torture have a dangerous allure for would-be radicals, and they proliferate over social media in a way that can make containment seem hopeless.

But fighting extremist content online need not be very complicated, according to Dr. Hany Farid, a computer scientist at Dartmouth College.

“We don’t need to develop software that determines whether a video is jihadist,” Farid recently told The Huffington Post. “Most of the ISIS videos in circulation are reposts of content someone has already flagged as problematic.”

“We can easily remove that redistributed content,” he went on, “which makes a huge dent in their propaganda’s influence.”

Farid is referring to a technique called “hashing,” which he pioneered nearly a decade ago while battling a different but equally vile online scourge: child pornography. Hashing involves computing a unique digital fingerprint, or “hash,” of a video or photo, making it easy to find other copies of that content and remove them. Last month, it was reported that Facebook and YouTube might start using the technique to automatically scrub extremist content from their platforms. If they do, they will be taking a page from Farid’s book.
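In broad strokes, a perceptual hash works something like the sketch below, written in Python and assuming a generic “average hash” scheme. PhotoDNA’s actual algorithm is proprietary and considerably more robust; the function names here (average_hash, hamming_distance) are purely illustrative.

```python
# A minimal sketch of perceptual ("robust") hashing, assuming a simple
# average-hash scheme. PhotoDNA's real algorithm is proprietary and far
# more sophisticated; this only illustrates the idea of reducing an image
# to a compact fingerprint that survives small edits.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit fingerprint of the image at `path`."""
    # Shrink to a tiny grayscale thumbnail so the hash ignores resolution,
    # compression artifacts and minor recoloring.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

Two copies of the same photo, even after resizing or recompression, should produce nearly identical fingerprints, so a small Hamming distance (say, a handful of bits out of 64) counts as a match.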

In 2008 and 2009, Farid developed PhotoDNA, hashing software that helped identify and remove images of child porn from the internet. Microsoft funded PhotoDNA, and according to Farid, it’s still used by companies like Facebook and Twitter today.

Now, Farid is adapting his software to fight the threat of ISIS propaganda. He has partnered with the Counter Extremism Project, a nonprofit think tank that maintains a database of extremist media. When a photo or video is flagged as extremist propaganda on a platform like Facebook, its hash is entered into the CEP database.
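Conceptually, that workflow amounts to keeping a shared list of fingerprints and checking every new upload against it. The sketch below is a hypothetical illustration only, with an in-memory list standing in for the CEP database and 64-bit fingerprints assumed to come from a perceptual hash like the one above.

```python
# A hypothetical sketch of the flag-then-match workflow, assuming 64-bit
# perceptual-hash fingerprints. An in-memory list stands in for the shared
# CEP database; a real deployment would use a proper datastore and a far
# more robust hash such as PhotoDNA's.

MATCH_THRESHOLD = 10  # max differing bits still treated as the same content

flagged_hashes: list[int] = []  # fingerprints of content already flagged


def flag_content(fingerprint: int) -> None:
    """Record the fingerprint of content a platform has flagged."""
    flagged_hashes.append(fingerprint)


def is_known_extremist_content(fingerprint: int) -> bool:
    """Check an upload's fingerprint against every flagged fingerprint."""
    return any(
        bin(fingerprint ^ known).count("1") <= MATCH_THRESHOLD
        for known in flagged_hashes
    )
```

In this picture, a platform would call flag_content() when moderators remove a video and is_known_extremist_content() on every new upload, which is how reposts of already-flagged material get caught automatically.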

The ‘worst of the worst’

As Farid finalizes the software, he’s keeping a close eye on the tech industry’s changing attitudes toward extremist content. He said today’s debate strongly reminds him of the one from a decade earlier, when Silicon Valley mounted a major crackdown on child porn amid public pressure.

“What I’m hearing from tech companies is eerily similar to what they were saying 10 years ago,” Farid said.

Then as now, he said, the federal government was urging the tech sector to act. Then as now, the First Amendment was keeping the government from imposing stricter regulations itself. In 2006, then-Attorney General Alberto Gonzales spurred the creation of the Technology Coalition, a group of big-name tech firms aimed at battling child pornography. (The group included Microsoft, Yahoo and AOL, which now owns The Huffington Post.) Likewise, in December 2015, President Barack Obama called on tech companies to step up in the fight against ISIS recruitment propaganda.

In both cases, Farid said, there was a focus on the “worst of the worst,” meaning the most graphic and problematic content. For child porn, this meant images that depicted children under 12 engaging in sexually explicit acts. On the counter-extremism front, this means explicit videos of beheadings, physical torture and graphic violence.

“This is low-hanging fruit,” Farid said. “A beheading video violates the terms of service of every tech company in the world.”

Farid is quick to emphasize that even though extremist propaganda can never be wiped out completely, that is no reason to stop chipping away at it. ISIS propaganda videos only seem like an unprecedented threat, he said. They fit into an existing category, and there’s some precedent for how to deal with them.

“We’ve quite literally done this before,” he said. “Flagging and removing the very worst content is a modest step, but it puts a significant dent in the problem.”

The parallels between the present day and the previous decade are striking, Farid said. When PhotoDNA launched, Microsoft was the first adopter, followed by Facebook in 2011, Twitter in 2012 and Google in 2013. Google was the most resistant to the idea of content moderation, citing privacy concerns, according to Farid. This time around, Microsoft was again an early supporter of anti-propaganda efforts, funding Farid’s lab with $100,000. And Facebook was the first big company to meet with him about the software, in February of this year.

The main challenge in adapting PhotoDNA to combat today’s extremist content is the rise of video, which is more complex and circulates in more variations than static photos. A video typically contains about 24 still frames per second, which makes Farid’s task that much harder.
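One plausible way to extend image hashing to video, sketched here as an assumption rather than a description of Farid’s actual method, is to sample a few frames per second, fingerprint each frame, and compare the resulting sequences. The sketch below uses OpenCV only to decode frames; the names frame_fingerprints and samples_per_second are illustrative.

```python
# A hedged sketch of extending image hashing to video: sample a few frames
# per second, fingerprint each one, and compare the resulting sequences.
# This is an assumed approach, not Farid's actual method.
import cv2  # OpenCV, used here only to decode video frames


def frame_fingerprints(video_path: str, samples_per_second: int = 2) -> list[int]:
    """Return a list of 64-bit average-hash fingerprints, a few per second."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 24  # fall back to ~24 fps
    step = max(int(fps // samples_per_second), 1)

    fingerprints = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            # Shrink to an 8x8 grayscale thumbnail and threshold at the mean,
            # the same average-hash idea used above for still images.
            small = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (8, 8))
            mean = small.mean()
            bits = 0
            for value in small.flatten():
                bits = (bits << 1) | (1 if value > mean else 0)
            fingerprints.append(bits)
        index += 1
    capture.release()
    return fingerprints
```

Sampling rather than hashing all 24 frames per second keeps the fingerprint compact, while a re-encoded or slightly trimmed copy of the same video should still produce a largely overlapping sequence of frame hashes.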

A slippery slope?

Although Farid maintains that his work amounts to a “very modest step,” hashing software in general has raised questions about privacy.

“There’s a valid concern about overreach,” Vivek Krishnamurthy, assistant director of Harvard Law School’s Cyberlaw Clinic, told HuffPost. “Just because a video is flagged, it doesn’t mean there are no conditions under which it should circulate, like as part of a news report, or a parody.”

“Plus, as with other algorithms, the question is, is the tech working correctly?” he went on. “We don’t know enough about how these technologies [like hashing] really work.”

Still, Krishnamurthy acknowledged that the government is in a tough position. Militant propaganda is a genuine national security concern, and it can’t be ignored.

“Under the First Amendment, it’s very difficult for the government to tell anyone to take down anything,” Krishnamurthy said. “But YouTube and Facebook are not the government.”

The government, however, seems to approve of the current course of action.

“We welcome the launch of initiatives such as the Counter Extremism Project’s National Office for Reporting Extremism (NORex) that enables companies to address terrorist activity on their platforms,” Lisa Monaco, assistant to the president for homeland security and counterterrorism, told The Washington Post last month.
