How can the U.S. fight the spread of Islamic State propaganda? The militant group's infamous videos of beheadings, violence and torture have a dangerous allure for would-be radicals, and they proliferate over social media in a way that can make containment seem hopeless.
But fighting extremist content online need not be very complicated, according to Dr. Hany Farid, a computer scientist at Dartmouth College.
"We don't need to develop software that determines whether a video is jihadist," Farid recently told The Huffington Post. "Most of the ISIS videos in circulation are reposts of content someone has already flagged as problematic."
"We can easily remove that redistributed content," he went on, "which makes a huge dent in their propaganda's influence."
Farid is referring to a technique called "hashing," which he pioneered nearly a decade ago while battling a different but equally vile online scourge: child pornography. Hashing involves computing a unique digital fingerprint, or "hash," of a video or photo, making it easy to find instances of that content and remove it. Last month, it was reported that Facebook and YouTube might start using this technique to automatically scour extremist content from their platforms. If they do, they will be taking a page from Farid's book.
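As a rough illustration of the flag-and-remove workflow Farid describes, the sketch below uses a cryptographic hash (SHA-256) to fingerprint media bytes. This is a deliberate simplification: PhotoDNA's actual algorithm is a proprietary perceptual hash that survives resizing and re-encoding, whereas an exact hash like this only catches byte-identical reposts. All function names here are invented for the example.

```python
import hashlib

# Database of fingerprints for content already flagged as prohibited.
# In practice this would be a shared database like the CEP's.
flagged_hashes = set()

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint of a media file's raw bytes.

    SHA-256 matches only exact copies; production systems use perceptual
    hashes that tolerate compression, cropping and resizing.
    """
    return hashlib.sha256(data).hexdigest()

def flag(data: bytes) -> None:
    """Record a file's hash once a human moderator flags it."""
    flagged_hashes.add(fingerprint(data))

def is_flagged(data: bytes) -> bool:
    """Check an upload against the database of known flagged content."""
    return fingerprint(data) in flagged_hashes

# A moderator flags one upload; identical re-uploads are then caught.
original = b"...video bytes..."
flag(original)
print(is_flagged(original))       # exact repost: caught
print(is_flagged(b"re-encoded"))  # altered copy: missed by exact hashing
```

The gap shown in the last line is exactly why perceptual hashing matters: reposters routinely re-encode or crop a video, which changes every byte while leaving the content intact.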
In 2008 and 2009, Farid developed PhotoDNA, hashing software that helped identify and remove images of child porn from the internet. Microsoft funded PhotoDNA, and according to Farid, it's still used by companies like Facebook and Twitter today.
Now, Farid is adapting his software to fight the threat of ISIS propaganda. He has partnered with the Counter Extremism Project, a nonprofit think tank, which is maintaining a database of extremist media. When a photo or video is flagged as extremist propaganda on a platform like Facebook, its hash is entered in the CEP database.
The "worst of the worst"
As Farid finalizes the software, he's keeping a close eye on the tech industry's changing attitudes toward extremist content. He said the debate today reminds him strongly of a decade earlier, when Silicon Valley effected a major crackdown on child porn amid public pressure.
"What I'm hearing from tech companies is eerily similar to what they were saying 10 years ago," Farid said.
Then as now, he said, the federal government was urging the tech sector to act. Then as now, the First Amendment was keeping the government from imposing stricter regulations itself. In 2006, then-Attorney General Alberto Gonzales spurred the creation of the Technology Coalition, a group of big-name tech firms aimed at battling child pornography. (The group included Microsoft, Yahoo and AOL, which now owns The Huffington Post.) Likewise, in December 2015, President Barack Obama called on tech companies to step up in the fight against ISIS recruitment propaganda.
In both cases, Farid said, there was a focus on the "worst of the worst," meaning the most graphic and problematic content. For child porn, this meant images that depicted children under 12 engaging in sexually explicit acts. On the counter-extremism front, this means explicit videos of beheadings, physical torture and graphic violence.
"This is low-hanging fruit," Farid said. "A beheading video violates the terms of service of every tech company in the world."
Farid is quick to emphasize that even though we can never completely wipe out extremist propaganda, that doesn't mean we should stop chipping away at it. ISIS propaganda videos only seem like an unprecedented threat, he said. In fact, they fit into an existing category, and there's precedent for how to deal with them.
"We've quite literally done this before," he said. "Flagging and removing the very worst content is a modest step, but it puts a significant dent in the problem."
The parallels between the present day and the previous decade are striking, Farid said. When PhotoDNA launched, Microsoft was the first adopter, followed by Facebook in 2011, Twitter in 2012 and Google in 2013. Google was the most resistant to the idea of content moderation, citing privacy concerns, according to Farid. This time around, Microsoft again was an early supporter of anti-propaganda efforts, funding Farid's lab with $100,000. And Facebook was the first big company to meet with him about the software, in February of this year.
The main challenge in adapting PhotoDNA to combat today's extremist content is the rise of video, which is more complex and exists in more variations than static photos. Every second of video contains roughly 24 still frames, which makes Farid's task that much harder.
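One naive way to extend photo hashing to video (purely a sketch with invented names and toy byte-string "frames"; the actual approach in Farid's software is not public) is to fingerprint a sample of frames and treat a clip as a repost when enough of its frames match the flagged database:

```python
import hashlib

def frame_hashes(frames) -> set:
    """Hash each frame of a video, given frames as raw bytes.
    Sampling, say, one frame per second keeps the cost manageable."""
    return {hashlib.sha256(f).hexdigest() for f in frames}

def matches_flagged(frames, flagged: set, threshold: float = 0.5) -> bool:
    """Treat a video as a repost if at least `threshold` of its
    distinct frame hashes appear in the flagged-frame database."""
    hashes = frame_hashes(frames)
    overlap = len(hashes & flagged)
    return overlap / max(len(hashes), 1) >= threshold

# Hypothetical frames; a 24 fps clip has ~24 of these per second.
known_video = [b"frame-a", b"frame-b", b"frame-c", b"frame-d"]
flagged_db = frame_hashes(known_video)

# A repost that swaps in a new intro card still matches 3 of 4 frames.
repost = [b"frame-a", b"frame-b", b"frame-c", b"intro-card"]
print(matches_flagged(repost, flagged_db))  # → True
```

The thresholding is the point of the sketch: requiring only partial overlap gives some robustness against trimmed or re-edited uploads, at the cost of a tunable false-positive rate.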
A slippery slope?
Although Farid maintains that his work amounts to a "very modest step," hashing software in general has raised questions about privacy.
"There's a valid concern about overreach," Vivek Krishnamurthy, assistant director of Harvard Law School's Cyberlaw Clinic, told HuffPost. "Just because a video is flagged, it doesn't mean there are no conditions under which it should circulate, like as part of a news report, or a parody."
"Plus, as with other algorithms, the question is, is the tech working correctly?" he went on. "We don't know enough about how these technologies [like hashing] really work."
Still, Krishnamurthy acknowledged that the government is in a tough position. Militant propaganda is a genuine national security concern, and it can't be ignored.
"Under the First Amendment, it's very difficult for the government to tell anyone to take down anything," Krishnamurthy said. "But YouTube and Facebook are not the government."
The government, however, seems to approve of the current course of action.
"We welcome the launch of initiatives such as the Counter Extremism Project's National Office for Reporting Extremism (NORex) that enables companies to address terrorist activity on their platforms," Lisa Monaco, the President's assistant for homeland security and counterterrorism, told The Washington Post last month.