Google, Self-Radicalization, and Hate Speech: Public Policy Recommendations

I) ONLINE SELF-RADICALIZATION

With the increase of self-radicalization and extremism on social media platforms, Google is taking measures to mobilize a unified effort from the tech industry to counter these trends. While Google is seeking advice from NGOs and counterterrorism firms, its solutions remain tech-driven, and its current hate speech video analysis models often fail to account for the range of pathways to self-radicalization. Indeed, the socioeconomic, cultural, and political reasons for a 13-year-old's online self-radicalization in Denmark could not be more different from those in Nigeria or France, a subtlety difficult to capture within a single algorithm.

Google is thus ultimately limiting the impact of its counterterrorism measures by devoting the bulk of its resources to fighting hate speech and terrorist communication channels rather than understanding what drives the perceived need for self-radicalization. Most of Google's resources are directed at building algorithms to identify and delegitimize extremist profiles, yet these counter-extremist measures also tend to reduce extremism to a surface-level reading of Islamic radicalism. This reading further alienates the Muslim community by pegging Muslims as suspect, inadvertently increasing the potential for self-radicalization. The New York Times recently reported a direct correlation between anti-Muslim searches and anti-Muslim hate crimes, suggesting that countering violent extremism on Google's platform should focus not on removing individual inflammatory videos, but on shifting its autofill and suggested-video algorithms to be more inclusive of the Muslim community, limiting the hate speech echo chamber, and decreasing the perceived need for self-radicalization.

II) GOOGLE’S CURRENT PUBLIC POLICY

Google’s SVP and General Counsel, Kent Walker, recently made a statement outlining steps to decrease self-radicalization online. These measures would be led by a video analysis model tracking words and symbols related to radicalizing content, coupled with a Trusted Flagger program in which both YouTube users and expert NGOs flag problematic videos and users. Videos that do not explicitly breach YouTube’s Terms of Service on hate speech would be placed in a limited state so that they cannot be monetized. Google’s in-house think tank Jigsaw would also analyze keywords associated with self-radicalization (e.g., “Baqiyah wa Tatamadad”), then use AdWords targeting tools through the Redirect Method to push potential recruits toward less polarizing content. If Google’s algorithms find images or names linking users to terrorist groups, the US State Department would be granted full access to those profiles in order to track internal messaging and location. Finally, image-matching technology developed by Google’s engineers would prevent the re-uploading of terrorist content.
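To make the Redirect Method concrete, the sketch below shows the basic shape of a keyword-triggered redirect: a tracked search phrase is matched and the user is pointed toward a curated counter-narrative playlist. The keyword list, playlist identifiers, and function names here are hypothetical; the production system relies on ad-targeting tooling and playlists curated with NGOs rather than a simple lookup like this.

```python
# Illustrative sketch only; all names and entries below are hypothetical.
# Mapping from radicalization-related search phrases to curated
# counter-narrative playlists.
REDIRECT_KEYWORDS = {
    "baqiyah wa tatamadad": "counter_narrative_playlist_01",
    "join the caliphate": "counter_narrative_playlist_02",
}


def suggest_redirect(query):
    """Return a curated playlist ID if the query matches a tracked phrase,
    otherwise None (no intervention)."""
    normalized = query.lower().strip()
    for phrase, playlist_id in REDIRECT_KEYWORDS.items():
        if phrase in normalized:
            return playlist_id
    return None


print(suggest_redirect("Baqiyah wa Tatamadad nasheed"))  # counter_narrative_playlist_01
print(suggest_redirect("cooking videos"))                # None
```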

In addition to improving its own public policy, Google is also calling on the tech industry to join forces. YouTube, Facebook, Microsoft, and Twitter recently mobilized to create the Global Internet Forum to Counter Terrorism (GIFCT), a much-needed effort to limit terrorists’ communication methods on their respective platforms through in-depth research and technical solutions. This effort further manifested itself in the Tech Against Terrorism initiative’s Knowledge Sharing Platform, driven by the GIFCT and the UN Counter-Terrorism Committee Executive Directorate to educate smaller companies on limiting the proliferation of violent extremism on their platforms. One concrete product of this knowledge sharing is the shared hash system, through which tech companies can remove matching extremist media across multiple platforms to curb its spread. Google is also outsourcing its research on manifestations of self-radicalization and violent extremism by providing research grants to counter-extremist organizations such as the Institute for Strategic Dialogue. Finally, Google is engaging in grassroots efforts to resist radicalization through community youth projects.
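The hash system's core idea can be sketched simply: once one platform confirms and removes a piece of extremist media, its hash is added to a shared database so re-uploads can be matched elsewhere. The sketch below is a minimal illustration, assuming a shared hash set like the GIFCT database; real systems use perceptual hashes that survive re-encoding and cropping, whereas SHA-256 is used here only to keep the example self-contained.

```python
import hashlib

# Hypothetical shared database of hashes of confirmed extremist media,
# populated by participating platforms.
shared_hash_database = set()


def media_hash(file_bytes):
    """Compute a content hash for an uploaded file. SHA-256 keeps the sketch
    self-contained; production systems use perceptual hashes instead."""
    return hashlib.sha256(file_bytes).hexdigest()


def register_extremist_media(file_bytes):
    """Called when one platform removes confirmed extremist media: its hash is
    shared so other platforms can match and remove re-uploads."""
    shared_hash_database.add(media_hash(file_bytes))


def should_block_upload(file_bytes):
    """Block re-uploads whose hash matches known extremist media."""
    return media_hash(file_bytes) in shared_hash_database
```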

III) PROBLEMS WITH GOOGLE’S CURRENT POLICY

While these policies are effective for removing videos of English-speaking jihadist recruiters like Anwar al-Awlaki and Adam Yahiye Gadahn, they focus too heavily on removing extremist material instead of studying the regionally distinct methods of, and reasons for, self-radicalization. These measures are also limited by the fact that most YouTube users will not explicitly preach sympathy for ISIS, Al-Qaeda, Boko Haram, or other terrorist organizations on the platform. The emphasis on suspending accounts thus remains unsustainable given the sheer volume of videos uploaded hourly, suggesting the need to reallocate resources toward fighting self-radicalization rather than merely removing extremist content. The policy’s reach is also limited by the lack of a unified definition of ‘extremism.’ Since most of the video analysis is carried out by volunteers and contractors without proper training in the various manifestations of extremism, individual biases and knowledge gaps can pigeonhole extremism as solely violent Islamic extremism, making the Muslim community feel excessively surveilled and thereby increasing alienation and the potential for radicalization.

Jigsaw’s Redirect Method faces its own set of problems stemming from this overemphasis on Islamic extremism and a surface-level understanding of the varieties within Islam. While it successfully redirects users away from self-radicalizing content, users are redirected toward imams denouncing ISIS’s corruption of Islam or testimonials from former extremists. This becomes problematic for a few reasons. First, many jihadists converted to Islam during their radicalization, as seen with Christian Ganczarski, Germaine Lindsay, and Dhiren Barot, or with France’s Al-Qaeda foreign fighters, roughly 25 percent of whom were converts. In other words, converts may be more likely to become jihadists than those raised in the faith, as they may acquire only a basic understanding of Islam simply to join Islamic terrorist groups. A denunciation of ISIS’s corruption of Islam will therefore not deter someone interested predominantly in the terror component of Islamic terrorist organizations. Second, Kent Walker has outlined the importance of these counternarratives in combating extremism, yet counternarratives become dangerous when they assume a single narrative of Islam. When directing people away from anti-Muslim videos, are they redirected to educational videos about Islam? If so, which strand of Islam becomes authoritative: Shia, Sunni, or Salafi? While the Redirect Method may divert people from self-radicalizing content, its efforts are limited by its restrictive understanding of both users’ reasons for radicalization and the various manifestations of Islam.

The Redirect Method’s shortcomings also tie into problems with the suggested-videos algorithm. For search terms like “kill Muslims,” the Redirect Method has successfully decreased the availability of self-radicalizing content. Yet the suggested videos remain extremely polarizing, often related to conspiracy theories and inflammatory, albeit not extremist, content, instead of redirecting users to neutral content.

Google’s public policy efforts to counter extremism also fail to incorporate its users into the solution, as seen with the lack of transparency in the video flagging process. Flaggers do not receive an email giving them deeper insight into the process: are flagged videos analyzed by humans or by machine learning models? What is the timeline for this analysis? Is the analysis regional, or is it outsourced? Will the flagger be notified if the video is taken down? This lack of transparency ultimately hurts the user experience, increasing the potential for alienation from the platform if users feel affected by rhetorical violence yet unsupported by Google’s methodology.

Finally, Google’s grassroots efforts to fight self-radicalization in youth communities become problematic if not implemented on a large scale. If these programs only identify ‘at risk’ communities, they run the risk of generalizing entire neighborhoods as suspect based on a set of socioeconomic and religious characteristics. If they are implemented only in predominantly Muslim communities, they once again reinforce the alienation of the Muslim community and inadvertently increase the potential for self-radicalization.

IV) POLICY RECOMMENDATIONS

While Google’s public policy efforts are a step in the right direction, resources should be reallocated from merely taking down violent extremist rhetoric toward limiting self-radicalization in the first place. Selection bias is a significant factor in Google’s searches: in November 2015, “I hate Muslims” drew 3,600 searches in the United States and “kill Muslims” drew 2,400. Similarly, before the San Bernardino attack, hate-related searches hovered around 20 percent; after the attack, over half of searches were hate-related. Google’s counter-extremism policies should focus on this group, which seeks an echo chamber that increases its polarization through groupthink. Applying the Redirect Method to this group, and tracking geographical and cultural trends in hate-speech searching, would greatly decrease the potential for future anti-Muslim hate videos. The echo chamber would also be greatly limited by ensuring that the autofill and suggested-video algorithms do not direct users to further inflammatory videos.
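One small, concrete step in that direction is filtering autocomplete suggestions against a reviewed blocklist of inflammatory phrases before they are shown. The sketch below illustrates only the general idea; the phrase list and function name are hypothetical and do not reflect Google's actual autofill pipeline.

```python
# Illustrative sketch; the blocklist entries below come from the search terms
# discussed in this section and are not an actual production list.
INFLAMMATORY_PHRASES = {"kill muslims", "i hate muslims"}


def filter_autocomplete(candidates):
    """Drop completions containing reviewed inflammatory phrases."""
    safe = []
    for candidate in candidates:
        lowered = candidate.lower()
        if not any(phrase in lowered for phrase in INFLAMMATORY_PHRASES):
            safe.append(candidate)
    return safe


print(filter_autocomplete(["muslim holidays", "kill muslims"]))
# ['muslim holidays']
```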

Google’s users should also be better incorporated into the flagging process through increased transparency about the methodology and timeline for analyzing a flagged video. While this will not necessarily affect whether the video is taken down, it will make Google’s users feel better incorporated into Google’s business model. The flagging button should also offer a hierarchy, giving users the ability to flag videos differentially according to their perceived risk to the public. If videos do not breach YouTube’s hate speech Terms of Service but accumulate a large volume of high-risk flags, an algorithm could be developed to place those videos further down the search results so that they are not easily accessible; a rough sketch of such a weighting scheme follows.
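The sketch below is a minimal illustration of that idea, assuming a hypothetical three-tier flag hierarchy in which each flag carries a weight by perceived public risk. Videos that stay within the Terms of Service but accumulate a high weighted score are demoted in search ranking rather than removed. All names and thresholds are illustrative.

```python
# Illustrative tiers and weights; the threshold and the 0.1 demotion factor
# are arbitrary placeholders, not a proposed calibration.
FLAG_WEIGHTS = {"low_risk": 1.0, "medium_risk": 3.0, "high_risk": 10.0}
DEMOTION_THRESHOLD = 50.0


def weighted_flag_score(flag_counts):
    """Sum flag counts weighted by tier, e.g. {"high_risk": 4, "low_risk": 2}."""
    return sum(FLAG_WEIGHTS[tier] * count for tier, count in flag_counts.items())


def adjusted_search_rank(base_rank_score, flag_counts):
    """Demote a video's ranking score once its weighted flag score crosses the
    threshold, instead of removing the video outright."""
    if weighted_flag_score(flag_counts) >= DEMOTION_THRESHOLD:
        return base_rank_score * 0.1  # push the video far down the results
    return base_rank_score
```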

Google would also greatly benefit from working with faith group leaders to show that its community outreach efforts aim to integrate rather than alienate. Google could create a partnership with Omar Suleiman, the Muslim American imam from Dallas and founder of the Yaqeen Institute for Islamic Research. Suleiman works to present accurate depictions of Islam on Google in order to decrease anti-Muslim hate speech and self-radicalizing content, as seen in his reports breaking down misunderstandings of Islamic concepts such as taqiyya, a concept often co-opted by Islamophobes for anti-Muslim propaganda. By partnering with community leaders such as Suleiman to direct its tech solutions, Google would develop a deeper understanding of the religious, socioeconomic, and regional manifestations of self-radicalization and hate speech, greatly increasing the power of its video analysis algorithms and counterterrorism measures. This would also help Google’s community-building efforts to combat self-radicalization succeed, as it would signal to these communities that they are not being unfairly surveilled as potential jihadists merely because of their religion. Instead, it would show that Google seeks to incorporate marginalized communities both through its technical algorithms and through its physical grassroots efforts.

 
