After Facebook, Google announces 4 steps to combat terrorism online
Google has stepped up its efforts to tackle terrorism online, first by identifying and blocking such content, and also by promoting content that counters terrorism and hate.
Key highlights:
- Google takes aim at terrorist content online with four steps
- Google will increase the use of technology to help identify extremist and terrorism-related videos
- Google will increase the number of independent experts in YouTube's Trusted Flagger programme
Just three days after Facebook launched its new effort to tackle terrorism online, Google on Monday announced four steps it is taking to fight online terror. In a note published on Google's blog, Kent Walker, General Counsel at Google, said that terrorism is an attack on open societies, and that addressing the threat posed by violence and hate is a critical challenge for us all.
The note further said that Google is working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. “There should be no place for terrorist content on our services,” Walker said.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” he added.
The first step, he said, is that Google will increase the use of technology to help identify extremist and terrorism-related videos. Walker said the company will devote more engineering resources to applying its most advanced machine learning research to train new “content classifiers” that help it more quickly identify and remove extremist and terrorism-related content.
“This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user. We have used video analysis models to find and assess more than 50% of the terrorism-related content we have removed over the past six months,” Walker said.
The second step is that Google will increase the number of independent experts in YouTube's Trusted Flagger programme.
“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90% of the time and help us scale our efforts and identify emerging areas of concern,” the blog said.
Google further said it will expand this programme and support the participating organisations with operational grants, allowing it to benefit from the expertise of specialised groups working on issues like hate speech, self-harm and terrorism.
“We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists,” Walker said.
The third step is that Google will take a tougher stance on videos that do not clearly violate its policies, giving the example of videos that contain inflammatory religious or supremacist content.
In future, such videos will appear behind an interstitial warning and will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find, Walker said.
“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” he said.
The fourth step is that YouTube will expand its role in counter-radicalisation efforts by promoting voices that speak out against hate and radicalisation.
“Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe,” he said.
Google said this is a promising approach that harnesses the power of targeted online advertising to reach potential ISIS recruits and redirect them towards anti-terrorist videos that can change their minds about joining.
“In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” said Walker.
Google also said it has recently committed to working with industry colleagues, including Facebook, Microsoft and Twitter, to establish an international forum to share and develop technology, support smaller companies and accelerate joint efforts to tackle terrorism online.