Katie O’Donovan, Director of Government Affairs and Public Policy, Google UK
Supplementary written evidence (FEO0125)

 

House of Lords Select Committee on Communications and Digital inquiry into Freedom of Expression Online

 

 

Thank you for inviting me to give evidence to the Communications and Digital Committee’s inquiry into freedom of expression. This is a critically important and topical area of debate, particularly following the recent publication of the Government’s draft Online Safety Bill. I welcomed the opportunity to contribute to your inquiry. During the session, I offered to write to the Committee with further information on some of the topics raised, which you will find below.

 

How Google supports freedom of expression

 

At Google, our mission is to organise the world’s information and make it universally accessible and useful. We have always been a home for freedom of expression, both by providing our users with access to that information and by connecting them with other people and communities around the world.

 

We believe the internet has had an immensely positive impact on society but we also recognise that online platforms, government and civil society have a responsibility to work together to defend freedom of expression and protect users from harm.

 

YouTube’s policies on Covid-19 misinformation

 

At YouTube, we have crafted our policies to explicitly prohibit content that may cause harm to individuals. Since the outbreak of the Covid-19 pandemic, we have broadened these policies to cover medical misinformation in order to protect our users. As such, we do not allow content that contradicts medical information from local health authorities (in the UK, the NHS) or the World Health Organisation (WHO), including about treatment, prevention, diagnosis, transmission, social distancing, or the existence of Covid-19.

 

As we discussed during our evidence session, YouTube took the decision in March 2021 to remove videos from an event featuring Governor Ron DeSantis of Florida because they violated our Covid-19 medical misinformation policies. Specifically, they included content that contradicted the consensus of local and global health authorities regarding the efficacy of masks in preventing the spread of Covid-19.

 

We do allow videos that would otherwise violate our content policies to remain on the platform if they contain sufficient educational, documentary, scientific or artistic content (EDSA). However, we require that context to appear in the imagery or audio of the video itself: in other words, it must be clear to the viewer that the creator’s aim is not to promote or support the concept that violates our policies. EDSA exceptions are a critical way we make sure that important speech stays on YouTube, while protecting the wider YouTube ecosystem from harmful content.

 

Illegal content, including child sexual exploitation and abuse

 

We take the fight against child sexual abuse material (CSAM) very seriously, and preventing this content from appearing on our platforms is a critical priority for us. Google has pioneered industry-leading initiatives to detect and report the presence of CSAM on our platforms and to prevent users from finding such content through them. If we become aware of CSAM in Search, the offending URL is removed from our search index and reported to the National Center for Missing and Exploited Children (NCMEC). If we detect a CSAM-seeking query on Search, we display a warning message that viewing and sharing CSAM is illegal, and provide links to resources to help users report this content to the Internet Watch Foundation (IWF), along with information from relevant NGOs that can provide appropriate support, including Childline[1] for victim support and Stop It Now[2] for those who are concerned about their own behaviours. We also provide information on how to report to the police for those concerned about the immediate safety of a child.
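By way of illustration, the following is a minimal sketch of the Search workflow described above, assuming simple stub detectors. Every function name and data structure here is hypothetical and does not represent Google’s actual systems.

```python
# Minimal sketch of the Search workflow described above. All names are
# hypothetical stubs for illustration, not Google's actual systems.

SEARCH_INDEX: set[str] = set()      # stand-in for the search index
KNOWN_CSAM_URLS: set[str] = set()   # stand-in for a hash-matching database
CSAM_QUERY_TERMS: set[str] = set()  # stand-in for a trained query classifier

def report_to_ncmec(url: str) -> None:
    pass  # placeholder for filing the report to NCMEC

def handle_search(query: str, candidate_urls: list[str]) -> dict:
    results = []
    for url in candidate_urls:
        if url in KNOWN_CSAM_URLS:
            SEARCH_INDEX.discard(url)  # remove the offending URL from the index
            report_to_ncmec(url)       # and report it to NCMEC
        else:
            results.append(url)

    response: dict = {"results": results}
    if any(term in query.lower() for term in CSAM_QUERY_TERMS):
        # CSAM-seeking query: warn, and point to reporting and support routes.
        response["warning"] = ("Viewing and sharing child sexual abuse "
                               "material is illegal.")
        response["help_links"] = [
            "https://www.iwf.org.uk/",             # report to the IWF
            "https://www.childline.org.uk/",       # victim support
            "http://seek-help.stopitnow.org.uk/",  # behavioural support
        ]
    return response
```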

 

In addition to the steps outlined above, our algorithm is designed to block all pornography for queries that Google’s Search algorithm understands to be seeking CSAM; to eliminate results for any query where those results are detected to be sexualising children; and to show no images of children for adult pornography-seeking queries, to avoid any association between children and pornography.
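These three safeguards can be expressed as a short, hypothetical filtering sketch. The query-intent labels and result flags below are assumptions for illustration only, not Google’s actual classifiers.

```python
# Hypothetical sketch of the three safeguards described above. The
# intent labels and result flags are illustrative assumptions.

def filter_results(query_intent: str, results: list[dict]) -> list[dict]:
    """query_intent is one of 'csam_seeking', 'adult_porn', or 'other';
    each result dict carries boolean flags set by upstream (stub) classifiers."""
    filtered = []
    for r in results:
        # Safeguard 2: never return results that sexualise children.
        if r.get("sexualises_children"):
            continue
        # Safeguard 1: block all pornography when the query seeks CSAM.
        if query_intent == "csam_seeking" and r.get("is_pornographic"):
            continue
        # Safeguard 3: no images of children on adult-pornography queries.
        if query_intent == "adult_porn" and r.get("depicts_children"):
            continue
        filtered.append(r)
    return filtered
```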

 

We also work closely with third parties, including some websites that host pornography, to provide them with the tools necessary to identify and remove CSAM. Technologies that we have developed and make available to third parties include the Content Safety API,[3] a classifier that uses programmatic access and artificial intelligence to help partners classify and prioritise billions of images for review, and CSAI Match,[4] which allows partners to identify videos containing known CSAM.
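Access to the Content Safety API is granted on application, so the sketch below only illustrates the general pattern of using such a classifier to prioritise a human review queue. The endpoint URL and response fields are hypothetical, not the documented API.

```python
# Hypothetical sketch of prioritising an image-review queue with a
# classifier like the Content Safety API. The endpoint URL and JSON
# fields below are illustrative assumptions, not the documented API.

import requests

CLASSIFIER_ENDPOINT = "https://example.com/v1/classify"  # placeholder URL

def priority_score(image_bytes: bytes, api_key: str) -> float:
    """Return a score where higher means the image should be reviewed sooner."""
    resp = requests.post(
        CLASSIFIER_ENDPOINT,
        params={"key": api_key},
        files={"image": image_bytes},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["priority"]  # hypothetical response field

def triage(images: list[bytes], api_key: str) -> list[bytes]:
    """Order a review queue so human moderators see likely CSAM first."""
    return sorted(images, key=lambda img: priority_score(img, api_key),
                  reverse=True)
```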

 

I hope the above information is of use to the Committee’s inquiry. I would also like to thank you again for inviting us to give evidence, and for the Committee’s continued interest in protecting freedom of expression online.

 

 

11 June 2021


 


[1] https://www.childline.org.uk/

[2] http://seek-help.stopitnow.org.uk/

[3] https://static.googleusercontent.com/media/protectingchildren.google/en/static/pdf/content-safety-api.pdf

[4] https://www.youtube.com/csai-match/