1. Google welcomes the opportunity to submit to this important inquiry. COVID-19 has transformed our lives, economic activity and the way political institutions operate. We are keen to play our part in the nation’s response to these challenges. Google has played a significant role during this period: providing data and insight to public institutions through our COVID-19 Community Mobility Reports; helping people find critical and authoritative NHS information; supporting learning through our partnership with the Department for Education and the Oak National Academy; and helping Government and businesses with over £25 million in ad credits.
2. As people spend more time online, Google is working closely with government, law enforcement, children’s groups and online safety experts to assess and respond to any increased risk of online harms that citizens and users may face. Keeping people safe on our platforms is our utmost priority and, to this end, we continue to liaise closely with the Home Office, NCA and a wide range of expert child and online safety organisations to ensure we are operating on the best information.
3. This submission provides an overview of Google’s view on the role of an independent regulator for online harms, and then focuses primarily on those harms that are within the purview of the Home Office, namely child sexual exploitation and abuse (CSEA), extremism, terrorism and hate speech. It also provides information on Google’s response to COVID-19 related misinformation, scams and fraud.
Support for Online Harms Regulation
4. Google welcomes the debate on regulatory oversight in the UK, including the role of an independent regulator in tackling online harms. We recognise that the scrutiny of lawmakers and regulators has contributed to the internet’s vitality, for example through competition and consumer protection laws, advertising regulations, and copyright rules. We support the UK Government’s twin goals to make the UK the safest place in the world to be online and the best place to start and grow a digital business.
5. We believe the internet, and our tools, have an immensely positive impact on society. Google’s tools boost creativity, productivity, learning and access to information. They have enabled economic growth, boosted skills and opportunity, and, along with other digital technologies, they contribute to making social and economic activity possible during the lockdown.
6. We also recognise that real-world harm can exist online. As such, we have not waited for regulation to address problematic content. We have made significant investments in technology and human resources to reduce online harms (see e.g. paras. 16-19) and continue to engage with policymakers in the UK on the appropriate oversight for content sharing platforms, such as social media and video sharing sites.
7. It is our belief that the best regulatory approaches are rooted in the following principles:
● Clarity: It is important for governments to draw clear lines between legal and illegal speech, based on evidence of harm and consistent with norms of democratic accountability and international human rights. Without clear definitions, there is a risk of arbitrary or opaque enforcement that limits access to legitimate information.
● Suitability: Oversight frameworks should recognise the different purposes and functions of online services. Rules that make sense for social networks, video-sharing platforms, and other services that are primarily designed to help people share content with a broad audience may not be appropriate for search engines, enterprise services, file storage, communication tools, or other online services, where users have fundamentally different expectations and protections. Different types of content may likewise call for different approaches.
● Transparency: Meaningful transparency promotes accountability. We launched our first Transparency Report more than eight years ago, and we continue to extend our transparency efforts over time. Done thoughtfully, transparency can promote best practices, facilitate research, and encourage innovation, without enabling abuse of processes. All our latest Transparency Reports can be accessed online.
● Flexibility: Google has pushed the boundaries of computer science in identifying and removing problematic content at scale. These technical advances require flexible legal frameworks, not static or one-size-fits-all mandates. Likewise, legal approaches should recognise the varying needs and capabilities of startups and smaller companies so as to ensure they do not hinder the next generation of companies from flourishing.
● Overall quality: The scope and complexity of modern platforms requires a data-driven approach that focuses on overall results rather than anecdotes. While we will never eliminate all problematic content, we should recognise progress in making that content less prominent. Reviews under the European Union codes on hate speech and disinformation offer a useful example of assessing overall progress against a complex set of goals.
● Cooperation: International coordination should strive to align on broad principles and practices. While there is international consensus on issues like child sexual abuse material (CSAM), in other areas individual countries will make their own choices about the limits of permissible speech, and one country should not be able to impose its content restrictions on another.
8. We believe the UK has an exceptional opportunity to develop world-leading regulatory standards. We look forward to continued engagement with the Government and Parliament to ensure a fair and effective regime.
Protecting Children and Others from Online Harms
9. Child safety is a critical priority at Google, and CSEA has absolutely no place on our platforms. We seek to prohibit the use of our services to facilitate the spread of CSAM, and deter the use of our platforms to exploit children. When we discover CSAM, we remove and report the content to the appropriate authorities and terminate the user account. Preventing child sexual abuse is an ongoing fight, and government, law enforcement, NGOs, the public at large and industry each have a vital part to play.
10. We are working closely with each part of this ecosystem to understand how the risks associated with the lockdown conditions may be manifesting, and consequently how best to respond. These groups are taking proactive measures to share information and best practices to put child safety first given the unprecedented conditions we find ourselves in. This cooperation is critical.
11. Google is in regular contact with the specialist teams at the Home Office and NCA CEOP, two of the most important state bodies working on tackling CSAM. The Home Office continues to coordinate the close working relationship between the Five Eyes countries and those companies that jointly created the Voluntary Principles to Counter Online Child Exploitation and Abuse (hereafter “Voluntary Principles”) with Five Eyes governments, launched in March 2020. Google provided extensive input into the drafting and development of the Voluntary Principles and was among the first companies to endorse them. We continue to work with NCA CEOP and the Home Office to understand the evolving risk landscape, and respond accordingly. Google welcomes this work, as well as the Home Office’s global leadership on tackling CSAM.
12. Child safety NGOs have also been swift to act. The UK Safer Internet Centre, a partnership of the Internet Watch Foundation, Childnet International and South West Grid for Learning (SWGfL), has coordinated a series of cross-industry and hotline meetings to share information on the impact of COVID-19 on child safety. Google is a leading supporter of each of these organisations and is highly engaged in this network, sharing insights and best practices as well as providing financial support.
13. Google has taken a range of concrete actions in response to shared concerns about increased risk to children as a result of COVID-19. With the group of companies that led the creation of the Voluntary Principles we created the “Stay Safe at Home. Stay Safe Online.” campaign, in association with End Violence Against Children (EVAC). Launched on 17th April, this campaign published bespoke video content to educate both parents and children about potential risks during this time, and how to respond. These video ads ran online across all the Five Eyes countries, including the UK.
14. In addition to the campaign videos, Google (and the other companies) worked with EVAC to create an online one-stop-shop of our safety resources on the EVAC website. This includes clear signposts to Google resources to help both parents and children with digital safety, including:
○ Detailed information on how to report concerning behaviour to Google.
○ Our Be Internet Legends resources which include a family guide, parent’s guide and children-specific activities.
○ Family Link, our safety-by-design app which allows parents to set boundaries for children on their devices. Parents receive detailed information about their child’s screen time and can set limits on it; control which apps are downloaded onto the device; limit how long the device itself can be used; and use location settings to know where their children are.
○ YouTube Kids, the specially designed YouTube app that creates a contained environment ensuring a safe experience for children.
○ Google Safety Centre, which provides detailed information and step-by-step guides on privacy and security settings.
Google is promoting the campaign across a range of our own platforms, such as embedding the videos in YouTube Kids, and including the advice in a YouTube Kids email to parents. As children are required to homeschool, we have also placed the EVAC resources on our Teach from Home hub.
15. With heightened concerns about the risks under lockdown, we have also enhanced our deterrence messaging regarding CSAM and support for user reporting. This has previously been done by using targeted advertising at the top of search results, triggered by a list of CSAM-seeking terms. Rather than rely on ads, we have launched a more prominent, UK-specific Google OneBox (see image below) that will surface key reporting and deterrence information. Similar OneBoxes are already in use for COVID-19 search terms, providing users with clear and authoritative NHS information on symptoms.
New deterrence OneBox linking to IWF Reporting Hotline and microsite
16. In the UK, the OneBox provides an in-depth search result at the top of the results page with an explicit warning that CSAM is illegal. Critically, it invites users to report this type of material directly to the IWF.
17. The OneBox also provides a link to Google’s dedicated UK microsite with further information about how to contact UK law enforcement to report immediate concerns, as well as support resources for survivors and those at-risk of offending. These include links to: NCA CEOP, the Government’s own COVID-19 child safety advice; NSPCC Childline; and StopItNow, the UK’s leading charity in supporting potential and existing offenders.
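The query-triggered surfacing described above can be sketched as follows. This is purely illustrative: the term list, panel fields and matching logic below are invented placeholders, and the actual term list and Search triggering logic are not public.

```python
# Illustrative sketch: a search query is checked against a deterrence-term
# list; on a match, a reporting/support panel is surfaced above organic
# results. Terms here are harmless placeholders, not the real list.
DETERRENCE_TERMS = {"example-seeking-term-a", "example-seeking-term-b"}

PANEL = {
    "warning": "Child sexual abuse imagery is illegal.",
    "report_link": "https://report.iwf.org.uk/",
    "support_link": "https://www.stopitnow.org.uk/",
}

def panel_for_query(query: str):
    """Return the deterrence panel if the query matches the term list,
    otherwise None (no panel is shown)."""
    tokens = set(query.lower().split())
    if tokens & DETERRENCE_TERMS:
        return PANEL
    return None
```

The same pattern underlies the COVID-19 OneBoxes mentioned in paragraph 15: a curated trigger list mapped to an authoritative information panel.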
18. The “Stay Safe At Home. Stay Safe Online.” campaign, specialised OneBox and dedicated microsite have been developed with unprecedented urgency amid concerns about the heightened risk resulting from COVID-19 measures. They underline Google’s commitment in the Voluntary Principles to “provide the user with details of how to report illegal material and, when appropriate and where available, information on interventions for those who are at risk of offending (for example, providing links to support services)”.
19. These actions build on Google’s ongoing work to tackle CSEA. We continue to use a combination of human review and automated, AI-powered technologies to identify and remove CSAM. This includes well-established image hashing technology, as well as our own CSAI Match, a first-of-its-kind advanced video fingerprinting and matching technology which detects CSAM in uploaded video files (technologically much more challenging than identifying still images) and is uniquely resistant to image manipulation. We have also developed the Content Safety API, which uses AI to identify potentially illegal content that has not been seen before, helping reviewers find and report content seven times faster. We offer this service free of charge to NGOs and private companies to support their work protecting children. More information on all our tools to tackle CSAM, as well as our partnerships, is available on our Protecting Children site.
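At its simplest, hash-based detection compares an uploaded file’s fingerprint against a database of known material. The sketch below uses an exact cryptographic hash for clarity only; production systems use perceptual fingerprints that survive re-encoding and manipulation, and the hash value and database here are illustrative, not Google’s actual implementation.

```python
import hashlib

# Illustrative database of known fingerprints (hex digests). Real systems
# match near-duplicates via perceptual hashing, not exact bytes; SHA-256
# is used here only to keep the sketch self-contained.
KNOWN_HASHES = {
    # SHA-256 digest of the bytes b"test", standing in for a known item
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of the file contents."""
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes) -> bool:
    """Return True if the upload matches a known fingerprint, in which
    case it would be removed, reported and the account actioned."""
    return file_hash(data) in KNOWN_HASHES
```

A classifier such as the Content Safety API complements this by scoring content that has never been hashed before, so reviewers can triage new material rather than only re-detecting known items.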
20. These technologies have been critical to our safety teams successfully adapting to the challenging working conditions resulting from lockdown measures. As we set out in a blog post in mid-March, we have taken the utmost care to balance the safety of our workforce and people who use our platforms and services. While workplace restrictions are in place, we are using greater automation to remove some content with less human review, allowing us to ensure safe working conditions while continuing to protect our ecosystem.
21. Removing illegal content has remained our utmost priority. We have communicated these measures to government, law enforcement and NGO partners and many of them have taken similar measures. We continue to be guided by the best evidence shared by our range of partners and adapt our responses accordingly.
22. The Revenge Porn Hotline, operated by SWGfL, has reported an increase in calls during this time. Google has worked with the Revenge Porn Hotline since its creation in 2015 and we have specific policies prohibiting unwanted explicit images (“revenge porn”); individuals can directly request that such images of themselves are removed.
Countering Terrorism, Violent Extremism and Hate Speech
23. There are similar concerns about terrorist, far-right and hate groups seeking to exploit the coronavirus crisis to their own ends. Google is in frequent contact with the Home Office and relevant UK bodies to understand and discuss the threat level. Google considers violent extremism to be among the most egregious types of content. We have robust policies that prohibit content that promotes terrorism, such as that which glorifies terrorist acts or incites violence. Where content violates our policies we remove it from our platforms. Where appropriate, we also notify law enforcement agencies. We will take actions according to our policies, as well as under our commitments in the Christchurch Call to Eliminate Terrorist & Violent Extremist Content Online and through the Global Internet Forum for Countering Terrorism (GIFCT).
24. Over the last year, YouTube strengthened its hate and harassment policies to make those rules even more precise and clear. We also recognise that COVID-19-related hate speech may be directed at protected or minority groups. As such, we have enhanced YouTube’s Community Guidelines to ensure that any content that leverages the crisis and harmfully targets a group based on the attributes listed in our Guidelines—including age, race, ethnicity, immigration status, or religion—violates our policies and will be removed. This includes content that encourages the spread of COVID-19 to any group of people based on the attributes listed in our Community Guidelines. Similarly, content that targets individuals based on those attributes would be reviewed under our harassment policy.
25. As with our extensive efforts against CSAM, we have adapted our procedures to ensure we can continue to prioritise this content. Our industry-leading tools are similarly casting a wider net to identify and remove violent extremist content and hate speech. In line with our precautionary approach, we expect a greater number of false positives as we temporarily increase automation.
26. To ensure we are able to be as effective as possible, we continue to prioritise referrals that come via our Trusted Flaggers. This is a network of specialist individuals, government agencies and NGOs that are given additional tools to flag content directly to our safety teams, receiving the highest priority. We also continue to prioritise any content flagged by the Counter Terrorism Internet Referral Unit (CTIRU).
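The triage ordering described in paragraph 26 can be sketched as a priority queue. The priority levels below are assumptions for illustration; the actual ranking rules used by our safety teams are not public.

```python
import heapq
import itertools

# Illustrative priority levels (lower value = reviewed sooner); the real
# triage rules are internal and more nuanced than a fixed table.
PRIORITY = {"ctiru": 0, "trusted_flagger": 1, "user_report": 2}

class ReviewQueue:
    """Flagged items are reviewed in order of flagger priority, then
    first-come-first-served within a priority level."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves arrival order

    def flag(self, content_id: str, source: str) -> None:
        heapq.heappush(self._heap,
                       (PRIORITY[source], next(self._counter), content_id))

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]
```

Under this scheme a CTIRU referral submitted after an ordinary user report is still reviewed first, which is the behaviour the Trusted Flagger programme is designed to guarantee.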
27. Between mid-March and mid-April, we did not observe notable increases in hate speech. We are actively monitoring trends around COVID-19 content to ensure that our policies and enforcement evolve as the content does.
28. Similarly, despite reports of jihadist actors attempting to use the crisis to support their propaganda and recruitment efforts, our Trust & Safety teams have not observed an increase in such content on YouTube. They continue to monitor closely for any indication of YouTube being used for these purposes.
29. We remain vigilant and will continue to engage closely with government, law enforcement and our industry partners in the GIFCT to ensure we act with urgency to identify and remove this content.
Misinformation and COVID-19
30. Since the beginning of the year, search interest in COVID-19 has grown around the world. We know that receiving trustworthy information at the right time is critical to saving lives and tackling this pandemic.
31. To help people find the information they need across our products, we are partnering with the NHS and Government to bring our users authoritative information in a rapidly changing environment. This work includes:
● Surfacing NHS and gov.uk information on Google Search through the use of new OneBoxes and SOS Alerts. This new format organises the search results page to help people easily navigate trusted resources.
● Promoting Government and NHS guidance on the Google Homepage. These efforts include the Government’s "Stay Home" campaign, doodles and messaging on our homepage which went live in record time after lockdown measures were put in place.
● On YouTube, elevating the NHS and other authoritative sources so users get the latest COVID-19 information. New features include: a COVID-19 news shelf on our homepage that features stories from the NHS, WHO and trusted news sources; health information panels in search results that feature NHS information on COVID-19 symptoms, prevention and treatment; and links to the NHS on the watch pages of COVID-19-related videos. These information panels have received over 20 billion impressions worldwide and, in the UK, consumption of authoritative news sources has increased six-and-a-half-fold as a result of our efforts.
● A new and dedicated COVID-19 website providing NHS and gov.uk resources.
32. We take active steps to detect and remove COVID-19 related misinformation that contradicts guidance from health authorities and may result in real-world harm. Specifically, we have:
● Adapted YouTube Community Guidelines for the COVID-19 crisis. Our policies prohibit content that explicitly disputes the efficacy of NHS or WHO advice, or that may lead people to act against that guidance and lead to real world harm. We enforce these policies diligently and continue to reduce recommendations of borderline content that could prove harmful to users. This builds on existing policies around medical misinformation and risk of harm. We have removed thousands of misinformation and disinformation videos, and continue to enforce robustly.
● On Google Ads, our policies do not allow ads that potentially capitalise on or lack reasonable sensitivity towards a sensitive event, such as a public health emergency. We are making exceptions for COVID-19-related ads from government organisations, healthcare providers, non-governmental organisations, intergovernmental organisations, verified election ads advertisers and managed private sector accounts with a history of policy compliance who want to get relevant information out to the public. Ads that are allowed still have to abide by our policies, which also disallow the promotion of harmful medical or health claims and practices. In addition, we are enforcing a temporary restriction on personal protective equipment and we are taking additional steps to prevent artificially inflated prices that limit or prohibit access to other essential items on our network.
33. There is no silver bullet when it comes to tackling COVID-19 misinformation. Partnership with scientists, journalists, public figures, technology platforms and others remains critical. To this end, our efforts include:
● $250M in ad grants to global government agencies to provide critical information on how to prevent the spread of COVID-19 and other relief measures to local communities.
● Supporting coronavirus fact-checking and verification efforts through more than $6.5 million in funding from the Google News Initiative to fact-checkers and nonprofits fighting misinformation around the world, with an immediate focus on COVID-19. In addition, we’re working to increase access to data, scientific expertise and fact checks through support for collaborative databases and providing expert insights to fact-checkers, reporters and health authorities, such as COVID-19 Google Trends down to the city level.
● Helping journalists and news publishers through COVID-19 challenges. A Journalism Emergency Relief Fund is providing grants to hundreds of small, medium and local news organisations across the UK. Large publishers are being given fee holidays on Google platforms to reduce their business costs and focus on journalism.
COVID-19 Scams and Fraud
34. We have seen a significant rise in attempted scams related to COVID-19 and are working across our products to protect our users from these threats, and to support the broader security ecosystem. Our actions to date include:
● Findings on COVID-19 and Online Security Threats: Google’s Threat Analysis Group (TAG) currently detects 18 million COVID-19-related malware and phishing messages on Gmail each day. We block more than 99.9% of these emails from ever reaching our users’ inboxes. While we are not seeing an uptick in the overall volume of malware and phishing emails, we are seeing a change in tactics as bad actors seek to exploit the current pandemic.
● Informing users on how they can protect themselves against COVID-19 scams: We recently published a blog post that highlights the measures we are taking to protect users from COVID-19 scams and the tips they should follow when it comes to evaluating content online. COVID-19 is increasingly a common lure for phishing and malware attacks, and our blog post provides actionable advice to help users identify and thwart these attacks. This includes a link to a new, dedicated microsite to help users identify and protect themselves from COVID-19-related scams. To date, the site has had 167,000 unique visitors, with over 23,000 hours of total advice read.
● COVID-19 Phishing and Malware Targeted at Businesses and Organisations: Google Cloud recently published examples of COVID-19-related phishing and malware threats we are blocking in Gmail, sharing steps for admins to deal with these threats effectively and detailing best practices for users to avoid them.
● Vulnerability Rewards Program (VRP): We have long enjoyed a close relationship with the security research community. To honour the cutting-edge external contributions that help us keep our users safe, we maintain a Vulnerability Reward Program for Google-owned web properties, running continuously since November 2010. Every VRP Bug Hunter who submitted at least two remunerated reports from 2018 to April 2020 will be eligible for a $1,337 research grant, in recognition of the challenges COVID-19 has created for the research community; we hope these grants will support Bug Hunters during these uncertain times. The grants may be used to support security and anti-abuse research related to COVID-19; Bug Hunters who do not want a grant may choose to donate it, and at our discretion we may match that donation.
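The change in phishing tactics described in paragraph 34 can be illustrated with a toy rule-based filter. This is purely illustrative: Gmail’s actual defences rely on machine-learned classifiers, sender reputation and many other signals, not a fixed keyword list, and the terms, weights and threshold below are invented.

```python
# Toy heuristic illustrating how topical lures (here, pandemic-related
# terms) combine with pressure tactics to raise a phishing score.
# Invented signals only; not Gmail's actual detection logic.
LURE_TERMS = {"covid", "coronavirus", "stimulus", "vaccine"}
URGENCY_TERMS = {"urgent", "verify", "suspended", "act now"}

def phishing_score(subject: str, body: str) -> float:
    """Combine weighted signals into a score in roughly [0, 1]."""
    text = f"{subject} {body}".lower()
    score = 0.0
    if any(t in text for t in LURE_TERMS):
        score += 0.5          # topical lure exploiting the crisis
    if any(t in text for t in URGENCY_TERMS):
        score += 0.3          # pressure tactics
    if "http://" in text:
        score += 0.2          # unencrypted link, a weak extra signal
    return score

def should_block(subject: str, body: str, threshold: float = 0.7) -> bool:
    """Block when combined signals exceed the (invented) threshold."""
    return phishing_score(subject, body) >= threshold
```

The point of the sketch is the tactic shift: the same urgency template that scored below threshold before the pandemic crosses it once a COVID-19 lure is added, which is why filters must retrain as lures change.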
35. Google has responded to the COVID-19 crisis with the utmost urgency, redeploying resources to support governments, businesses, schools and users as the pandemic unfolds. We have created new products and new processes at unprecedented speed to ensure our platforms are safe. We continue to use our unique insights to support the Government and NHS to keep citizens informed through the crisis, and will adapt and evolve our response to meet the UK’s needs.
 Google Transparency Report
 Gov.uk (2020) Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse
 EVAC Stay Safe At Home. Stay Safe Online. The videos can be viewed on YouTube here: https://www.youtube.com/watch?v=puxGG29jGkc&list=PLVU1HVeE8_YAhoFqIV2iT08vlRtreZ2cb&index=8
 See Google For Families Help
 See Be Internet Legends Digital Safety Resources
 See Google Family Link
 See YouTube Kids
 See Google Safety Centre
 See Google Teach from Home
 See Protecting Children From Abuse
 See Google Fighting Child Sexual Abuse Online
 See Actions to reduce the need for people to come into our offices and Protecting Our Extended Workforce And The Community, published 16/03/2020
 Remove unwanted & explicit personal images from Google
 See Christchurch Call to Eliminate Terrorist & Violent Extremist Content Online for more information
 See GIFCT website for more information
 See YouTube Policies and Safety
 See Google COVID‑19 Information & Resources
 See YouTube The Four Rs of Responsibility, Part 2: Raising authoritative content and reducing borderline content and harmful misinformation
 See Google Ads Help Coronavirus disease (COVID-19) Google Ads policy updates
 See COVID-19: $800+ million to support small businesses and crisis response
 See Google News Initiative COVID-19: $6.5 million to help fight coronavirus misinformation
 See Google Coronavirus Search Trends
 See Google News Initiative Journalism Emergency Relief Fund
 See Google Findings on COVID-19 and online security threats
 See Google Helping you avoid COVID-19 online scams
 See Google Protecting businesses against cyber threats during COVID-19 and beyond
 See Google Research grants to support Google VRP bug hunters during COVID-19