Google - written evidence (FEO0047)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

Executive Summary

 

1.              Google appreciates the opportunity to provide comments on the House of Lords Communications and Digital Committee’s inquiry into freedom of expression online. The UK is at a pivotal moment in determining its approach to regulating the internet, and we welcome the chance to participate in the ongoing dialogue on critically important issues such as the interplay between freedom of expression and efforts to combat harmful content online.

 

2.              At Google, our mission is to organise the world’s information and make it universally accessible and useful. While we believe the internet has an immensely positive impact on society, we recognise that the open nature of the internet is sometimes exploited by bad actors who can harm others. We take these threats seriously and believe that online platform operators, government, and civil society have a shared responsibility to work together to address these risks while also protecting freedom of expression and other societal interests.

 

3.              Google is supportive of carefully crafted and appropriately tailored regulation that addresses problematic content. We have contributed constructively to discussions surrounding the UK Online Harms White Paper. We have raised the importance of having clear definitions for illegal content and agree that platforms should have efficient processes to remove such content when they are notified of it. The UK should also introduce Good Samaritan protections to provide a legal basis for innovation in content moderation.

 

4.              However, we have concerns about efforts to regulate the category of lawful-but-harmful content. The boundaries of this category are often subjective and context-dependent, which makes it difficult to establish a clear definition that would be appropriate across different services and contexts. Therefore, a strict legal obligation to remove or restrict this inherently ambiguous category of content would create legal uncertainty that may lead many platforms to significantly restrict the availability of user-generated content and thereby undermine freedom of expression interests.

 

5.              As an alternative to broad rules for the ambiguous category of lawful-but-harmful content, the government should consider an approach that focuses on transparency and accountability. For example, Ofcom, under the new Online Harms regime, could require in-scope platforms to provide: (i) clear and accessible acceptable use policies appropriate for the specific service; (ii) clear and accessible processes for reporting service misuse; and (iii) effective processes for dealing with such reports. Platforms could also be evaluated for the consistent enforcement of their stated policies and made subject to reasonable penalties for systemic failures to comply with such policies. Such an approach would push platforms to develop thoughtful content moderation policies that are appropriate to their size, nature, purpose, and audience, and to develop effective tools for enforcing those policies.

 

6.              Our submission further details our views on how the government should approach the complex challenge of combating online harms while preserving freedom of expression. We thank you for your continued engagement on this issue and look forward to further discussion.

 

1.                Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

1.1              The internet has played a central role in the exchange of information and ideas. Its ability to serve as a free and open platform for global communication is therefore important to protect. Threats to freedom of expression online vary in their nature. For example, bad actors may post illegal or harmful content that undermines the credibility of platforms and discourages or intimidates other speakers, or they may submit bad-faith requests to remove content. This is why we invest heavily in the technology, teams and policies that help us move quickly to remove content that is illegal or against our content policies. Other threats can stem from well-intentioned efforts to combat bad actors online that have the inadvertent effect of restricting freedom of expression, whether through platforms accidentally removing legitimate speech or through overly broad regulations that pressure companies to drastically limit the amount of user content available on their platforms. To balance this risk, YouTube has robust appeals mechanisms that allow content creators to request a new review of a content removal decision and to cite exceptions for educational, documentary, scientific and artistic content (more about these exceptions[1]).

 

1.2              Sometimes these well-intentioned efforts to address content challenges can have an unequal impact. For example, proposals to end anonymity online may have a particularly severe impact on the freedom of expression of certain minority groups that disproportionately rely on the internet as a medium of communication. For groups or individuals persecuted or marginalised in their local communities due to characteristics such as race/ethnicity, religion, ideology, sexual orientation, or gender identity, the internet can serve as a lifeline that allows them to express their views and build supportive communities. Therefore, efforts to regulate online content should be evaluated for their impact across different communities.

 

1.3              As a general rule, we believe that the boundaries between legal and illegal speech should be the same online and offline.[2] If a person is legally permitted to express a view offline, they should not be categorically prohibited from expressing that same view online. However, that does not mean that all online platforms need to welcome all legal speech, particularly speech that is potentially harmful. As we discuss later in this comment, platforms should be encouraged to develop thoughtful content moderation policies that are appropriate to their size, nature, purpose, and audience. Transparent content moderation policies and consistent enforcement of such policies by online platforms can do much to help mitigate online harms while still preserving the freedom of expression that is fundamentally necessary for open democratic societies.

 

2.                How should good digital citizenship be promoted? How can education help?

 

2.1              Digital literacy and skills education can play a critical role in mitigating the risks of online harms and maintaining a healthy and open internet community. While content moderation systems are crucial, it is also important that government, industry and civil society nurture conversations and educational programmes focused on rights and responsibilities online. Media literacy and digital citizenship education should help people make the most of online opportunities while empowering them to manage content and communications and to protect themselves and their families from potential risks. For example, our two PSHE-accredited programmes, Be Internet Legends and Be Internet Citizens, aim to equip young people with media literacy and digital citizenship skills so they can experience the internet in a safe and positive way.

 

2.2              In 2017, Google and YouTube launched Be Internet Citizens (BIC) in partnership with the Institute for Strategic Dialogue (ISD), delivering in-school workshops and practitioner training across the UK. The programme empowers young people to combat online harms, enabling them to become accountable and conscientious digital leaders. In just three years the programme has reached an estimated 55,000 teenagers and 650 educators across England, Scotland and Wales. Additionally, over 1,000 teachers and youth workers have downloaded BIC resources for free, including a five-module curriculum with activity plans and facilitator guidance. Based on feedback, we know that 84% of young people who have been through the programme feel confident that they would know what to do if they encountered hate speech online.

 

2.3              Be Internet Legends is Google and Parent Zone’s online safety programme, supporting children, families and primary schools across the country. The only PSHE-accredited online safety programme for 7-11 year olds in the UK, Be Internet Legends has reached over 71% of primary schools in the UK, trained more than 300,000 children face to face and supported over two million families with free resources since launching in March 2018. Since the COVID-19 outbreak, we’ve delivered hundreds of free virtual Be Internet Legends assemblies at primary schools across the UK, which have been hugely popular with children, teachers and parents taking part in school and at home.

 

2.4              Google.org, our philanthropic arm, has also invested steadily in media literacy. As part of a $10m commitment to supporting media literacy initiatives around the world, Google.org has granted funds to UK projects including: 1) Newswise,[3] The Guardian Foundation’s media literacy programme for primary school children around the UK; and 2) The Student View,[4] an NGO setting up newsrooms in senior schools for pupils from disadvantaged backgrounds, teaching them how to be critical media consumers by learning about journalism.

 

2.5              The government should promote a wide-ranging approach to education on digital citizenship (including literacy and civility), working in partnership with industry. All citizens should be educated about online risks, ensuring they are much better informed about the privacy and safety tools at their disposal and able to make their own choices. Internet companies have launched a range of digital civility and digital literacy programmes in the past five years and these have an important role to play. But public institutions also have a key role to play in establishing norms, for example by providing online citizenship education through PSHE lessons and by working with DfE and Ofsted to ensure that online safety lessons are prioritised in schools.

 

3.                Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

3.1              Today, online user-generated content is covered by a myriad of different laws and we regularly remove user-generated content for legal reasons. However, we note that many of these laws precede the digital age and we welcome efforts, such as those by the Law Commission regarding online communications, to ensure that the law is clear and effective for tackling online harm.

 

For online platforms, the UK’s Electronic Commerce (EC Directive) Regulations 2002 provide a clear and solid foundation for their intermediary responsibility. This framework and its protections have helped to protect the free flow of information online and given consumers, citizens, institutions and businesses more choice, power and opportunity.

3.2              We do not believe that the Online Harms framework should blur the important distinction between illegal and lawful-but-harmful content. Free expression is a vital component of free and open societies, and the European Court of Human Rights has confirmed that freedom of expression includes the right to “offend, shock or disturb.” If a government believes that a category of content is sufficiently harmful, the government may make that content illegal directly, through transparent, democratic processes, in a clear and proportionate manner. However, governments should avoid restricting the ambiguous[5] category of lawful-but-harmful content, as such restrictions would chill free expression and create significant legal risks for online users and service providers.

 

3.3              Although we believe that regulation of user-generated content (UGC) should remain focused on and limited to clearly defined categories of illegal content, governments and platforms can take certain steps to mitigate the concerns posed by lawful-but-harmful content. As we discuss later in this comment, one approach to combating those risks would be to require online platforms to post clearly defined content policies and to monitor platforms’ enforcement of such policies. Transparent policies and consistent enforcement of rules can help limit online harms while preserving the fundamental openness of the internet.

 

4.                Should online platforms be under a legal duty to protect freedom of expression?

 

4.1              As conduits for the exchange of information and ideas, online platforms have an important role to play in protecting freedom of expression. We agree that as the UK Government moves forward with the Online Harms regulations, it has to protect free expression online. To best achieve this, the Government should ensure that Ofcom, the future online harms regulator, has a balanced set of duties that include both protecting users from illegal or demonstrably harmful content, and protecting freedom of expression. It is also crucial that the regulatory framework and the regulator provide legal clarity for platforms; focus on systemic approaches to the relevant issues; and focus on transparency and best practice (we detail these points in the next section).

 

4.2              Another important step to protecting free speech is ensuring the right legal framing of the ‘duty of care’ proposed for platforms. The well-established legal concept of a “duty of care” has developed incrementally over hundreds of years, and is largely associated with establishing liability in negligence. Therefore, there is a significant risk that the use of that term in this context will create the misconception that platforms face tortious liability and that users may avail themselves of remedies for breach of a duty “owed” to them rather than to the regulator, as intended by Government. As we discuss elsewhere in this document, any legal regulations for content should be precise and limited to addressing clearly illegal content and platforms’ enforcement of transparent content moderation policies. Otherwise, uncertainty arising from imprecise legal definitions is likely to harm freedom of expression interests, as platforms may be incentivised to significantly limit the content that they permit in order to limit their exposure to legal liability.

 

5.                What model of legal liability for content is most appropriate for online platforms?

 

5.1              Any system for holding online platforms legally liable for content available on their platforms should be limited to clearly defined standards for illegal content, be knowledge-based, and recognise the relevant differences between services.

 

5.2              We believe the UK’s Electronic Commerce (EC Directive) Regulations 2002 continue to provide this clarity. They set out the conditions under which information society services of different types can be liable for third-party content. For example, Regulation 19 provides that a hosting service provider is not liable for UGC where it does not have actual knowledge of unlawful activity or, upon obtaining such knowledge, it acts expeditiously to remove or disable access to the information.

 

5.3              A knowledge requirement is necessary to protect freedom of expression. If online platforms are held liable for illegal content they are unaware of, they will likely be pressured to limit the content available on their platform by pre-vetting content before it is posted and/or systematically monitoring and taking down content on an ongoing basis. Both pre-vetting and ongoing monitoring would lean heavily on automated systems, as fully human review-based approaches would be infeasible for all but the smallest platforms. Automated systems are not perfect and can inadvertently discriminate against certain types of content or authors. Additionally, out of fear of legal liability, many platforms would likely calibrate their pre-vetting and ongoing monitoring systems to lean towards blocking/removal in borderline cases. Therefore, the widespread implementation of such systems could result in a drastic decline in the amount of valuable content that is available on platforms.

 

5.4              Although platform operators should promptly remove material that meets clearly defined standards for illegal content when they are made aware of it, the law should not impose unreasonably rigid timelines for such removals. For example, we regularly receive overly broad removal requests,[6] and analyses of cease-and-desist and takedown letters have found that many seek to remove potentially legitimate or protected speech. Rigid removal deadlines, especially when backed by a harsh penalties regime, create a risk that platforms will overblock legitimate speech. Users will see their lawful content removed, and potentially lose their access to services, because platforms will have no choice but to take a sweeping approach in order to meet the scale of the challenge. The standard contained in the UK’s Electronic Commerce (EC Directive) Regulations 2002, to “act expeditiously”, helps avoid these risks, striking the appropriate balance between user safety and free expression.

 

5.5              Additionally, regulations that impose liability on platforms for illegal content should include Good Samaritan protections that incentivise platforms to proactively seek out and remove such content. These already exist in countries with a thriving digital economy and have recently been proposed by the European Commission as part of its Digital Services Act package. Without such protections, a platform that makes a good-faith decision not to remove content that has been flagged as potentially illegal could face legal liability for failing to remove that content if a court or regulator later determines that such content is illegal. This risk creates an incentive for companies either to refrain from taking reasonable proactive moderation measures, or to over-remove valuable content in the course of moderating. Good Samaritan protections would allow platforms to seek out and remove harmful content without increasing their risk of legal liability, thereby rewarding companies for doing the right thing and removing illegal content unprompted and at scale.

 

5.6              While we believe that the law should remain focused on clearly defined standards for illegal content, we understand that there is a need to respond to legitimate concerns about the risks posed by lawful-but-harmful content. To the extent that new regulations are developed to address such content, we believe they should focus on platforms’ transparency regarding their content moderation policies and their adherence to their stated policies. Additionally, liability for failure to adhere to stated content policies should focus on a platform’s “systemic failure” to comply with its obligations as opposed to failure to remove individual pieces of allegedly harmful content. The standard for “systemic failure” should be clearly defined and take into account the scale and complexity at which platforms operate, their overall success rate at addressing problematic content, the risks to legitimate speech from precipitous action, and the need to take the time to orient to and understand novel issues as they arise. Regulators’ primary means of identifying systemic failures should be transparency reports produced by platforms as well as evidence-based research conducted by the regulator.

 

5.7              Lastly, it is critical that any regulations for online content account for the important differences between platforms. For example, web search engines play a uniquely important role in facilitating access to the internet, enabling people to access, impart, and disseminate lawful information. Placing restrictions on the types of content that can be accessed through web search engines would run counter to the ideals of an open and democratic society by interfering with people’s ability to access and hear different views, as well as share their own. Therefore, web search engines’ legal liability with respect to content should remain narrowly focused on compliance with valid requests for the restriction of clearly illegal content and transparency about their content policies. We also note that the Government intends private communications to be subject to the new regulation. We strongly believe that the right to privacy for users on private channels should be respected, and agree that monitoring requirements would be inappropriate (and technologically unfeasible in most cases).

 

6.                To what extent should users be allowed anonymity online?

 

6.1              We believe users should be allowed anonymity online. In addition to being a challenge to enforce, a real identity requirement would chill free expression, endanger vulnerable groups, and increase risks to privacy.

 

6.2              The internet can serve as a powerful organising and community-building tool for members of minority groups who may face persecution in their local community for reasons such as their religion, sexual orientation, or gender identity. The obligation to use one’s real name online would likely chill participation in such communities and potentially expose members of such groups to an increased risk of harm.

 

6.3              Additionally, online support forums have become a lifeline for many seeking specialist or peer-to-peer support on issues ranging from domestic violence to mental health. In some cases, anonymity is vital to protect the privacy and safety of persons seeking such support.

 

6.4              It is also important to reiterate that we enforce our community guidelines and remove infringing content irrespective of whether the user’s identity is known. We also believe that the current legal framework allows for cooperation between online platforms and law enforcement agencies looking to investigate online crimes. Platforms like ours already work through legal procedures, including the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which enable more robust and faster cooperation with law enforcement.

 

6.5              Lastly, it is not clear how an online real identity mandate would be enforced. The internet can be accessed via a wide range of devices, and numerous tools exist to help people alter or obfuscate their identity. The effective implementation of a real online identity mandate would require the enactment and enforcement of far-reaching laws that would place the UK drastically out of step with other democratic countries that allow open access to the internet.

 

7.                How can technology be used to help protect the freedom of expression?

 

7.1              Automated technology can play an important role in protecting freedom of expression by helping platforms identify and manage illegal content and lawful-but-harmful content that might otherwise deter the open and safe exchange of ideas on online platforms. Platforms should be encouraged to use such tools where feasible. However, they must also be aware of and take steps to mitigate the risks associated with using automated technology.

 

7.2              We use machine learning (ML) to train automated scanning tools that help us detect illegal child sexual abuse material (CSAM). This abhorrent content has no place on our services and we take a number of voluntary proactive steps to detect and remove it. While automated scanning tools are critical in our fight against illegal content, it is important to note that they are not perfect and that the broad deployment of automated tools to scan and manage content carries risks.

 

7.3              We observed[7] this firsthand when reckoning with greatly reduced human review capacity due to COVID-19. We were forced to make a choice between potential under-enforcement or potential over-enforcement. Because responsibility is our top priority, we chose to use technology to help with some of the work normally done by reviewers. The result was an increase in the number of videos removed from YouTube; we doubled the number of videos removed in the second quarter of 2020 compared to the first. We prepared for more appeals and dedicated extra resources to make sure they were quickly reviewed. Though the number of appeals remains a small fraction of total removals — less than 3% of video removals — we saw both the number of appeals and the reinstatement rate double in this period.

 

7.4              Algorithms are not infallible, and there is extensive research highlighting the ways in which artificial intelligence could entrench existing inequalities around race, gender and sexuality, among other areas. We seek to mitigate this sort of risk by, among other things, regularly reviewing our machine learning systems to reduce the risk of unintended algorithmic bias and adding a level of skilled human review before removing content in some circumstances. We have developed new tools and techniques to test our machine learning systems for unintended bias, including a What-If Tool that empowers developers to visualise biases, Fairness Indicators to check ML model performance against defined fairness metrics, and an ML Fairness Gym for building model simulations that explore the potential long-run impacts of ML-based decision systems in social environments. We correct mistakes when we find them and retrain the systems to be more accurate in the future.
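
To illustrate the kind of fairness-metric check described above, the following is a minimal, hypothetical sketch (not Google’s production tooling; the group labels, data and function name are assumptions for illustration only). It compares a content classifier’s false positive rate across user groups, the sort of gap that checks built on tools such as Fairness Indicators are designed to surface.

```python
# Illustrative sketch only: a per-group false positive rate check of the kind a
# fairness-metric evaluation might include. Groups, data and thresholds are
# hypothetical; this does not represent Google's internal systems.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, label, prediction) tuples, where label and
    prediction are booleans (True = flagged as violating a content policy)."""
    fp = defaultdict(int)  # benign items incorrectly flagged as violating
    tn = defaultdict(int)  # benign items correctly left up
    for group, label, pred in records:
        if not label:                      # only benign items contribute to FPR
            if pred:
                fp[group] += 1
            else:
                tn[group] += 1
    groups = set(fp) | set(tn)
    return {g: fp[g] / (fp[g] + tn[g]) for g in groups if fp[g] + tn[g]}

# Hypothetical example comparing two (made-up) creator groups.
sample = [
    ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
]
print(false_positive_rate_by_group(sample))
# e.g. {'group_a': 0.5, 'group_b': 0.0}: a large gap between groups would prompt
# further review of the model and possible retraining.
```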

 

7.5              Due to the vast and ever-increasing amount of data that is available online, it will be necessary to use automated tools to help manage content on online platforms. Government regulations for online platforms should enable innovative uses of such technologies, but not mandate them, encouraging platforms to mitigate the risks associated with these evolving technologies.

 

8.                How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

8.1              We believe that transparency and empowering users must be central to any effective approach to addressing the spread of harmful content while also protecting freedom of expression. However, it is important to keep in mind that the appropriate design choices will vary depending on the nature, purpose, and audience of a platform, among other factors. As a result, platform operators, and not the government, should be responsible for making the design choices that are best suited to mitigating the risk of online harm on their platforms.

 

8.2              For example, on YouTube, we have worked to develop thoughtful policies that allow us to restrict harmful content while also respecting diverse and marginalised voices. We empower users to flag problematic content and have developed a range of responses to such challenges, including removing violative content, raising up authoritative content, reducing the spread of borderline content, and rewarding trusted creators. On YouTube Kids, we have implemented a different set of design choices to account for that platform’s different purpose and audience. That platform provides a restricted version of YouTube for families, with content appropriate for kids, built-in timers for use, no public comments, and a parent-approved content mode.

 

8.3              In addition to developing approaches for our own platforms to combat harmful content, we are also engaged in ongoing and evolving efforts to advance cross-industry transparency through self- and co-regulatory initiatives, including the Code of Practice on Disinformation and the Code of Conduct on Illegal Hate Speech.

 

9.                How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

9.1              We believe that online platforms should strive to be transparent about their use of algorithms to manage content. However, the benefits of transparency must be balanced with the need to ensure that bad actors do not game a platform’s systems through manipulation, spam, fraud and other forms of abuse.

 

9.2              Google has taken a number of steps to be transparent with our users. For example, our How Search Works site provides extensive information about how we improve search quality and our approach to algorithmic ranking, including publication of our Search Quality Rater Guidelines, which define our goals for Search algorithms. We also work hard to inform website owners in advance of significant, actionable changes to our Search algorithms and provide extensive tools and tips to empower webmasters to manage their Search presence, including interactive websites, videos, starter guides, frequent blog posts, user forums and live expert support. We also launched a How YouTube Works site that allows users to learn about the recommendation systems that help them discover new content. We have also given YouTube users more control over the content they are recommended: they can remove suggestions from channels they don’t want to watch and learn more about why a video was suggested.

 

9.3              Although transparency should be a core value for online platforms, it must be balanced against other concerns. For example, algorithmic transparency rules that require the disclosure of raw code and data would raise a number of risks, including the possible disclosure of commercially sensitive information and undermining of efforts to keep users safe and protect the integrity of platforms. Exposing the code of algorithms, even if just to a small group in a controlled setting, magnifies security risks, such as hacking and fraud, through gaming the system. Therefore, there is a real risk that poorly designed transparency rules could end up harming consumers and citizens more than helping them, making systems less safe and harder to protect.

 

9.4              While we believe that online platform operators should strive to be transparent with users about their use of algorithms to manage content, any legal requirements for algorithmic transparency should be developed in consultation with companies and experts, to ensure that such measures are effective, lawful, respectful of privacy, and do not compromise commercially sensitive information or risk opening up algorithms to abuse.

 

10.           How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

10.1              We strive to make platform rules and procedures as transparent as possible and to offer users opportunities to appeal decisions that are made regarding their content. On YouTube, we have a process for creators to appeal enforcement actions. For example, if a creator chooses to submit an appeal about a video removed under our Community Guidelines, it goes to human review, and the creator receives a follow-up email with the reviewer’s decision. We also publish the YouTube Community Guidelines Enforcement Report to show the progress we are making in removing violative content from our platforms. The report includes public aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies and community guidelines, including data broken down by country.

 

10.2              To preserve freedom of expression, regulators should not seek to set specific requirements for online platforms’ moderation of lawful content. Given the differences between platforms (e.g., in scale, purpose, and audience), it would be infeasible for the government to set broadly applicable rules about the moderation of specific types of content. To the extent that the government believes it is necessary to implement regulations for lawful content moderation, such regulations should focus on platforms’ transparency regarding their content moderation policies and their adherence to their stated content policies. For example, a code could require in-scope platforms to provide: (i) clear and accessible acceptable use policies appropriate for the specific service; (ii) clear and accessible processes for reporting service misuse; and (iii) effective processes for dealing with such reports. This approach has a proven track record: it is similar to the approach taken by the UK Government in its Social Media Code of Practice, which distinguished core expectations from best practice guidance. We also note that in the course of producing that Code, the Government received feedback that a high-level approach is more likely to be effective than requirements that are overly detailed and prescriptive.

 

11.           To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?

 

11.1              We believe that a regulatory environment that enables platforms to grow and compete in the open market is very important to protecting free expression online, as it allows users to navigate between different services and actively choose those that ensure the best functionality and safety protections. As the UK Government looks to introduce new regulations regarding both online competition and content moderation, it is crucial that these align with a coherent digital strategy that aims to protect and enhance online citizenship.

 

11.2              Content regulation, such as the Online Harms proposals, needs to ensure that any new requirements do not negatively impact competition and the ability of new platforms to emerge and grow quickly. Research from Coadec found that “86% of UK investors say that regulation aiming to tackle big tech could lead to poor outcomes that damage tech startups and limit competition — these plans risk being a confusing minefield that will have a disproportionate impact on competitors and benefit big companies with the resources to comply.” This is why it is important that any new Online Harms requirements are based on a proportionate and risk-based approach. As explained earlier in this document, platforms that do not host content need different types of requirements, while services with a low level of risk (for example, online maps that host reviews) should not be burdened by the same requirements as riskier types of services.

 

11.3              DCMS’s upcoming Digital Strategy is an opportunity to ensure a coherent approach to digital regulation and we look forward to seeing the Government's proposals in this space.

 

12.           Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

 

12.1              The open nature of the internet makes online content regulation an inherently global issue. While the approaches to this issue will need to account for legal and cultural differences amongst different countries, we believe that governments and online platforms can learn much from collaborating with each other on this issue.

 

12.2              Currently, Google participates in a number of international collaborative efforts to address the complex issues of regulating online content and protecting freedom of expression. For example, we are subject to independent assessments by the Global Network Initiative (GNI). Companies participating in the GNI are independently assessed periodically on their progress in implementing the GNI Principles, which are rooted in the rule of law and internationally recognised laws and standards for human rights. Independent assessments are conducted by assessors accredited by the GNI Board as meeting independence and competency criteria. In the latest assessment period — the GNI’s third assessment of Google — the GNI Board determined that our company is making good-faith efforts to implement the GNI Principles on Freedom of Expression and Privacy with improvement over time.

 

12.3              Google is also a founding member of the Global Internet Forum to Counter Terrorism (GIFCT), a multi-stakeholder initiative developed by the tech industry in collaboration with governments and non-governmental organisations. The GIFCT was established in June 2017 to curb the spread of terrorist content online by substantially disrupting terrorists’ ability to promote terrorism and exploit or glorify real-world acts of violence using online platforms. Building on the work started within the EU Internet Forum and the shared industry hash database, the GIFCT is fostering collaboration with smaller tech companies, civil society groups and academics, and governments.

 

12.4              Additionally, YouTube was one of the original signatories of the EU Hate Speech Code of Conduct. We also provide training for our staff and for NGOs that participate in the Code. These NGOs are also enrolled as part of our Trusted Flagger programme. YouTube has taken part in four monitoring rounds to assess our performance under the Code, which involve NGOs from the UK and across the EU testing our systems and responses to illegal hate speech. Industry as a whole has made vast improvements since the Code was implemented, with participating companies now assessing 89% of flagged content within 24 hours, compared to 40% in 2016. The European Commission considers the Code’s self-regulation approach to be a success in improving the way platforms tackle hate speech.

 

Conclusion

 

1.              Google thanks the House of Lords Communications and Digital Committee for creating this opportunity for a thoughtful conversation about the important and nuanced issue of freedom of expression online. Dialogues such as this are needed for government, platform operators, and civil society to fulfil their shared responsibility to preserve the open internet and to counter illegal and harmful content.

 

2.              We believe that it is imperative that efforts to regulate online content consider the many different societal interests that are at play, such as freedom of expression, the rights of persons who may be targeted by harmful content online, and the many economic and cultural benefits that an open internet has brought to the United Kingdom and the world. As we have discussed in this comment, clear rules for illegal content and carefully crafted incentives for greater platform transparency and accountability regarding their treatment of lawful-but-harmful content are one way to strike this balance. We look forward to hearing from other stakeholders on these important issues and continuing this conversation.

 

 

15 January 2021



[1]              https://blog.youtube/inside-youtube/look-how-we-treat-educational-documentary-scientific-and-artistic-content-youtube/

[2]              This view has been supported by a number of international bodies. For example, the UN Human Rights Council (HRC) has twice adopted consensus resolutions affirming that “the same human rights that people have offline must be protected online”. See, A/HRC/38/L.10/Rev.1 (2018), A/HRC/32/L.20 (2016).

[3]              https://www.theguardian.com/newswise

[4]              https://www.thestudentview.org/

[5]              Examples of content that may fall within the lawful-but-harmful category are said to range from “trolling” to “intimidation” to “fake news.” This category of content inherently lacks clear definition and is highly subjective, with the level of potential harm varying depending on the context, location, timing, specific circumstances of the individual user, and other factors. For example, some critical comments on content posted by politicians could clearly be defined as harmful “trolling,” but other such comments are likely to fall within the user’s right to free expression. Decisions on whether a comment is harmful “trolling” are often subjective.

[6]              We have encountered, for example, a reporting organisation working on behalf of a major movie studio that requested removal of a movie review on a major newspaper website; a driving school that requested the removal of a competitor's homepage from search, on the grounds that the competitor had copied an alphabetised list of cities and regions where instruction was offered; an individual who requested the removal of search results that linked to court proceedings referencing her first and last name on the ground that her name was copyrightable; and a fashion company that sought removal of ads promoting authentic pre-owned handbags on trade mark grounds.

[7]              https://blog.youtube/inside-youtube/responsible-policy-enforcement-during-covid-19