House of Lords Communications and Digital Select Committee inquiry: Media literacy
Meta welcomes the opportunity to contribute to the Lords’ Communications and Digital Committee inquiry into media literacy.
At Meta, we understand the critical role that digital and media literacy play in empowering individuals, particularly young people, to navigate the internet safely. We are committed to ensuring that our platforms provide high-quality, meaningful information, and we recognise the shared responsibility of government, industry, and regulators in advancing media literacy.
How Meta Supports Media Literacy
Meta employs a multifaceted strategy to support media literacy. Our approach includes: implementing policies, including those related to misinformation, AI labelling and advertising; offering a range of products and features that support media literacy; collaborating with organisations and experts on literacy initiatives; and providing resources and tools to help users develop media literacy skills.
1. Policies
Meta's Community Standards are designed to prevent exposure to harmful content and are informed by expert advice in technology, public safety, and human rights. When applying these policies, we seek to balance safety with allowing users to express themselves on topics that matter to them.
With around 40,000 people working on safety and security issues globally at Meta, and over $30 billion invested in teams and technology in this area over the last decade, we are committed to addressing safety effectively.
Three policies of particular relevance to this inquiry are our rules on misinformation, AI labelling and advertising.
Approach to Misinformation
A key component of our media literacy efforts is our policies on addressing misinformation. Our approach includes removing content that violates our Community Standards, reducing the spread of false information, and providing more context to users about the content they encounter.
We remove misinformation that violates our Community Standards or Advertising Standards. This includes misinformation that could lead to imminent physical harm, and misinformation about the election process that could prevent people from voting, such as false claims about dates, locations, times, and methods for voting or voter registration.
Recognising that misinformation can be shared even in good faith, we strive to balance enabling free expression with maintaining an authentic environment. To tackle this challenge, we have established a network of nearly 100 independent fact-checkers who review content in over 60 languages. In the UK, this includes partners like Full Fact, Reuters, and Fact Check Northern Ireland. All our third-party fact-checking partners are certified by the non-partisan International Fact-Checking Network (IFCN).
However, we received feedback in the US that fact-checking was not as effective as we had hoped and was seen as biased. As a result, starting in the United States, we have ended our third-party fact-checking program and are piloting a Community Notes model. This initiative empowers users from diverse perspectives to write and submit Notes on misleading or confusing content to provide additional context. There is evidence to suggest that Community Notes can be more trusted than traditional fact-checkers and can be of high quality: for example, a study of Covid-related Notes on X in 2022-23 found 97% accuracy, as judged by medical professionals. A study by Cornell University also found that Notes on inaccurate tweets reduce retweets by half and increase the probability that the original author deletes the tweet by 80%. Lastly, we expect the Community Notes model to offer a more scalable solution than the previous third-party fact-checking model.
We are watching how Community Notes evolve in the US. There is no immediate plan to end the third-party fact-checking program and roll out Community Notes in the UK at this stage. We will carefully consider our legal obligations in the UK, including under the Online Safety Act, in making any further changes. Regardless, our Community Standards remain in place and we continue to remove misinformation that violates these policies.
AI Labelling Policies
Another way we aim to improve media literacy is by ensuring that AI-generated content is clearly labelled as such. We have implemented several measures to make the origin of AI-generated content transparent to the people who use our platforms.
When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including applying visible markers on the images, as well as invisible watermarks and metadata embedded within the image files.
However, since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). We’re building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, as they implement their plans for adding metadata to images created by their tools.
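To illustrate the kind of signal these standards carry, the sketch below shows a simplified check for the IPTC "digital source type" value that participating tools embed in image metadata. It is an illustrative example only, not Meta's detection tooling: the file path is hypothetical, and the check assumes the image's XMP metadata packet has not been stripped.

# Minimal sketch: check an image's embedded XMP packet for the IPTC
# "DigitalSourceType" value that signals AI-generated content.
# Illustrative only, not Meta's production tooling.
import re

# IPTC's controlled-vocabulary term for content created by generative AI
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's XMP metadata declares an AI source type."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP packets are embedded as plain XML inside many image formats
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if not match:
        return False  # no XMP packet found; metadata may have been stripped
    return AI_SOURCE_TYPE.encode() in match.group(0)

print(looks_ai_generated("example.jpg"))  # hypothetical file

Because metadata like this can be removed when a file is re-saved or re-shared, detection at scale cannot rely on it alone, which is why invisible watermarking also matters.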
If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so people have more information and context.
We’re also in the process of developing classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks. For example, Meta’s AI Research lab FAIR recently shared research on an invisible watermarking technology we’re developing called Stable Signature.
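The toy example below illustrates the general principle behind invisible watermarking: a bit pattern is embedded imperceptibly in pixel data and can later be detected. It is deliberately simplistic and is not Stable Signature, which embeds the watermark during the image-generation process itself and is designed to survive editing; a least-significant-bit scheme like this one would not.

# Toy illustration of invisible watermarking: hide a short bit pattern in
# the least significant bits of pixel values, then detect it later.
# NOT Stable Signature; this only illustrates the general principle.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical mark

def embed(pixels: np.ndarray) -> np.ndarray:
    flat = pixels.flatten()
    # Overwrite the least significant bit of the first few pixels
    flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    flat = pixels.flatten()
    return np.array_equal(flat[: len(WATERMARK)] & 1, WATERMARK)

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(image)
print(detect(marked), detect(image))  # True, (almost certainly) False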
We also believe it is important to adopt a similar approach to political advertising that uses generative AI. That is why advertisers who run ads related to social issues, elections or politics with Meta also have to disclose if they use a photorealistic image or video, or realistic-sounding audio, that has been created or altered digitally, including with AI, to:
● Depict a real person as saying or doing something they did not say or do
● Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened
● Depict a realistic event that allegedly occurred, but that is not a true image, video or audio recording of the event
Advertising Policies
We have policies to ensure users are aware when they are viewing sponsored content from brands and advertisers. Our policies require creators and publishers to tag business partners in their branded content posts when there's an exchange of value between a creator or publisher and a business partner. Creators cannot accept anything of value to post content that does not feature themselves or that they were not involved in creating.
Over the past few years we have also introduced a number of policies that provide more information about political ads on Facebook and Instagram (in addition to the AI labelling policies outlined above). Political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, such as billboards, newspaper ads, post, leaflets or targeted emails. Advertisers who run these ads are required to complete an authorisation process, proving who they are and where they are located, and to include a visible "paid for by" disclaimer. We archive political ads in an Ad Library and Ad Library Report for seven years so people can find more details about an ad's demographic reach and spend range, as well as aggregated page-level spend for political ads.
These policies are designed to enhance users' understanding of the content they encounter on our platforms, promoting transparency and informed engagement.
2. Products
Meta offers a range of products and features that support media literacy and help users navigate the online world safely.
Bringing greater transparency and improving public awareness about the content people are seeing is a priority. That is why we offer a range of transparency features on Facebook that allow users to see why they are shown particular content, and why we have built tools to help people customise what they see. These controls include:
● Content preferences: Provides options to fine-tune the way content is ranked in your Feed, including the ability to prioritise posts from your Favorites; snooze or unfollow people, Pages, and groups to stop seeing their posts; and reconnect with anyone you may have unfollowed.
● Show more and show less: Lets you directly tell us what you want to see more or less of by selecting "Show more" or "Show less" on the posts you see. Selecting "Show more" will temporarily increase the ranking score for that post and posts like it, while selecting "Show less" will temporarily decrease its ranking score (a simplified sketch of this temporary adjustment follows this list).
● Manage defaults: Allows users to adjust the degree to which we reduce sensitive or political content in their feed (our Content Distribution Guidelines outline some of the most significant reasons why problematic content may receive reduced distribution in Feed).
● Feeds tab: Allows you to see the newest posts first; content is sorted in reverse chronological order (alongside ads).
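As a purely hypothetical illustration of the "Show more" and "Show less" behaviour described above, the sketch below models a temporary ranking adjustment that expires after a fixed window. The multipliers, window length and scoring are illustrative assumptions, not Meta's actual ranking system.

# Hypothetical sketch of a temporary "Show more" / "Show less" ranking
# adjustment. All values here are illustrative assumptions.
import time

ADJUSTMENT_WINDOW = 60 * 60 * 24  # assume the adjustment lasts one day

class RankedPost:
    def __init__(self, base_score: float):
        self.base_score = base_score
        self.multiplier = 1.0
        self.adjusted_at = None

    def show_more(self):
        self.multiplier, self.adjusted_at = 1.5, time.time()  # temporary boost

    def show_less(self):
        self.multiplier, self.adjusted_at = 0.5, time.time()  # temporary demotion

    def score(self) -> float:
        # The adjustment expires after the window, restoring the base score
        if self.adjusted_at and time.time() - self.adjusted_at > ADJUSTMENT_WINDOW:
            self.multiplier = 1.0
        return self.base_score * self.multiplier

post = RankedPost(base_score=10.0)
post.show_less()
print(post.score())  # 5.0 while the temporary demotion is active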
By prioritising transparency and user education, Meta is committed to enhancing media literacy and empowering users to navigate the digital landscape safely and responsibly.
We want to provide people with the ability to control their online experiences, including allowing them to block or mute other users and to disable comments across Facebook and Instagram.
We show a warning when someone tries to post a potentially offensive comment, reminding them of our Community Standards and warning them that we may remove or hide their comment if they proceed. We have found that these warnings meaningfully discourage people from posting something hurtful. For example, over a one-week period, we showed warnings about a million times per day on average to people making potentially offensive comments; in about 50% of cases, the user edited or deleted the comment in response.
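A purely hypothetical sketch of this warning flow is below: a draft comment is scored, a warning is shown above a threshold, and the user can then edit, delete, or post anyway. The placeholder word-list classifier and threshold are illustrative assumptions; production systems use trained models.

# Hypothetical sketch of a pre-posting comment warning flow.
# The classifier and threshold are illustrative placeholders only.
WARN_THRESHOLD = 0.3  # assumed score above which a warning is shown

def offensiveness_score(text: str) -> float:
    """Placeholder classifier; a real system would use a trained model."""
    hostile_words = {"stupid", "idiot", "hate"}
    words = text.lower().split()
    return sum(w in hostile_words for w in words) / max(len(words), 1)

def submit_comment(text: str, confirm) -> str | None:
    if offensiveness_score(text) >= WARN_THRESHOLD:
        # Warning shown: the user chooses to withdraw (None) or post anyway
        if not confirm("This may violate our Community Standards. Post anyway?"):
            return None
    return text  # comment posted

posted = submit_comment("you are stupid", confirm=lambda msg: False)
print(posted)  # None: the user withdrew the comment after the warning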
Another example for our younger users specifically is Instagram Teen Accounts, which provide a safer and more private experience for teens on Instagram. Teens under 16 are automatically placed into Teen Accounts, which apply more restrictive default settings that limit who can contact them and the content they see. They cannot change these settings to be less strict without parental permission.
We strive to enhance digital literacy for both teenagers and their parents with Teen Accounts. To achieve this, we've implemented several features: when enrolled, teens can access information within their teen safety settings page to learn about Teen Accounts and the role of supervision in their experience. We also send notifications to parents when their teen requests supervision, explaining how Instagram already protects their child by default through Teen Accounts, and outlining the benefits of supervision. Furthermore, parents can access additional resources from within the supervision dashboard to deepen their understanding of Teen Account default protections and the role of supervision.
This launch came about from our research and close consultation with academics, parents, teens, and other stakeholders. This included guidance from Meta’s Youth Advisory Council and Safety Advisory Council, as well as our ongoing co-design work and design jams.
3. Campaigns and partnerships
Meta has also actively conducted campaigns to promote media literacy among both adults and young people on a broad range of important topics. We work with internal and external experts to take a holistic approach.
UK General Election
A recent example is our campaign during the UK General Election last year, where we launched an advertising initiative on Facebook and Instagram to enhance awareness of our election integrity tools and features available across Meta’s apps. The campaign included information about our transparency tools for ads about social issues, elections, or politics, our third-party fact-checking program to help combat the spread of misinformation on Facebook and Instagram, and our tools to help stop the spread of misinformation on WhatsApp. We also highlighted how we are approaching new technologies like Generative AI and our efforts to protect the accounts of candidates.
Fraud, Scams and Financial Literacy
As well as supporting the government's own Stop! Think Fraud campaign through the provision of ad credits, we have also launched a number of UK-specific campaigns to tackle scams and fraud. Fraud is a particularly adversarial harm type, designed to evade the protections we put in place on our platforms and to be difficult for its intended victim to spot. Because of this, we have long invested in user education campaigns, including dedicated campaigns with Citizens Advice, Trading Standards, Media Smart and UK Finance designed to increase awareness of the most common types of fraud.
Just this week, we are sharing tips and product tools across Facebook, Messenger, WhatsApp, and Instagram to help people avoid online investment and payment scams and to improve financial literacy.
● On Messenger, we may display warnings when there are requests or offers to pay in advance of shipping an item, requests to pay using instant payment methods or signals that an account is engaged in suspicious activity.
● On Facebook, Instagram, and WhatsApp, you can complete a privacy check-up, and update your settings to choose who can contact you and who can see your personal information, like online status, profile photo, and activity. This helps avoid unwanted contact from people you don’t know who may be scammers.
Youth Literacy Programmes
Meta also collaborates with external organisations to advance media literacy for young people. For instance, Meta is a long-term supporter of Media Smart, the advertising industry's award-winning education non-profit with a mission to ensure that every child in the UK, aged 7 to 16, can confidently navigate the media they consume, including being able to identify, interpret, and critically evaluate all forms of advertising. The Media Smart programme provides free teaching resources for schools, parents, and directly to young people, on subjects like social media, digital advertising, body image, influencer marketing, creative careers, piracy, and managing your online advertising experience. The skills-based programme is designed to develop the key life skills of resilience, empathy, critical thinking, communication, and creativity.
Internationally, we have also partnered with UNESCO to launch a media literacy campaign in Bosnia, empowering young users with critical thinking skills to be resilient to online harmful content. Additionally, our partnership with Coursera and Certiport provides scholarships and access to digital literacy curricula, furthering our efforts in digital literacy education.
4. Educational Resources
Finally, we offer a range of educational content and tools, such as online safety guides, to help users develop media literacy skills. We have a Digital Literacy Library, which is a collection of interactive lesson plans, designed by experts to help young people develop the skills needed to navigate the digital world, critically consume information and responsibly produce and share content.
We have worked to equip teachers, parents and children in the UK with better digital skills and to improve their digital literacy. For example, we have specific resources for teens in our Youth Safety Center covering topics such as dealing with bullying, how to handle intimate images shared without your permission, and understanding privacy.
For parents, we have the Family Center, which has received a significant update around Teen Accounts, including the development of a new guide for parents. Through the Family Center, parents can learn about the built-in protections, customisable features and parental supervision tools available across our technologies, and how to create a healthy media environment at home.
Other digital literacy resources, including "Get Digital," "My Digital World," and "We Think Digital," are now part of UNICEF’s Learning Passport, making them widely available to learners globally.
The Evolution of Media Literacy Over the Next Five Years
As the media landscape and technological advancements continue to evolve, media literacy will need to adapt to address new challenges. The increasing prevalence of AI-generated content necessitates the development of skills to identify and critically evaluate such content. As mentioned above, Meta is actively working on developing classifiers to automatically detect AI-generated content and collaborating with industry partners to establish common standards for identifying AI-generated media.
Furthermore, media literacy education must encompass a broader range of digital skills, including understanding the implications of emerging technologies like the metaverse. Meta is investing in initiatives such as the XR Programs and Research Fund to explore responsible design and digital literacy in the context of immersive technologies.
The Adequacy of the UK's Regulatory and Legislative Framework
The UK's regulatory and legislative framework plays a crucial role in helping to ensure media literacy. We welcome the provision in the Online Safety Act that gives Ofcom a duty to promote media literacy through educational activities and guidance.
Meta has long supported the development of this framework, sharing the UK Government's objective of making the internet safer while protecting its vast social and economic benefits. Over the last several years, we have worked with both the UK Government and Parliament to support the development of the Act. We have been vocal that new and innovative regulatory frameworks must strike a complex balance between safety and people's rights, such as freedom of speech, and that providers need to take their share of the responsibility for striking this balance.
We have welcomed Ofcom's approach throughout the wider legislative process to developing this regulatory framework, its focus on how to implement the safety duties, and its clear objective of developing guidance and Codes of Practice based on the principle of proportionality. We have been supporting Ofcom's preparations for this exercise for the last three years.
In Meta's experience, a collaborative and interrelated approach is crucial to addressing online harm. As set out in Ofcom's three-year Media Literacy Strategy, heightening public awareness so that users are equipped to protect themselves online is vital. Using the methods we have set out above, Meta will continue to prioritise and evaluate ways to ensure our users can detect mis- and disinformation on our platforms, in addition to protecting content of democratic importance, including around elections.
We believe that regulation by itself isn’t the answer to all harms on the internet. A collaborative approach involving government, industry, and civil society is necessary to create a safer online environment. Meta is committed to working with the UK Government and our partners to support programs that educate and empower internet users across the UK to manage their online safety.
Conclusion
As detailed above, Meta remains dedicated to advancing media literacy through our policies, educational resources, and collaborations. We believe that a comprehensive approach, involving all stakeholders, is essential to equip individuals with the skills needed to navigate the digital world safely and responsibly. We hope that our approach helps inform Ofcom’s priorities in its 2024 Media Literacy Strategy to ensure that platforms promote best practice in prioritising users’ media literacy.
Ultimately, ensuring a media-literate society requires a collaborative effort between government agencies, regulators, technology companies, and civil society organisations including Media Smart. We look forward to further working with Ofcom and the Government as the Online Safety Act and measures to promote media literacy are implemented. We are committed to working together to address these complex issues and ensure that our platforms are safe and trustworthy for all users.
May 2025