Financial Times – written evidence (MLI0051)

 

House of Lords Communications and Digital Select Committee inquiry: Media literacy

 

 

The Financial Times (FT) is a UK-headquartered, global business comprising 18 distinct enterprises. These include the 135-year-old, fearlessly pink Financial Times; FT Live, which convenes live events with key decision-makers across growth sectors; FT Locations, an inward investment data business; FT Specialist, which provides a large range of specialist titles such as Investors Chronicle, a retail investment publication; and Endpoints, which specialises in biotech and pharma.

 

At the FT, our business is built on a belief in the fundamental value of professional journalism, which serves society far beyond the jobs we create and the taxes we contribute to the British economy. For more than 135 years, the FT has reported on and analysed economic and political affairs in Britain and worldwide, upholding rigorous editorial standards and complying with applicable laws on defamation, data privacy and intellectual property. FT journalism empowers investors, businesses, policymakers and citizens to make informed decisions - whether in print, online, or via our app.

 

In an era of increasing information chaos, FT journalism provides clarity and certainty to a paying readership of 1.3 million. The FT is a truly global business - while our headquarters and the majority of our operations are in the UK, more than 50% of our subscribers are based internationally. Through our FT Schools programme, the FT is committed to providing every 16-19-year-old student, anywhere in the world, with access to FT journalism. The FT also supports FT FLIC (the Financial Literacy and Inclusion Campaign), which educates and lobbies for policy improvements related to financial literacy.

 

Introduction

 

The FT welcomes the opportunity to respond to the Lords Communications and Digital Committee's inquiry into media literacy in the UK. The ability of UK citizens to navigate the information ecosystem is crucial to our democracy. The media regulator, Ofcom, now defines media literacy as "the ability to use, understand and create media and communications across multiple formats and services". Ofcom notes that these services are constantly evolving, playing a more influential role in virtually all aspects of our daily lives while also introducing new risks, particularly relating to online safety and mis- and disinformation.

 

As with previous waves of technological disruption, the FT is seeking to evolve its business for the world as it is, and the world to come. News organisations cannot control how online platforms change their approach to news distribution. Nor would we wish to dictate what news the public must consume. Our focus is on what we can control: principally, maintaining the quality of FT journalism and the trust that the public places in it for its accuracy. This approach appears to be working: the latest Reuters Institute Digital News Report notes that the FT is the most trusted non-broadcast news organisation in the UK, placing fourth behind the BBC, Channel 4 News and ITV News.

 

What has changed in the age of AI is our ability to control how our journalism is used, both in the training of AI models and in the display of outputs purporting to be from the FT. We are entering an age of digital dumping, in which AI developers flood the market for news and information with outputs created by generative AI models in response to natural language user prompts. GenAI models use statistical probability to produce combinations of words designed to be sufficiently plausible that the reader assumes they are correct. This probabilistic approach to producing outputs is as far from the process of producing high-quality journalism as it is possible to be. Digital dumping is flooding the market for news and information with outputs that purport to be news, but which are in fact low-quality simulacrums of the journalism they seek to replace.

 

Digital dumping allows social media platforms to profit from paid-for promoted posts featuring trusted FT journalists that seek to scam social media users. Recent incidents of paid-for deepfake Instagram posts featuring cloned video and audio of the FT's chief economics commentator, Martin Wolf, demonstrate how scammers use a combination of AI technology and systemic loopholes in the social media business model to undermine the trust, reputation and brands of leading journalists and news organisations. The 'publish first, ask questions later' approach taken by social media platforms enables those companies to generate vast amounts of advertising revenue, keeping margins high by investing as little as possible in human judgement as to whether such posts seek to exploit platform users or to undermine the reputations of the individuals and companies whose identities are illicitly used in those scam posts.

 

Efforts to maintain control of how news intellectual property is exploited by GenAI developers and online platforms are vital if citizens are to be able to trust the news and information they receive online. Such control is also vital if companies like the FT are to continue generating revenues for reinvestment in the independent professional journalism that underpins the wider ecosystem of news and information serving the people, policymakers and companies that are key to the UK's economic growth.

 

The need for strong media literacy skills is increasing in the age of AI

 

Media literacy skills are even more essential as AI technology evolves with ever greater speed. Over recent years, trusted news brands have sought to maintain engagement with audiences in an environment in which online platforms mediate the relationship between the sources of news and its readers. The effect of such mediation is to obscure the source of news and information, fragmenting and undermining long-term trust in the news consumed by the public online. Engaged relationships with readers are essential to ensure that consumers understand that the news and information they read comes from a trusted source, and that those sources remain sustainable going concerns. The challenges posed by fake news on social media platforms have been well documented by various select committees of the House, and most recently in the book Careless People, by former Facebook employee Sarah Wynn-Williams.

 

In contrast to online platforms and GenAI developers, which do not accept legal liability for the content they publish, journalism published by trusted news publishers like the FT is written and edited with care to ensure that it complies with our editorial code, as well as the UK's defamation, data protection and intellectual property laws. The care and attention the FT has paid to its journalism over our 135-year history is the reason the public trusts the FT brand. It is also one of the key reasons why our journalism is used as a training component in leading AI models.

 

While the age of search and social media raised concerns about the mediation between readers and the source of journalism, the challenges posed for media literacy by the age of GenAI-powered digital dumping are arguably far more significant. Search engines grew by aggregating material published by third parties on the open web, enabling consumers to choose which material to consume. This changed with the advent of social media platforms, which target third-party posts to users based on their characteristics and viewing history. The integration of GenAI technologies into those platforms now fundamentally changes their role, making platform owners active editors and first-party creators of material tailored to what the platform knows about the user. This is a huge change in role, and one which should bring new legal responsibilities.

 

As we note above, incumbent tech companies are currently rolling out experimental GenAI technologies within services such as trusted search engines and less trusted, but widely used, social media platforms. The combination of these technologies has significant potential to further undermine the trust citizens place in what they read, watch and hear online.

 

While the FT has set out ambitious plans to ensure that colleagues across the FT group are literate, or indeed fluent, in the use of AI, our editor-in-chief, Roula Khalaf, has also been clear in a letter to our readers that FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields, and who are dedicated to reporting on and analysing the world as it is, accurately and fairly.

 

The rapid deployment of AI technologies within commonly used digital products and services, and the dangers that digital dumping poses to the news and information ecosystem, are primary concerns for citizens.

 

        A Pew Research Center survey in the United States found that inaccurate information, impersonation and data misuse are common worries for both experts and the public. For example, 66% of adults overall and 70% of experts are highly concerned about people getting inaccurate information from AI. The same survey found that a majority of US adults (62%) and experts (53%) have "not too much" or no confidence that the US government will regulate AI effectively. In the absence of government regulation, 59% of the public and 55% of surveyed experts have "not too much" or no confidence in US companies to develop and use AI responsibly.

 

        In the UK, the government has commissioned multiple waves of polling on public attitudes to AI. The latest wave found that 44% of the public predict that AI will have a negative impact on the trustworthiness of news and information published online. These concerns are particularly prevalent among those with no (50%) or little (48%) knowledge of how data is used to train AI systems.

 

        Recent research from the Reuters Institute at Oxford University also notes high levels of concern about the spread of misinformation online, and about the use of AI technologies to create fake content, with almost nine in ten people raising concerns across the eight countries surveyed.

 

        Respondents to the survey in the UK were the most likely to suggest that the government should play a role in regulating GenAI technologies, significantly more than they did for other forms of technology in the last wave of technological evolution.

 

Because GenAI is a technology based on the probability of one word following another, GenAI models have no editorial judgement or understanding. This can lead to words from source material being mangled or jumbled in ways that distort meaning, potentially creating liabilities that had otherwise been carefully avoided.
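
 

This probabilistic behaviour can be illustrated with a toy example. The following is a minimal sketch in Python; the word distribution is invented for illustration and is not drawn from any real model:

    import random

    # Hypothetical next-word probabilities after a prompt such as
    # "The chancellor said". A real model scores tens of thousands of tokens.
    next_word_probs = {
        "growth": 0.4,
        "inflation": 0.3,
        "nothing": 0.2,
        "banana": 0.1,  # low-probability words remain possible
    }

    def sample_next_word(probs):
        """Sample one word in proportion to its probability."""
        words = list(probs)
        weights = [probs[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    # The same "prompt" run ten times can yield ten different continuations.
    print([sample_next_word(next_word_probs) for _ in range(10)])

The model selects words by statistical plausibility, not by checking facts: nothing in this procedure knows whether "growth" or "banana" is true, only which is more likely to follow.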

 

        As BBC research into leading AI chatbots has revealed, when summarising journalism from trusted sources they are generally prone to producing significant inaccuracies and generating misleading headlines. The legal position on who is responsible for the production of such misleading material remains unclear, and is a vexed negotiating matter in commercial deals with AI businesses.

 

        In the context of discussions about the licensing of journalism for use by AI developers, researchers at Columbia University have found that the nature of GenAI technologies means that errors persist in outputs containing licensed journalism. A March 2025 study of AI chatbots by the Tow Center at Columbia University found similarly inaccurate and misleading outputs from eight of the leading frontier models, regardless of whether those services are provided at no cost or via a subscription. The Tow Center research found that chatbots were generally bad at declining to answer questions they could not answer accurately, offering incorrect or speculative answers instead, and that premium chatbots provided more confidently incorrect answers than their free counterparts.

 

GenAI developers know that models built using historic training data are prone to creating misleading output that can pollute the information ecosystems in which their technologies operate. This is why they use a technique called Retrieval Augmented Generation, or RAG, to ground outputs to user prompts in information made available to the model in real time. As the scientist and Meta AI lead Yann LeCun has explained, RAG acts as the short-term memory of GenAI models. It is essential to any hope that GenAI models can produce outputs containing accurate information. Without this short-term memory, GenAI-powered chatbots and search engines would have to rely on their long-term memory: the data on which the underlying models were trained. While this long-term memory provides vital context for a model's operation, it contains no material about events happening today. Without RAG, GenAI models would produce outputs by interpolating from data in their long-term memory. These outputs would be complete fabrications, yet would likely appear credible enough that users might consider them to be real.
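
 

For readers unfamiliar with the technique, the sketch below shows the basic shape of RAG in Python. It is illustrative only: the retriever is a toy keyword matcher, and llm() is a hypothetical stand-in for a real model API. Production systems use vector search and hosted models, but the principle is the same:

    from dataclasses import dataclass

    @dataclass
    class Article:
        headline: str
        body: str

    # "Short-term memory": fresh material fetched at query time, often
    # scraped from publisher websites.
    todays_articles = [
        Article("Example headline", "Example body text about today's events."),
    ]

    def retrieve(query, articles):
        """Toy retriever: keep articles sharing any word with the query."""
        terms = set(query.lower().split())
        return [a for a in articles if terms & set(a.body.lower().split())]

    def llm(prompt):
        # Hypothetical stand-in for a call to a real GenAI model.
        return f"[model output grounded in a prompt of {len(prompt)} characters]"

    def answer(query):
        # Ground the model's output in retrieved, up-to-date text.
        context = "\n".join(a.body for a in retrieve(query, todays_articles))
        prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
        return llm(prompt)

    print(answer("What is an example of today's news?"))

Everything the model is asked to summarise here is fetched at query time; this retrieval step is the one for which, as we set out below, most AI developers currently pay publishers nothing.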

 

Despite the high commercial value of RAG to AI developers, the vast majority of companies take the raw materials required to create summarised simulacrums without any form of remuneration or licensing arrangement, and without directing traffic back to the source publisher's website. This is contrary to the terms of service of many publishers, and is neither fair nor sustainable.

 

Not only do these practices risk misleading citizens about the nature and reliability of outputs posing as news; in doing so they also seek to replace the consumption of high-quality journalism via the source websites of its originators. This digital dumping risks undermining the financial viability of the news publisher brands whose journalism is vital to maintaining a coherent and verifiable body of high-quality news and information on which democratic decision-making depends. Digital dumping therefore poses risks more existential than the profit or loss of a single news organisation. Unlicensed use of journalism as RAG may only be stopped by publishers finding ways to block all forms of scraping or, in extremis, completely removing journalism from the open web. Such retrenchment from widespread distribution would have negative financial consequences for news brands. But it would also have terrible consequences for the open web, as the diverse ecosystem of trusted journalism that has emerged over the last 20 years is replaced by an ever-growing mass of fabricated GenAI output.

 

What is even more concerning is that while national governments can impose tariffs and port controls on physical goods like steel imports, in the context of digital dumping the ability of news brands to prevent GenAI developers scraping their journalism and using their brands is presently minimal. Even where news publishers use existing web standards to opt out of the use of their journalism and brands in GenAI outputs, the Tow Center report has confirmed that these requests are ignored. The study notes that multiple chatbots seemed to bypass Robots Exclusion Protocol preferences, fabricate links, and cite syndicated and copied versions of articles.
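
 

The mechanism at issue can be shown with Python's standard urllib.robotparser. The crawler token GPTBot is one publicly documented example of an AI user agent; the key point is that compliance with the Robots Exclusion Protocol is voluntary, so a non-compliant crawler can simply ignore a publisher's stated preferences:

    from urllib.robotparser import RobotFileParser

    # A publisher opting out of AI crawling might serve a robots.txt containing:
    #   User-agent: GPTBot
    #   Disallow: /
    rp = RobotFileParser()
    rp.parse([
        "User-agent: GPTBot",
        "Disallow: /",
    ])

    # A compliant crawler consults these preferences before fetching a page;
    # nothing technically prevents a non-compliant crawler from ignoring them.
    print(rp.can_fetch("GPTBot", "https://example.com/article"))  # False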

 

The FT is enthusiastic about the role of GenAI models in generating creative ideas, solving repetitive tasks, and building bespoke models that can propose novel solutions to scientific problems. But in the context of news and information, the tendency of GenAI models to produce material containing inaccuracies is not a design flaw that can simply be fixed: it is inherent to how the technology works. If you ask a GenAI model the same question about news and events in the world today 100 times, you are likely to receive 100 different answers. As the Tow Center study makes clear, such errors persist even where licensing arrangements exist: deals that give news brands the opportunity to influence the development of new products provide no guarantee of accurate citation in chatbot responses. The age of digital dumping requires societies that care about the veracity and reliability of their news and information to raise defences in concerted ways: a clear focus on raising the media literacy of all citizens will be a key defence in this context.

The challenge of digital dumping for existing media laws

 

In its 2022 paper on the future of media plurality, Ofcom warned that "in a world of intermediaries as gatekeepers, curating or recommending news content to online audiences, it is not clear that people are aware of the choices being made on their behalf, or their impact". The same paper suggested that there may be a case for new remedies - including tools to provide greater transparency over the choices intermediaries make, and to give people more choice about the news they see - to ensure we continue to secure the benefits of the UK's diverse and vibrant news media.

 

Of the regulatory changes suggested in the 2022 paper, we are not aware of any work being taken forward by government departments to address the issues of transparency or control of news distribution by key gatekeepers. The only recommendation from the 2022 paper that we are aware of being taken forward was a November 2024 consultation by the present government on expanding the merger regime to take into account potential mergers of online magazines.

 

Focusing attention on such low-hanging fruit is understandable in an environment in which government ministers feel that they should act with a "sense of humility" when negotiating with online platforms, whom we should treat like "nation states". In its mission to secure "growth", the government is urgently courting AI developers in the hope of securing investment in data centres, while seeking to avoid conflict with the US government to prevent the further escalation of trade disputes. Yet it is still not considering how online platforms are disrupting the news and information ecosystem on which UK citizens rely to make decisions in a democracy.

 

The Committee will be aware that Ofcom data shows that Instagram (41%), YouTube (37%), Facebook (35%), TikTok (33%) and 'X' (27%) comprise the top five news sources used by 16-24-year-olds. News from BBC iPlayer (23%) and BBC One (23%) now ranks below these social media platforms amongst that age group.

 

Research by Channel 4 examines these trends in more detail, finding that "'truth' is no longer a given. Confronted with a fragmented information ecosystem, Gen Z often take a 'magpie' approach, building their interpretation of the world from diverse, sometimes contradictory, sources… [this] raises urgent questions: if trust in established institutions continues to erode, what does that mean for civic engagement, democracy, and social cohesion?"

 

As we set out above, in the age of GenAI, digital dumping will increase the challenge of determining what is true and what is false. The ability to understand why a particular story or response has been shown to a user will also become much harder to attain. Decisions about outputs created by GenAI models are directed by the policies that underpin those models. These policies can be tweaked to prevent criticism of individuals, including political figures on whose favour model owners rely. How conflicts of interest are handled in the policies underpinning GenAI technologies is unclear to experts, let alone to the public. We do not believe that media regulators presently have the power to require transparency over how those underlying policies are designed and how they operate in practice.

 

A further challenge that should be considered in the context of media literacy is the tendency of GenAI models to respond to users with increasing degrees of sycophancy: seeking to please the user with responses the model considers the user will like, rather than providing the user with details about the world as it is. This trait goes beyond potentially valuable forms of news personalisation into providing users with answers they want to hear. Ultimately, GenAI models are needy, prioritising user engagement over clear and honest responses that might lead a user to cease engaging with the model. The research team at Anthropic has found that this trait may not be isolated to a single model: AI assistants frequently "wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user. The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained." Issues with recent model releases by OpenAI appear to confirm this broader trend across GenAI models.

 

Providing clarity in the age of digital dumping

 

The Committee's inquiry comes at a crucial time. The role of digital technologies, algorithms and platforms in shaping the views, attitudes and behaviours of UK citizens is presently in the spotlight due to a piece of art, and the public response to the implications of that work. Concerns about the impact of largely unregulated outputs from online platforms on citizens are not new. The ways in which the toxic views of influencers have permeated online culture - particularly among young men - have been raised over a number of years.

 

In the absence of government action to address challenges to the UK news and information ecosystem, these issues will get worse before they get better. Digital dumping will flatten news and information further, making it ever more difficult for citizens to distinguish an output purporting to be journalism from actual journalism produced by a trusted news brand.

 

In an age in which GenAI models can be designed to persuade users of a point of view, in ways unknown to the users themselves, there is a real danger that citizens' media literacy skills will be tested beyond their limits. While the government is focused on inviting US investment in the UK to enable the training of GenAI models here, it appears unfocused on the implications of digital dumping for the news and information on which the UK population relies.

 

New research by the government's own digital service suggests that this lack of urgency may be due to a lack of literacy within government about the capabilities and flaws of GenAI models. A recent blog from the UK Government Digital Service (GDS) suggests that there are "misconceptions of what AI is and what it might achieve", and that it is essential for civil servants to "understand the potential and limitations of different AI technologies when considering if AI is the right tool for the job". The role of media literacy in the age of GenAI is crucial to demystifying the strengths and weaknesses of these technologies. The need to avoid generalisations about the capabilities of LLMs is also reflected in research published in 2024 by teams at MIT, which found that language models that get better can "almost trick people into thinking they will perform well on related questions when, in actuality, they don't".

 

There is evidence that lessons in critical thinking and media literacy in an age of GenAI can help to overcome the challenges posed by the creation and dissemination of mis- and disinformation. A recent academic study of the development of truth discernment among adolescents found that overcoming cognitive biases and engaging in analytical thinking "could be one process lying at the root of media truth discernment during adolescence", but that "other processes, such as general knowledge and/or media literacy, might also be involved in the task of identifying real news in particular". These findings suggest that media literacy programmes amongst school-aged pupils can have a positive long-term effect on critical thinking.

 

At the same time, emerging research suggests that in the rush to deploy and use GenAI models across the economy and in public life, key literacy skills can be undermined. Early findings from teams at Microsoft and Carnegie Mellon University show that the use of GenAI technologies can result in humans applying less critical thinking, which can in turn result in "the deterioration of cognitive faculties that ought to be preserved".

 

GenAI technologies have the potential to drive advances in a range of fields across the economy. But as everyone from the UK Prime Minister downwards calls for AI technologies to be "mainlined" into the UK economy and society, it is vital that Parliamentarians highlight not only the strengths of these technologies, but also the risks that they could take our society backwards. Advocating for a concerted media literacy strategy that arms every UK citizen to understand these strengths and weaknesses, and the fundamental differences between GenAI simulacrums of news and journalism from trusted news brands, would be a key first step.

 

Key questions

 

1. What are the overall aims of delivering media literacy in the UK?

 

a. How would you define media literacy? What would ‘good’ media literacy look like?

 

b. What are the risks and consequences of not achieving these aims?

 

We broadly agree with Ofcom's definition that media literacy is "the ability to use, understand and create media and communications across multiple formats and services". We believe a major challenge in the age of AI will centre on the 'understand' aspect of Ofcom's definition. This spans both an understanding of the limits of GenAI when producing outputs, and an understanding of the policy rationale that determines why GenAI models produce outputs in the way that they do. We are currently unaware of any government plans to assess the underlying policies of commonly used general-purpose LLMs to understand how underlying conflicts of interest are managed.

 

The risk of not achieving the aims of media literacy is that individuals become distrustful of all forms of media. A recent study by the Alan Turing Institute examining the impact of AI on the 2024 UK elections found evidence of impacts "linked to an increasingly degraded and polarised information space, which create second-order risks outside of just the election process. These include: confusion over whether AI-generated content is real, damaging trust in online sources; deepfakes inciting online hate against political figures, threatening their personal safety; and politicians exploiting AI disinformation for potential electoral gain. The long-term consequences of these dynamics remain uncertain." The report continues, noting that given the significant challenges online users face in distinguishing AI-generated material, "the proliferation of these information sources will make it increasingly difficult for individuals to determine the legitimacy of a particular source. This is especially true if the site integrates genuine stories alongside false ones, or individuals exploit the financial gains made from luring companies into paying for professional ads on attention-grabbing AI news sites."

 

A significant challenge for consumers in a world of increasingly GenAI-powered search is that the social contract that existed between news publishers and key online gateways, such as search and social media platforms, is breaking down as GenAI technology is integrated into commonly used platforms. Many trusted news brands are seeking to prevent their journalism and brands being scraped and used by those services: partly because those services provide no direct licensing revenue for the use of that journalism; partly because they deliver minimal traffic to the first-party web domains of news publishers; and partly because they have a propensity to mangle and distort the journalism of news brands through the act of GenAI-powered summarisation.

 

The lack of any licensing revenue, and the absence of users clicking through from those services to the source of trusted journalism, mean that there is little to no incentive for publishers to distribute through GenAI-powered gateways. The nature of GenAI-powered search engines is that they will seek to scrape content from well-known news brands to give their outputs an air of authority. Our own observation is that when GenAI models are prompted to scrape from trusted news sources such as the FT, they may be only partly successful. For example, they may successfully scrape a headline, or part of a news story, but then fabricate the remaining portion of an output that may appear convincing to the end user.

 

In order to provide the user with what the model deems to be a "useful" output, a GenAI model may reach into its training data, or long-term memory, and interpolate between different sources within that training data to create a news-like output that looks like it could credibly be from the FT but is, in fact, completely fabricated. This has the potential to mislead users, who may have no reason to expect that GenAI technologies would operate in this way, beyond a disclaimer about their unreliability buried in the terms and conditions of the product.

 

If trusted news sources were successful in preventing the use of their brands and journalism in totality, this could create a source vacuum into which providers of GenAI-generated news slop would step, filling the gap where trusted information should be. Again, this should be set in the context of the public having built up a high degree of trust in search interfaces over many years.

 

When combined with the government's current preferred option of creating a new UK copyright exception allowing AI businesses to text and data mine news content without remuneration, this could lead to the further hollowing out of the professional news media industry in the UK. Digital dumping by GenAI developers poses a fundamental challenge to media literacy, and risks undermining the UK's democratic institutions.

 

c. What indicators or evidence would demonstrate improvement?

 

The FT's internal programme to upskill colleagues on their understanding and use of GenAI has included online training workshops, community discussion groups and meetups. We have measured progress and the impact of AI literacy and fluency training in a range of ways, including:

 

        Stories and case studies of people's use of AI, with qualitative or quantitative measures.

 

        Learner feedback.

 

        Number of people engaging with learning activities.

 

        Number of people engaging with assessment.

 

        Number of people considered "AI Literate" or "AI Fluent".

 

2. How well are existing UK media literacy initiatives working, and how could they be enhanced?

 

a. How are responsibilities currently split between different stakeholders, such as the Government, industry, and civil society, and could improvements be made to these arrangements?

 

Alongside investment in high quality journalism, the FT runs or funds a variety of schemes that seek to enhance the media literacy of young people in the UK and beyond.

 

Our FT Schools programme - supported by our new sponsor Nomura - provides free access to FT.com for schools anywhere in the world that teach 16-19-year-old students. Currently, FT Schools operates in 132 countries. The goal of providing access to the FT is to help with learning, essay writing, exams and broadening knowledge, to improve performance in interviews for further study, and to provide guidance on employment. Our programme is growing, with more than 5,600 schools signed up, 70,000 user accounts created, and 9.6mn page views from 22,000 engaged users throughout 2024.

 

In addition to providing access to our journalism, the FT Schools Hub features teacher-recommended articles relevant to Economics, Business, Politics, Philosophy, Psychology, Financial Literacy, Theory of Knowledge and Geography. Teachers also share questions after each article to encourage students to test their knowledge. Students are also able to share articles they are reading, 'like' articles, leave comments and recommend FT articles to other users. More information, along with testimonies about the value of the FT Schools programme, is attached as an appendix to this short submission.

 

Our experience of working with schools to deliver FT Schools sessions has been positive. The degree of engagement with the initiative, and the opportunities to engage students with the sessions, are largely dependent on the engagement and leadership of teaching staff within the school setting. We recognise that the pressures on teachers are many, and that without dedicated space within the curriculum, there is a risk that media literacy training is squeezed out by other topics.

 

We note the suggestion in the recent Demos paper on epistemic security of the need to "revamp the UK's approach to media literacy in schools to serve epistemic security", using the Finnish model as an exemplar. We agree that Ofcom's inclusion of teacher training in its three-year media literacy strategy, alongside the Department for Education's ongoing Curriculum and Assessment Review, presents a key opportunity to redesign the approach to civic, digital and media literacy across subjects. As a commercial investor in high-quality news and information, the FT would be delighted to work with Ofcom and the DfE to help design this approach. We know that members of our editorial staff are highly motivated to volunteer for media literacy activities and to engage with students directly in school settings.

 

b. Which other actors (including online platforms) have a role to play in improving media literacy in the UK?

 

As we note above, GenAI technologies have real strengths when it comes to generating ideas and creative content, and to hypothesising about potential avenues of scientific discovery. They are inherently unreliable when it comes to producing news and information. Companies producing simulacrums of news articles need to provide users with much clearer disclaimers making clear that their models are unreliable in this context.

 

The FT has worked with a range of UK and global news organisations to push for much clearer labelling of outputs, including more prominent attribution of, and linking to, articles that have been used in the creation of GenAI outputs. The presentation of news and information produced by leading GenAI developers is constantly evolving, largely without the input of the news brands whose content provides the raw material for these outputs. We are unaware of any regulatory oversight of this evolving area of news generation and consumption by media watchdogs such as Ofcom.

 

3. How will media literacy need to evolve over the next five years to keep up with changes in the media landscape and technological advancements?

 

The expectation is that GenAI technologies will be built into all commonly used online gateways, including search and social media. As recent research by the Reuters Institute at Oxford University has outlined, the public is sceptical about news and information made available through most online platforms, with just over one third trusting video networks (37%), just under one third trusting messaging apps (31%), social media trusted by 30%, and generative AI by 27%. The outlier in the Reuters research is search, which is trusted by a slight majority (55%). This disparity is also reflected in the latest edition of Ofcom's news consumption data, which again shows search as an outlier.

 

Our expectation is that, over time, search engines will evolve from being gateways through which users are transported to the source of a news story into destinations in which summaries of news stories are consumed. With the introduction of GenAI-powered search, the analyst firm Gartner predicts that organic search traffic volumes will decrease by 25% by 2026. This will have significant implications for the financing of investment in human-authored journalism. Research released in April by Ahrefs analysed 300,000 keywords and found that the presence of an AI Overview in the search results correlated with a 34.5% lower average clickthrough rate (CTR) for the top-ranking page, compared with similar informational keywords without an AI Overview.

 

A submission to the Lords Communications Committee inquiry into the 'future of journalism', by the search engine Google, suggested that in 2014 Google sent more than eight billion visits to European news publisher websites each month, and that each one of the clicks that Google sends to publishers is worth between 4 and 6 euro cents. The submission suggested that the total value of this web traffic in the UK was €208mn. A loss of 25% of traffic referrals from search would logically lead to a reduction in revenues of €52mn, or £44.7mn. Based on NCTJ data suggesting that the average salary of a journalist working in the UK today is £34,500, this loss of revenue equates to the salaries of almost 1,300 journalists.
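
 

The arithmetic behind these figures can be checked directly (a sketch; the sterling figure uses the conversion implied by the numbers quoted above):

    # Checking the figures quoted above.
    traffic_value_eur = 208_000_000       # total value of search referrals (EUR)
    loss_eur = traffic_value_eur * 0.25   # 25% referral decline -> EUR 52mn
    loss_gbp = 44_700_000                 # sterling equivalent quoted above
    avg_salary_gbp = 34_500               # NCTJ average journalist salary
    print(int(loss_eur))                  # 52000000
    print(round(loss_gbp / avg_salary_gbp))  # 1296, i.e. almost 1,300 salaries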

 

But as important as the financial implications of AI-powered search are for the news industry, the likelihood that users will be retained within the search interface, rather than viewing source content directly, increases the risk of users being exposed to mis- and disinformation. The display of summarised news and information within what has hitherto been a trusted gateway, combined with the unreliable nature of AI-generated search outputs - particularly when users are searching for niche or novel issues - raises challenges for even media-literate citizens.

 

5. How adequately is the UK's regulatory and legislative framework delivering media literacy?

 

a. What is your assessment of Ofcom’s media literacy strategy?

 

b. What further action is needed from the Government, if any?

 

c. Are changes needed to legislation, for example the Online Safety Act 2023 or the Media Act 2024?

 

There is an urgent need for the government to consider the measures required to safeguard the future of news and information consumed by citizens in the UK. The age of GenAI-powered digital dumping will challenge not only the goal of media literacy, but also other key principles of the Communications Act 2003, including the maintenance of a plurality of news providers and the transparency of the ownership and supply of news and information to the British public.

 

Despite this need for urgent action, the government is instead focused on proposals that would undermine the ability of news organisations to control how their journalism and brands are used, by weakening the intellectual property (IP) rights of individuals and companies. This IP giveaway is an effort to attract investment in data centres from the same GenAI firms that are engaged in the digital dumping that is polluting the UK's news and information ecosystem, and undermining the ability of domestic news brands to invest in the journalism that seeks to maintain a clean supply.

 

The challenge to the security and sustainability of the supply of high-quality news and information is not a problem unknown to the government. A 2020 Alan Turing Institute report on epistemic security in the UK, written in conjunction with the Defence Science and Technology Laboratory - an agency of the Ministry of Defence - and the Centre for the Study of Existential Risk at Cambridge University, noted the growing threat from adversarial use of technology to influence decision-making in the UK. The report suggested that only long-term investment in technological and institutional solutions is likely to provide effective strategies for evaluating and mitigating threats to a democracy's epistemic security. This security is critical to a society's ability to organise collective action on the basis of timely and reliable information in a technologically advanced world.

 

The age of digital dumping should be ringing alarm bells within government about the long-term risks to democratic decision-making in the UK. Yet there appears to be no holistic thinking on the updates required to the UK's regulatory frameworks in order to secure the future of high-quality news production. Nor are any questions being asked about the business models that allow social media companies to profit from the paid promotion of deepfake posts that seek to trade on the brand equity of trusted journalists.

 

A recent report by Demos, focused on the UK's epistemic security in 2029, suggests a range of changes to legislation to secure the UK's news and information ecosystem. These include requirements for greater access to data from social media platforms for research purposes, the closing of loopholes in the Online Safety Act, and bringing social media companies within the ambit of the foreign ownership requirements set out in the Communications Act 2003.

 

The harms caused by the combination of AI technologies with paid promotion on social media platforms were meant to be addressed by the last government's online advertising programme. We are unclear about the status of this programme, or whether it is taking into account the new harms emerging from the age of digital dumping. The enforcement of know-your-customer (KYC) obligations on search and social media platforms could be a key part of the solution to preventing the use of paid promotion to distribute deepfake posts such as those the FT has sought to take down in recent weeks. By way of example of legislative measures under consideration in other countries, we understand that the European Council is currently considering changes to the Payment Services Regulation that could impose greater KYC obligations in relation to advertising that promotes financial services to EU users.

 

The forthcoming BBC Charter Review provides a key opportunity to safeguard the future of BBC news and to maintain an impartial news service for the UK public. The Review should also consider how the BBC can work with other public service broadcasters, and with wider commercial news brands such as the FT, to deliver trusted news to the UK public through key online gateways.

 

The commencement of the Digital Markets, Competition and Consumers Act (DMCCA) was a welcome step forward in providing the UK's competition regulators with new powers to enforce fair trading, transparency and choice in digital markets. The FT was a co-signatory of a letter to the Prime Minister raising concerns about suggestions that the government may weaken these new laws, including the DMCCA, as part of efforts to assuage the new US administration. The letter urged the Prime Minister and others in government to "reflect on the long-term benefits that a safer online space, and fairer digital markets will bring to our society and economy. And consider the future ramifications for our sovereignty if it becomes clear that laws passed by our parliament can be traded away. Our sovereignty and our democracy are not for sale."

 

 

May 2025
