Written Evidence Submitted by Professor Steve Fuller, Auguste Comte Chair in Social Epistemology, University of Warwick







The prospect of a UK ARPA offers a unique opportunity for the UK to set a clear world example in redefining what it means for science to be a genuinely ‘public good’. The financial burdens that the COVID-19 pandemic has placed on the Treasury provide a further incentive. The likelihood of a tighter public purse in the near future will necessitate hard funding choices. This should inspire a McKinsey-like ‘ground zero’ audit of whether current public investment in science is organized in such a way as to actually benefit the public. As with so many other things, our understanding of what it means for science to be a public good is ‘path dependent’, in the sense that it is conditioned by specific historical episodes, the response to which at the time has anchored subsequent judgements, which are now simply taken for granted. A UK ARPA could serve to break the path dependency in our thinking about ‘science as a public good’ if it is proposed as a replacement for the current research councils-based funding arrangement. This would ensure the right levels of investment to enable a UK ARPA to succeed.


The path-dependent understanding of ‘science as a public good’ is a product of the neoclassical welfare economics of the post-Second World War period, epitomized by Paul Samuelson, the MIT-based author of the most widely used economics textbook in the second half of the twentieth century. Its basic assumption is that ‘public goods’ are whatever everyone needs but no one is incentivized to fund – that is, goods subject to ‘market failure’, which in turn provides a justification for state funding. On this vision, the ability of states to influence markets is quite limited, perhaps reflecting an overly parochial sense of individual self-interest and a relatively unimaginative sense of how markets might be designed to work. Moreover, in practice, this approach – while ostensibly publicly oriented – resulted in the concentration of state funding in institutions and projects that the scientific elites regarded as most worthy. The guiding assumption was that funding the ‘best people’ was the most efficient way for science to benefit everyone, perhaps via a ‘multiplier’ or ‘trickle down’ effect, depending on whether you learned economics from Keynes or Milton Friedman. In either case, the scientific establishment was largely in charge of how and to whom the money flowed, subject to largely sympathetic civil service oversight.


The UK precedent for this line of thought is the so-called Haldane Principle, named for Viscount Richard Haldane, Lord Chancellor during the First World War, who was also a well-regarded philosopher – indeed, among the original British popularisers of the ‘new physics’ that emerged in the early twentieth century. The conclusion of his 1921 book, The Reign of Relativity, speaks of the state’s need to monitor new developments in the disparate areas of science with the aim of seizing any opportunity to harness them for the greater public good. Earlier in the book, Haldane had written presciently of the potential for atomic physics to deliver incredible wartime and peacetime technologies. Arguably Winston Churchill’s early acceptance of the atomic bomb as part of the future of warfare was due to Haldane’s insight. However, Haldane also believed that universities were best suited to deciding these matters on behalf of the public interest, a judgement he reached as part of a more general civil service reform in 1904 – that is, before the Great War and, more to the point, before the intensification of disciplinary specialisation within science, which began at university level only after the Second World War.


I raise this historical point because while Haldane provided an intellectual horizon for understanding many of science’s later achievements, he failed to anticipate that science would undergo a profound sociological transformation in the twentieth century, effectively turning organized inquiry into an intellectual feudal estate governed by ‘peer review’, which is to say, fellow disciplinary specialists whose primary interest is to extend research into areas that have already benefitted them, not necessarily the general public. I don’t mean that peer-reviewed research is somehow inimical to the public good – only that peer review is designed mainly to ensure the integrity of knowledge for those who are likely to use it for research purposes. At most it provides a quality control check on knowledge produced for the public good.


Nevertheless, peer review came to dominate state science policy across the world after the Second World War under the rubric of the ‘linear model’, which effectively delegated to scientific elites the responsibility for setting science’s strategic objectives with regard to the public interest. To be sure, over time faith in this process has been eroded, a sign of which is the increasingly interdisciplinary turn in UK research council funding. It has effectively forced researchers to broaden the remit of their enquiries in order to satisfy such larger policy agenda rubrics as ‘well-being’ and ‘sustainability’. A UK ARPA could eliminate the final vestiges of the discipline-based feudalism that Viscount Haldane unwittingly unleashed in the early twentieth century and which set the standard for science policy making around the world after the Second World War with the establishment of the US National Science Foundation (NSF).


In Appendices A and B, I develop these arguments further. But in light of the UK government’s current interest in investing more equitably in science across the entire country, it is worth observing that the original Congressional proposal for the establishment of the NSF was exactly in that spirit. Indeed, as the US historian Daniel Kevles has pointed out, the NSF was originally considered as an extension of FDR’s New Deal, whereby the federal government would invest in university teaching and research only insofar as graduates and researchers agreed to perform a kind of ‘national service’, which amounted to bringing scientific know-how to socially and economically deprived regions of the country. The model for this initiative was the successful ‘land grant universities’ of the late nineteenth century, which contributed significantly to the development of rural regions especially. Interestingly, the New Dealers had instinctively regarded universities as monopoly capitalists of knowledge, largely because most academic research up to that point (the late 1930s) had been funded by large corporations and their affiliate foundations (e.g. Rockefeller, Carnegie, Ford, Sloan). Thus, the NSF was originally conceived as a kind of epistemic trust-busting operation to ensure that the nation’s scientific capital was not concentrated in just a few places.


However, once the Democrats lost control of Congress to the Republicans after the Second World War, the more established academic interests took over, led by MIT Vice-President Vannevar Bush, whose Science: The Endless Frontier influentially made the case for the version of the NSF that exists today, largely on the back of the success of the Manhattan Project in producing the first atomic bomb. However, as I argue in Appendices A and B, it is by no means clear that this was the correct lesson to have drawn. And here a UK ARPA could rectify matters substantially, perhaps recovering something of the New Deal’s vision.





Belfiore, M. (2009). The Department of Mad Scientists: How DARPA Is Remaking Our World. New York: HarperCollins.

Bush, V. (1945). Science: The Endless Frontier. Washington DC: Office of Scientific Research and Development.

Fuller, S. (2000). The Governance of Science. Milton Keynes: Open University Press.

Fuller, S. (2000). Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press.

Fuller, S. (2018). Post-Truth: Knowledge as a Power Game. London: Anthem Press.

Haldane, R. (1921). The Reign of Relativity. Toronto: Macmillan.

Kevles, D. (1977). ‘The National Science Foundation and the debate over postwar research policy, 1942–1945’, Isis 68: 5–26.

Samuelson, P. (1969). ‘Pure theory of public expenditures and taxation’, in J. Margolis and H. Guitton (eds.), Public Economics (New York: Macmillan), pp. 98–123.

Stokes, D. (1997). Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington DC: Brookings Institution Press.




(Originally appeared in Times Higher Education, 28 April 2020)


The recent resignation of Mauro Ferrari as president of the European Research Council has thrown into sharp relief the distinction between ‘basic’ and ‘applied’ research today. It coincides with the start of Parliamentary scrutiny over the role that the UK government’s proposed ‘high risk, high reward’ research agency might play in the nation’s research ecology. All of this has been happening against the backdrop of the COVID-19 pandemic, and the squeeze that it will invariably place on public finances across the world for the foreseeable future. Taken together, these developments create the perfect storm for radically rethinking why taxpayers should be funding research at all.


According to the prevailing science policy mythology, basic research provides the securest route to generating applications of large-scale, long-term public benefit. The myth has been facilitated by a rather flexible conception of ‘research impact’ that has made great play of ‘unforeseen benefits’ over an indefinite period of time. The exact grounding of this myth has varied internationally, but the one with greatest totemic status involved the establishment of the US National Science Foundation (NSF) after the Second World War. It was inspired by MIT Vice-President Vannevar Bush’s The Endless Frontier, which explained the building of the atomic bomb that ended the war in terms of the critical mass of distinguished physicists that had mobilized behind the cause.


However, these researchers did not spontaneously self-organize into what was known as the ‘Manhattan Project’. Rather, the Princeton-based Albert Einstein contacted US President Franklin Roosevelt about a rumoured Nazi atomic bomb project. The US government, in consultation with the scientists, set the parameters of the project, including who was eligible to participate. The scientists then went about the project in an unprecedentedly free way. It resulted in massive cost overruns, relatively little oversight and high levels of uncertainty until the first bomb was successfully detonated in a New Mexico desert. But to what extent is this impressive achievement correctly described as a triumph of ‘basic research left to its own devices’?


Bush and others backing the version of the NSF that Congress passed in 1950 certainly presumed that it was. But more importantly, they presumed that for basic research to be ‘free’, it must be devolved to the peer review processes that normally govern discipline-based academic work. However, the Manhattan Project was neither the product of discipline-based academic work nor the straightforward application of such work. It was a profoundly interdisciplinary project that involved not only physicists but also engineers and medical professionals. It took all concerned way outside their intellectual comfort zones. A more appropriate model for thinking about research of this sort – as well as the proposed ‘high risk, high reward’ UK research agency – is what Donald Stokes, a pioneer of empirical voter studies, called ‘Pasteur’s Quadrant’.


Stokes had developed a 2x2 matrix of relationships between ‘basic’ and ‘applied’ research in the 1990s as part of a prospectus on the possible directions for post-Cold War US science policy. ‘Pasteur’s Quadrant’ – named for its exemplar Louis Pasteur – refers to research driven by ‘applied’ concerns that serve to steer ‘basic’ research. As Stokes rightly saw, the story of Pasteur’s long-term contributions to science – not least in such pandemic-relevant fields as epidemiology and public health – reversed the narrative line of the science policy mythology. Moreover, Pasteur was hardly unique: application-driven research of this kind has also characterised the overall direction of travel for innovation in both science and technology in the twentieth century.


The great private foundations (e.g. Rockefeller) and corporate R&D units (e.g. Bell Labs) had been the main drivers of the signature breakthroughs in molecular biology, behavioural science, neuroscience, as well as information and communication technology, including artificial intelligence research. Of course, the researchers involved were academically well-trained. More importantly, academia was central to the normalization of these breakthroughs into the curriculum so that many more than the original funders could benefit. No one can reasonably deny the university’s centrality to the conversion of these innovations into knowledge as a public good.


However, when it comes to providing an environment for the actual conduct and evaluation of such cutting edge research, the record of universities – and especially of established academic disciplines – has been chequered, to say the least. The complaints of academic innovators about their home turf are legion and largely justified. They go beyond lack of time and funds. Peer review itself routinely conflates judgements about the validity of work on its own terms with judgements made in terms of some larger discipline-based agenda, which in the end may matter only to other academics.


It would be ironic if ‘basic research’ has come to be no more than a euphemism for academic work that can be ‘owned’ by academic disciplines, especially in the context of tightened public funding. In this light, the UK government’s proposed ‘high risk, high reward’ research agency should be seen as a direct challenge to the science policy mythology. Why should we presume that ‘basic research’ of the truly fundamental sort is more likely to come from the agendas of self-appointed ‘basic researchers’ than from the external exigencies of the sort that provide the basis for Pasteur’s Quadrant?





Most histories of science policy point to the establishment of the US National Science Foundation in 1950 as the moment when what is nowadays called ‘basic research’ or ‘pure science’ came to be something that the public was expected to fund. Governments have always taken an interest in scientific research, especially as a means to acquire military and economic advantage on the world stage. The early twentieth century witnessed the establishment of the Kaiser Wilhelm (now Max Planck) Institutes, which forged the first ‘triple helix’ of state-university-industry collaboration. They were designed to embed science more securely in the German national interest. The same was true even of the ‘Haldane Principle’ being promoted at roughly the same time by the idealist philosopher who had served as UK defence minister before the First World War. Haldane proposed that each university should be allotted a pot of money to decide how best to allocate resources for teaching and research. But his primary concern was not ‘pure science’ but devolving decision-making in the national interest to the level at which it could be most effectively made.


But the NSF’s rationale was different. It didn’t start from the perspective of the national interest at all but from that of the academic research community, which was portrayed as best serving the national interest by being left to its own devices. But why would anyone think such a thing? The historical answer is the Manhattan Project. This makeshift gathering of top scientists not only produced the first successful atomic bomb but also an opportunity for MIT Vice-President Vannevar Bush to elevate an already widespread public faith in science into a state ideology. It was a stroke of Machiavellian genius. After all, while the Nazis and the Soviets had loudly harnessed science to their respective causes, neither claimed that the national interest was being led by the science. But this was exactly what Bush proposed – and succeeded in realizing through the NSF, whose iconic status as a science funding agency remains to this day. Among the many downstream effects of this development is that in the current COVID-19 pandemic, democratic political leaders comfortably – albeit with varying degrees of credibility – justify their exercise of unprecedented powers by claiming to be ‘led by the science’.


Of course, Bush himself would not necessarily have approved of today’s rhetorical leveraging of science in public policy-making. Yet, the prospect of science setting the pace of politics was certainly intimated in his famed Science: The Endless Frontier, the work credited with inspiring the vision behind the NSF. In particular, Bush contributed to a narrative that was being spun by his ally Harvard President James Bryant Conant and especially Conant’s protégés, Thomas Kuhn and Robert Merton, who were among the founders of today’s history and sociology of science. According to this tale, the Nazis and the Soviets were destined to fail because they tried to force science to conform to the dictates of policy, resulting in a distorted understanding of reality. These totalitarian regimes were guilty not simply of bad strategy and tactics in advancing their objectives, or even of bad morals in the treatment of their own peoples. They were guilty of the ultimate crime: bad epistemology. In contrast, this narrative suggests, liberal societies are open to whatever the evidence says – with the proviso that what counts as evidence has been vetted by the relevant academic experts.


Thus, ‘Always more science, but never used before its time’ became a mantra that shored up the public’s commitment to scientific autonomy, while constraining policymakers’ sense of what is realizable, thereby safeguarding against the excesses and atrocities of totalitarian regimes. The NSF should be understood as a monument to this mentality. It involved a peculiar understanding of how liberal societies relate to science. Meanwhile, another understanding was brewing on the other side of the Atlantic. While the US Congress was debating the foundations of the NSF, Karl Popper had begun to promote at the London School of Economics an alternative idea of the science-society relationship. Science for him was less about establishing a disciplined grip on reality than recognizing that our grip may not be as secure as we thought. It was more about Galileo’s challenge of Church dogma than Newton’s pronouncement of the laws of nature. Popper regarded science as the intellectual cradle of liberalism, the exemplar of the ‘open society’, a society open to change its collective mind with relative ease as new evidence comes to light. Here ‘evidence’ means an outcome or event that seriously undermines prior expectations, including those based on learned prejudice.


At first, Popper’s vision looked like a highly romanticized version of Cold War liberal ideology. However, once the Soviet Union launched Sputnik, the first artificial satellite, in 1957, it became clear that he had captured something significant about the nature of science that the NSF’s ‘pure science’ orientation had overlooked. Indeed, when President Eisenhower’s Chief of Staff Sherman Adams canvassed the opinion of scientific leaders on the Sputnik launch, they reassured him that the US had already mastered the basic research behind the satellite’s construction. However, the American media did not let the matter rest on that sanguine note. They turned Popperian. They suggested that the Soviets were now capable of a level of global surveillance that threatened US national security. This view was also shared by Eisenhower’s Defence Secretary, Neil McElroy, who realized that the Soviets were envisaging outer space in a radically different way from the academic physicists, who understood Sputnik as simply a glorified application of their science.


This led McElroy to propose an ‘Advanced Research Projects Agency’ for the Defence Department – DARPA – focused on framing the nation’s future scientific and technological needs. It would not be about, say, improving current missile technology but developing the ‘next generation’ of warfare before it is strictly needed. It would be less about winning the current game and more about setting the rules for the next game. This involved a different mindset toward science, one that was arguably truer to the practice of the Manhattan Project than the version of the NSF successfully promoted by Bush and his colleagues. The underlying principle was that the ultimate test of scientific knowledge is its capacity to reorganize around an unforeseen development, thereby converting a liability into a virtue. After all, the US did not start trying to build an atomic bomb until Einstein tipped off FDR that the Nazis might be already heading down that path.


This is another way to think about learning from error, where ‘error’ means being blindsided. The US failed to launch the first artificial space satellite because it had yet to realize that the Soviets had started a ‘space race’. The US succeeded at developing the first atomic bomb only because it threw more resources at it than the Nazis who came up with the idea. In both cases, the US failed to name the game that it was playing with its opponents, even though it went on to win both games. DARPA was designed to prevent that from ever happening again. And by that standard, DARPA has been a sterling success. The internet, virtual reality and drones are among the many products of DARPA-based research that have changed the landscape of warfare – and much else, including the conduct of science itself. In that case, might DARPA not have set a better precedent for science policy than the NSF in the wake of the Manhattan Project?


This question is not merely of historical interest – at least as far as Europe, and especially the UK, is concerned. Earlier this year, Mauro Ferrari loudly resigned as head of the European Research Council – the European Union’s answer to the NSF – on the grounds that it was not fit for purpose in tackling the COVID-19 pandemic. He complained about the ERC’s ‘lack of responsiveness’, by which he meant its failure to see the pandemic as an opportunity for genuine scientific innovation. It is perhaps no accident that Ferrari was a pioneer in nanomedicine, a field that emerged in the early 2000s as part of a concerted policy effort on both sides of the Atlantic to harness various sciences in an ‘enhancement’ agenda designed to enable people to lead longer, healthier and more productive lives. Indeed, the landmark 2002 ‘converging technologies’ report of Mihail Roco and William Sims Bainbridge made it seem for a while that the NSF itself might be heading in a more DARPA-like direction.


But still more to the point, the UK government has earmarked £800 million over five years for what it bills as a ‘high risk, high reward’ research funding agency explicitly modelled on DARPA, but without defence as the primary focus. The proposal could not come at a more challenging time. The combination of Brexit and the COVID-19 pandemic places an enormous strain on the public purse. Yet Boris Johnson led the Tories to a thumping parliamentary majority last December on a platform that the UK should set its own terms on the world stage. This includes taking science seriously but not deferring to the received word of experts on what is and is not possible. The pandemic has perhaps unwittingly showcased this point, since when confronted with a new, potentially lethal virus there are no ‘experts’ in any strict sense. Many different bodies of knowledge shed light on what remains an inherently amorphous situation.


The trick then is to organize this knowledge for the most enduring effect – that is, not only beating the virus into retreat but also providing a new field of inquiry upon which later scientists and policymakers might build and draw. Indeed, this is exactly how epidemiology emerged in response to various microbial threats to industry, the military and public health in late nineteenth century Europe.


A quarter-century ago, US political scientist Donald Stokes dubbed this kind of science, which is literally the product of a state of emergency, ‘Pasteur’s Quadrant’, named for the French founder of epidemiology, Louis Pasteur. Stokes would turn Vannevar Bush on his head. Whereas Bush believed that a science must reach a state of maturity on its own terms before it can fruitfully tackle real-world problems, Stokes suggested that signature scientific breakthroughs come from real-world problems challenging several disciplines at once to overcome the self-imposed limitations of their inquiries. The difference in raison d’être for the NSF and DARPA could not be cast more starkly. To be sure, the two agencies have now coexisted in the US for more than sixty years. However, the current political and economic climate may turn their science policy horizons into direct competitors – if not in the US, then certainly in Europe, and especially the UK. And DARPA may end up sitting at the end of a tunnel that Vannevar Bush has had us believe for so long was an ‘endless frontier’.



AUTHOR’S NOTE: Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology at the University of Warwick. Originally trained in history, philosophy and sociology of science at Columbia, Cambridge and Pittsburgh, Fuller is best known for his foundational work in the field of ‘social epistemology’, which is the name of a quarterly journal that he founded in 1987 as well as the first of his twenty-five books. In recent years, his research has been concerned with the future of humanity in light of ‘trans-‘ and ‘post-‘ human scientific and cultural trends, as well as the future of the university as an institution. Fuller’s most recent books are Academic Caesar: University Leadership is Hard (Sage 2016), Post-Truth: Knowledge as a Power Game (Anthem 2018) and Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Era (Schwabe 2019). His next book, The Player’s Guide to the Post Truth Condition: The Name of the Game, will be published by Anthem in Autumn 2020. Fuller’s works have been translated into thirty languages. He was awarded a D.Litt. by the University of Warwick in 2007 for sustained lifelong contributions to scholarship. He is also a Fellow of the Royal Society of Arts, the UK Academy of Social Sciences, and the European Academy of Sciences and Arts.


Respectfully submitted,

Steve Fuller


(28 September 2020)