Written evidence submitted by Professor Robert Evans (COM0039)

Authorship

1)     This submission is written by Professor Robert Evans. The ideas and recommendations listed below arise from discussions with colleagues at the Centre for the Study of Knowledge, Expertise and Science at Cardiff University and are endorsed by several of them. This submission is, however, written in an individual capacity.

Executive Summary

2)     This submission is particularly concerned with ‘the extent to which public dialogue and consultation is being effectively used’ and ‘the strategies and actions being taken by Government to foster public engagement and trust of science more widely.’ It argues that:

a)      There is academic research on the nature of scientific knowledge that should inform how scientific work is used in decision-making

b)     It is important to distinguish between seeking expert advice, promoting public engagement and measuring popular opinion.

c)      Where measures of popular opinion are needed, priority should be given to obtaining representative samples

d)     If public engagement events are to generate informed opinion, they should be based on a dialogue between experts and citizens

e)      Where expert advice is sought, policy-makers and citizens need to know the content and strength of any consensus that exists within the expert community

f)       The assessment of expert consensus is a matter for both practitioners and social scientists, with social science having a particularly important role to play in identifying non-traditional forms of expertise

g)     Expert advice should not determine policy and nor should it be used to avoid or disguise political choices; any public justification of policy must accurately represent the degree of expert consensus that exists at that time.

h)     Government bodies should lead by example and ensure that, where they do seek and use expert advice, they do so in accordance with these principles.

Introduction

3)     This submission is primarily concerned with the role of Government in fostering public understanding of and engagement with science. It draws on academic research from the field of Science and Technology Studies (STS) in which the UK is a world leader.[1]

4)     The submission is divided into three sections:

a)      Theory and Research: sets out the main academic ideas that inform the recommendations

b)     Diagnosis and Evaluation: uses examples to illustrate key tensions and problems

c)      Recommendations: implications of current research for practice of Select Committees and others seeking expert advice

Theory and Research

5)     Work in STS can be divided into two main approaches. The first stresses the contingent nature of scientific knowledge and the importance of recognising that public concerns about science are not necessarily driven by ignorance. This approach typically challenges any rigid demarcation between science and politics and argues for more inclusive and/or participatory forms of decision-making. Evidence for the influence of this approach can be found in the Science in Society report published by the House of Lords Select Committee on Science and Technology in 2000 and in the GM Nation? Debate held in the summer of 2003.[2]

6)     The second approach – Studies of Expertise and Experience (SEE) – is more recent but is gaining in interest and influence, with its founding paper now cited over 1600 times.[3] Although sharing a similar starting point to the other approach, SEE reaches very different conclusions about the relationship between science and policy. In particular, SEE argues that scientific expertise, like all expertise, is created in and through the work of social groups. Crucially, however, those outside a particular group have not shared in its experiences and so lack the expertise needed to critique its practices. Although not all scientists recognise it, this limitation applies as much to the way scientists evaluate others as it does to the evaluation of science.

7)     SEE can be distinguished from the STS that informed the GM Nation? Debate in two ways. The first is that meaningful public participation in technological decision-making requires that the questions put to citizens match their ability to answer them. Where the questions are related to political preferences or other everyday matters, then widespread participation poses no problems and is preferable to expert debate. If this is not the case, and some degree of specialist knowledge is needed before an opinion can be offered, citizens need to be supported in developing that knowledge. SEE argues that this support should only be sought from individuals or groups with the relevant expertise and that this may mean recruiting experts from outside the scientific community.[4]

8)     The second way in which SEE differs from other approaches to STS is in insisting that the ‘political’ elements of technological decision-making must be kept separate from the ‘technical’ elements. In this context, the ‘technical’ element concerns establishing what the relevant expert community believes is known with certainty (e.g. it is now highly likely that human activity is causing climate change) whilst the ‘political’ element concerns how to act as a result of this knowledge (e.g. what is the appropriate balance between adaptation and mitigation).[5] This is obviously difficult to achieve in practice, as many of the issues that animate the public sphere will include aspects of each, but the aspiration is vitally important as it provides the standard against which practice can be judged.[6]

9)     None of this makes SEE an argument for technocracy. Although political decisions should be informed by the best available expert advice, SEE argues that policy-makers must retain the right to discount expert advice and choose a different course of action. The only constraint SEE imposes is that policy-makers must not misrepresent the expert advice they have received. In other words, policy-makers are free to reject the consensus view of experts but, if they do, citizens must know that this is what has happened.[7]

Problems of Science in Policy

10) The emergence of SEE can be traced to the problems that arose following the recognition that there are pockets of highly relevant but non-scientific expertise in the wider society. These can be summarised as follows:

a)      STS and the Problem of Legitimacy: These problems typically occur when policy-makers or other decision-makers give too much weight to formal scientific knowledge and fail to recognise its limits or assumptions. The solution is to restore legitimacy by recognising non-scientific forms of expertise. Standard examples include environmental or medical issues, where the detailed but context-specific knowledge of workers, patients or local citizens is dismissed as anecdotal by formally accredited experts and given less weight than data generated in laboratory or similar settings. Many of the public controversies around technological decision-making discussed in the academic literature have been of this form, with the iconic examples being the treatment of sheep farmers following the Chernobyl accident and the controversy over the risks posed by 2,4,5-T.[8] A more recent example is the treatment of ‘fenceline communities’ that live in close proximity to petrochemical plants in the US.[9]

b)     SEE and the Problem of Extension: These problems have been less common, but this might be changing. Here the problem is that, once the door to increased participation has been opened in order to solve the problem of legitimacy, it is not clear how to close it again. As a result, the idea of expertise is gradually eroded as all opinions, regardless of who holds them and on what basis, are seen to be equally valid.[10] Perhaps the best-known example of this problem in the UK is the controversy over the MMR vaccine, in which anti-vaccine advocacy groups were able to sustain a controversy in the public domain even though the medical community was unanimous in saying there was no scientific evidence to support their claims.[11]

11) The dilemma of decision-making when one is not an expert oneself – which is almost always the case – is how to know whom to trust when different groups are saying different things. STS and, in particular, SEE can help with this in two distinct ways:

a)      The description of science provided by STS shows that some degree of uncertainty is a characteristic of all but the most settled and uncontroversial sciences. Controversy is therefore quite normal and not a sign of pathology. The desire for ‘clear and simple’ statements that mask this reality can lead to a public misrepresentation of science that fuels problems of legitimacy. In contrast, accepting that science is often uncertain would lead to more realistic expectations of what science can and cannot deliver.

b)     SEE explains why, even if there is a controversy, it is not necessary to treat all opinions as equal. Specifically, the expertise and experience of those claiming that a particular piece of science is uncertain needs to be considered before giving credence to their views. It is only those who are actively engaged in the domain – typically scientists but also practitioners, patients and others with specialist knowledge – who have the experience needed to make informed technical judgements.[12]

12) Absolute safety and zero risk are not realistic goals; what is needed is an assessment of the consensus that exists within the expert community and this judgement can only be made by those with the expertise needed to understand the strengths and weaknesses of the available evidence. The challenge for policy makers is therefore to find mechanisms through which the content and, more importantly, the degree of expert consensus can be assessed and made visible to the public.

Recommendations

13) As a major consumer and commissioner of expert advice, the Government has a responsibility to lead by example. This means seeking, using and presenting expert advice – be it scientific or otherwise – in a way that reflects contemporary social science research on the nature of science and expertise. In practical terms, this means that Government bodies should distinguish carefully between processes that seek to gain expert advice, encourage public engagement and measure popular opinion. All three are perfectly legitimate objectives but require different methods and serve different purposes.

14) Where popular opinion is needed, obtaining a representative sample is more important than obtaining a large response. This is illustrated by the GM Nation? Debate, where the opinion poll conducted by the evaluation team provides a more accurate and reliable measure of public opinion than the self-completion surveys provided by participants.[13]

15) Where public engagement is required, these events are best run as dialogues. Where this does not happen, participants often – and perhaps rightly – see the event as being more concerned with the manipulation of public opinion than as an attempt to listen to it.[14] In some cases, it will be necessary to build in time for participants to learn about the science or technology in question, and there are many different participatory or deliberative methods that can facilitate this.[15]

16) Where expert advice is needed, it is important to distinguish between the ‘technical’ and ‘political’ elements of the policy problem, as expert advice is only needed to resolve technical concerns; political issues require political solutions.

17) Where policy decisions are a matter of public concern, it is almost inevitable that experts will disagree. Neither citizens nor policy-makers can resolve this debate and it is most unlikely that the experts will reach agreement either. What is needed, therefore, is an understanding of the social groups claiming expertise and the extent to which their claims should be regarded as credible.

18) The assessment of expertise is a matter for both social scientists and practitioners: practitioners are needed to evaluate and critique each other’s claims and social scientists, particularly those specialising in STS, are needed to help define the pool of relevant experts. Experience shows that without this input from social science, natural scientists may fail to recognise non-traditional forms of expertise.[16]

19) The outcome of this assessment should be a summary of the consensus within the expert community. This assessment should include both the content and strength of any consensus. The intuition is that a strong consensus provides a sounder basis for policy making than a weak consensus.

20) Technocracy must be avoided. This means that even a strong consensus cannot determine policy. There is, however, one way in which the assessment of consensus should constrain policy-makers: policy-makers must not misrepresent the consensus in their public statements. Thus, where policy-makers reject a strong consensus, or build a strong policy on a marginal view, they must say that this is what they are doing.[17]


April 2016


[1] See e.g. REF 2014 Panel C report which states that ‘Substantive areas in which large numbers of high quality outputs were submitted included: race and ethnicity, with especially interesting work on migration and borders: health and biomedicine; and social studies of science’ (para 11, p. 92). Available from http://www.ref.ac.uk/panels/paneloverviewreports/

[2] House of Lords, “Science and Society: Third Report of the House of Lords Select Committee on Science and Technology” (London: Stationery Office, February 23, 2000), http://www.publications.parliament.uk/pa/ld199900/ldselect/ldsctech/38/3801.htm; Tom Horlick-Jones et al., The GM Debate: Risk, Politics and Public Engagement (Abingdon, UK; New York, NY: Routledge, 2007).

[3] Harry M Collins and Robert Evans, “The Third Wave of Science Studies: Studies of Expertise and Experience,” Social Studies of Science 32, no. 2 (April 1, 2002): 235–96, doi:10.1177/0306312702032002003.

[4] Robert Evans and Alexandra Plows, “Listening Without Prejudice?: Re-Discovering the Value of the Disinterested Citizen,” Social Studies of Science 37, no. 6 (December 1, 2007): 827–53, doi:10.1177/0306312707076602.

[5] Harry M Collins, Martin Weinel, and Robert Evans, “The Politics and Policy of the Third Wave: New Technologies and Society,” Critical Policy Studies 4, no. 2 (July 28, 2010): 185–201, doi:10.1080/19460171.2010.490642.

[6] Harry M Collins, “In Praise of Futile Gestures: How Scientific Is the Sociology of Scientific Knowledge?,” Social Studies of Science 26, no. 2 (May 1, 1996): 229–44, doi:10.1177/030631296026002002.

[7] Martin Weinel, “Primary Source Knowledge and Technical Decision-Making: Mbeki and the AZT Debate,” Studies in History and Philosophy of Science Part A 38, no. 4 (December 2007): 748–60, doi:10.1016/j.shpsa.2007.09.010.

[8] Brian Wynne, “Misunderstood Misunderstanding: Social Identities and Public Uptake of Science,” Public Understanding of Science 1, no. 3 (July 1, 1992): 281–304, doi:10.1088/0963-6625/1/3/004; Alan Irwin, Citizen Science: A Study of People, Expertise, and Sustainable Development, Environment and Society (London ; New York: Routledge, 1995).

[9] Gwen Ottinger, Refining Expertise: How Responsible Engineers Subvert Environmental Justice Challenges (New York: New York University Press, 2013).

[10] Collins and Evans, “The Third Wave of Science Studies”; Harry M Collins and Robert Evans, Rethinking Expertise (Chicago: University of Chicago Press, 2007).

[11] Tammy Boyce, Health, Risk and News: The MMR Vaccine and the Media, Media and Culture, v. 9 (New York: Peter Lang, 2007).

[12] For a similar view see BBC Trust, “BBC Trust Review of Impartiality and Accuracy of the BBC’s Coverage of Science” (BBC Trust, n.d.), http://www.bbc.co.uk/bbctrust/our_work/editorial_standards/impartiality/science_impartiality.html.

[13] Nick F. Pidgeon et al., “Using Surveys in Public Participation Processes for Risk Decision Making: The Case of the 2003 British GM Nation? Public Debate,” Risk Analysis 25, no. 2 (April 2005): 467–79, doi:10.1111/j.1539-6924.2005.00603.x.

[14] Rob Hagendijk and Alan Irwin, “Public Deliberation and Governance: Engaging with Science and Technology in Contemporary Europe,” Minerva 44, no. 2 (June 2006): 167–84, doi:10.1007/s11024-006-0012-x.

[15] For reviews see Julia Abelson et al., “Deliberations about Deliberative Methods: Issues in the Design and Evaluation of Public Participation Processes,” Social Science & Medicine (1982) 57, no. 2 (July 2003): 239–51; John Church et al., “Citizen Participation in Health Decision-Making: Past Experience and Future Prospects,” Journal of Public Health Policy 23, no. 1 (2002): 12–32; G. Rowe, “A Typology of Public Engagement Mechanisms,” Science, Technology & Human Values 30, no. 2 (April 1, 2005): 251–90, doi:10.1177/0162243904271724.

[16] Alan Irwin and Brian Wynne, eds., Misunderstanding Science? The Public Reconstruction of Science and Technology, 1st paperback ed. (Cambridge: Cambridge University Press, 2003).

[17] Weinel, “Primary Source Knowledge and Technical Decision-Making”; Collins, Weinel, and Evans, “The Politics and Policy of the Third Wave.”