Written Evidence Submitted by Professor Emma Marsden and Dr Cylcia Bolibaugh, University of York

(RRE0017)

 

We work in the area of applied linguistics, with a focus on the learning of languages (second, foreign, or additional languages learned after the first). This is a multidisciplinary field, sitting at the intersection of the social sciences (education), arts & humanities (linguistics, languages) and the learning sciences (psychology, including neuroscience).

 

We write in our capacity as Director (Emma Marsden) and Co-Director (Cylcia Bolibaugh) of two open research and impact initiatives: IRIS (Instruments and materials for Research Into Second languages) and OASIS (Open Accessible Summaries In Language Studies).

 

IRIS (https://www.iris-database.org) is a unique, searchable and freely accessible repository of materials for research into how additional (second, third, etc.) languages are learned, taught, and used. Launched in 2011, IRIS aims to improve the openness, replicability and ease of conducting research by enhancing the field's capacity to scrutinise and evaluate research quality. It also aims to open up the research process to teachers and teacher educators.

 

OASIS (https://oasis-database.org/) is an open (CC BY-NC-SA) database of non-technical summaries of high-quality research. Launched in 2018, OASIS aims to bridge the gap between science and practice. The project seeks to change dissemination practices radically by sustaining a culture in which accessible summaries of research are produced systematically.

 

We focus our response on the following topics from the committee’s request for evidence, from our perspective within the field of applied linguistics.

 

The breadth of the reproducibility crisis and what research areas it is most prevalent in:

Prevalence of replication studies

Replication studies play a central role in the accumulation of evidence for or against a hypothesis. Within applied linguistics, there are still relatively few replication studies published:

 

- A large synthesis of replication research in our field (Marsden, Morgan-Short, Thompson, & Abugaber, 2018) found that approximately 1 in 300 published articles was a self-labelled replication. We found no close or exact replications and no reproductions of analyses.

 

Availability of data elicitation materials and study protocols

The availability of data elicitation materials and study protocols underpins the development of systematic lines of research. When materials are available, researchers can evaluate the comparability of constructs and their operationalisations across studies. The current lack of transparency regarding instrumentation and protocols poses a serious threat to the quality of replication efforts:

- A synthesis of replication studies in second language learning (Marsden, Morgan-Short, Thompson, & Abugaber, 2018) found that only 3 of the 67 original studies that were replicated had provided all of their materials.

- In a methodological synthesis of the use of self-paced reading in studies investigating adult bilingual participants, Marsden, Thompson, & Plonsky (2018) found that only 4% of 71 eligible studies had full materials available, and 77% gave just one brief example of stimuli.

- A survey of instrument availability across three journals in second language research found that only 17% of instruments were available between 2009 and 2013 (Derrick, 2016).

- Only 36% of judgement tests (a very common elicitation technique in the language sciences) were accessible, whether in primary reports (22%, behind paywalls), on IRIS, our open repository (8%), and/or elsewhere, such as an author’s website (3%); the remaining 64% were not available. There was no evidence of real improvement in availability over time: 1970s = 33%, 1980s = 57%, 1990s = 46%, 2000s = 30%, 2010s = 35% (Plonsky, Marsden, Crowther, Gass, & Spinner, 2020).

- A lack of reporting of reliability inhibits the use of existing instruments and materials for replication, as their reliability is not known. For example, Plonsky et al. (2020) found that only 16% of studies reported instrument reliability.

- Marsden (2019) found that just 13% of materials that could have been made available were actually openly available.

 

Availability of data and code underpinning published findings

Sharing of data and code underpins computational reproducibility and is necessary both for the verification of individual studies and for conducting meta-analyses. Failure to share data results in a cumulative loss of research value, as findings cannot be incorporated into research syntheses and meta-analyses.

- Within applied linguistics, meta-analyses routinely have to exclude large numbers of studies that did not report complete information and/or failed to provide the underpinning data (see Figure 1).

 

Figure 1. Reports from L2 meta-analyses that counted the number of studies excluded from the analysis due to missing data (expressed as a percentage of the included sample). From Larson-Hall & Plonsky (2015).

 

- Obtaining data from researchers is notoriously difficult, due to email addresses that no longer work and researchers not archiving their data in sustainable and usable formats. For example, of the 255 authors contacted by Plonsky, Egbert, & Laflair (2015), only 36 (14%) replied with the requested datasets, of which only 25 were usable.

 

Open access publications

Our field is very slow to make its work openly available, despite the ‘deals’ with publishers and UKRI funding to universities to support open access. For example, Alferink, Andringa, & Marsden (2021) found that, in the three most recent issues (before April 2021) of the five major journals in the field of second language learning, 79% of the articles were behind paywalls. Just 25 of 117 articles were open access (from 3 Wiley journals, 1 Cambridge University Press journal, and 1 Sage journal).

 

 

The issues in academia that have led to the reproducibility crisis:

 

Current practices for small-scale research studies carried out by taught students (undergraduate and postgraduate) as part of degree programmes contribute to a climate in which irreproducible studies are accepted as normal practice. Student dissertation projects are underpowered (due to lack of time and resources) and frequently use research instruments that have not been validated. A far better use of student time, and a more fruitful learning experience, would be for students to engage in supervised replications or collaborative research projects. Small-scale examples of this type of model (e.g. Button, 2018) should be embedded within programmes that contain an empirical dissertation or thesis.

 

 

The role of the following in addressing the reproducibility crisis:

 

       research funders, including public funding bodies

 

Research funders have a critical role in sustaining the field-specific open digital infrastructures that are needed to support research reproducibility.

 

In our review of the breadth of the reproducibility crisis within applied linguistics, we emphasised the necessity of full disclosure of data and code, as well as full provision of experimental materials and protocols.

 

In order for a research field to benefit fully from access to data, protocols, instruments and materials, these outputs must be available within domain-specific repositories such as IRIS. Domain-specific materials repositories increase the comparability of sources of data; for example, once uploaded to IRIS, materials are associated with rich, searchable metadata that enable meta-research on constructs and methods. While broader platforms (such as the OSF) are useful for facilitating working practices and workflows, and larger repositories such as the UKDS provide a host of useful services, cumulative and synthetic research depends on searchable access to a comprehensive database of materials within a field. Importantly, IRIS and other field-specific repositories fulfil this function while remaining independent of any funder, journal, or country.

 

Open digital infrastructures like IRIS and OASIS start at the grass-roots level because they are needed by a field, but if there is no sustainable plan for supporting them, they will inevitably fold. While the independence of field-specific repositories underpins their value, it also leaves open the question of who should support them. National funders like UKRI are uniquely positioned both to provide the responsive funding needed to sustain these initiatives and to signal their importance to the broader research community (including the higher education institutions that usually host them).

 

 

       individual researchers

Incentives for individual researchers are currently misaligned in several ways: (1) investing the time needed to develop and apply open research practices is not rewarded and incurs opportunity costs; (2) carrying out “team science”, e.g. group publishing for large multi-site studies that ensure replicability, is similarly undervalued; and (3) publishing replication studies is discouraged by a culture that values “ground-breaking” work and novelty over the cumulative and synthetic development of lines of research.

 

While these incentives operate at the individual level, the solutions are situated at the institutional and funding levels: for example, promotion processes need to pay systematic and ‘real’ attention to open practices, and viable career trajectories are needed for specialists (e.g. experimental officers, research software engineers, research data analysts).

 

       publishers

There needs to be a concerted effort to reduce the power of the large publishing houses over successful (high-impact) journals. The current publishing infrastructure has created a vicious cycle, which needs to be broken, whereby high-impact journals discourage publications (such as replications) that they fear may lower their impact factor. Initiatives such as the White Rose publishing house need to be supported; this support may have to come from central government, via the institutions (redirected from library subscriptions).

 

What policies or schemes could have a positive impact on academia’s approach to reproducible research:

- Additional specialised support within universities and research institutes is needed to help academics adopt open research practices. Roles such as data stewards and research software engineers are now essential to making research outputs other than manuscripts open, accessible and reusable. It is not realistic to expect individual academics to master the skills underpinning each of these domains of expertise, and careers in specialised research support need to be made viable within HE structures.

- Funding based on a Registered Report model, whereby the review process focuses on the rationale, design, methods and open practices, should become standard. Once funded, data collection can proceed, and the results must be published regardless of whether the findings are ‘statistically significant’.

- A number of measures can be taken to decrease the rewards for ‘novelty’ and to support replication research: examples include funding replications, recognising them in promotion, and supporting journals that publish replication research. For instance, funders could facilitate the publication of the research that they fund; current examples of this model include the Royal Society (through Royal Society Open Science) and the ERC (through Open Research Europe).

- Open access to research outputs can be improved by supporting society-led publishing efforts and non-profit publication platforms. Examples include Ubiquity Press, an affordable, high-quality open access publisher of peer-reviewed academic journals, books and data, which provides the infrastructure and services that enable university and society presses to run sustainably and successfully, and the Open Library of Humanities (https://www.openlibhums.org/).

How establishing a national committee on research integrity under UKRI could impact the reproducibility crisis:

A committee on research integrity should focus its efforts on ensuring the soundness of the research record and on creating the social conditions that allow all the good and critical functions of science to flourish. It should do so by seeking to influence the strategic decision-making of key stakeholders, namely by:

- making recommendations on funding (as above);

- developing recommendations for rewarding Open Science within university infrastructures, e.g. support for open initiatives and rewards for open practices;

- helping to break the cycle between high-impact publications and the lack of support for publishing replications, by supporting open access initiatives that help researchers transition away from paywalled publications, and by publishing guidelines that incentivise researchers to carry out replications and publishers to publish them.

 

REFERENCES

Alferink, I., Andringa, S., & Marsden, E. (2021, August 15-20). Using OASIS summaries to facilitate a dialogue between research and pedagogy [Conference presentation]. AILA World Congress, Groningen, The Netherlands.

Button, K. (2018). Reboot undergraduate courses for reproducibility. Nature, 561(7723), 287. https://doi.org/10.1038/d41586-018-06692-8

Derrick, D. J. (2016). Instrument reporting practices in second language research. TESOL Quarterly, 50, 132–153. https://doi.org/10.1002/tesq.217

Larson-Hall, J., & Plonsky, L. (2015). Reporting and interpreting quantitative research findings: What gets reported and recommendations for the field. Language Learning, 65(S1), 127–159. https://doi.org/10.1111/lang.12115

Marsden, E. (2019, March 8-13). Open science and applied linguistics: Where are we and where are we heading? [Invited plenary]. American Association for Applied Linguistics 2019 Conference, Atlanta, United States.

Marsden, E., Morgan-Short, K., Thompson, S., & Abugaber, D. (2018). Replication in second language research: Narrative and systematic reviews and recommendations for the field. Language Learning, 68, 321–391. https://doi.org/10.1111/lang.12286

Marsden, E., Thompson, S., & Plonsky, L. (2018). A methodological synthesis of self-paced reading in second language research. Applied Psycholinguistics, 39, 861–904. https://doi.org/10.1017/S0142716418000036

Plonsky, L., Egbert, J., & Laflair, G. T. (2015). Bootstrapping in applied linguistics: Assessing its potential using shared data. Applied Linguistics, 36, 591–610. https://doi.org/10.1093/applin/amu001

Plonsky, L., Marsden, E., Crowther, D., Gass, S. M., & Spinner, P. (2020). A methodological synthesis and meta-analysis of judgment tasks in second language research. Second Language Research, 36(4), 583–621. https://doi.org/10.1177/0267658319828413

 

 

(September 2021)