Written evidence submitted by Lucy Hitcham and Dr Richard James

We are writing in response to the call for evidence for this Select Committee inquiry on the effects of screen time on education and wellbeing. We are academic researchers who have studied the impact of technology on mental health and wellbeing in a range of contexts. Our recent research has focused on technology use and screen time on smartphones, and their impact on mental health and smartphone addiction.

As the Chair’s comment highlights, specific risks can emerge from certain interactions with technology. The new affordances that accompany novel technologies have the potential to create new harms, or to exacerbate existing ones, for people who are vulnerable to harm.

However, we are concerned that such harms have been subsumed within the broader category of ‘screen time’. We believe that this framing is fundamentally unhelpful and detracts from developing policy that will be effective at tackling some of the serious harms that technology has the potential to facilitate.

Scope of response

The evidence we present in reply to the call for evidence is directly focused on the second question:

What is the current understanding of how screen time can support or impact children’s wellbeing and mental health, including the use of social media?   

Although the evidence we present is not directly focused on it, it also has, by extension, key implications for the first question:

What is the current understanding of how screen time can support and impact children’s development and educational outcomes, including the effect on concentration and behaviour?   

Throughout our response to this consultation, the key message we emphasise is that there are insurmountable problems with the concept of ‘screen time’, and with how it is measured, that mean it is simply not useful as a target for intervention. The existing research base has significant limitations that prevent the development of effective policy instruments. As such, we must conclude that the current understanding of these impacts, especially where they are negative, is deeply inadequate. Instead, policy should focus on specific activities that may be harmful, unhealthy or dangerous, rather than on technology use in general.

 

Response to call for evidence

The response to the call broadly falls into three topics. Points 1-5 outline the problems with screen time as a concept and how it is measured. These issues lay the groundwork for points 6-8, which demonstrate how these problems limit our understanding of the consequences of screen use. Finally, in points 9-11, the implications and recommendations from these are discussed.

  1. Screen time is not well defined. Researchers in the field have described defining screen time as ‘conceptual chaos’ and ‘mayhem’ (1). It is not difficult to see why. There is agreement that screen time refers to the volume of technology use, but there is disagreement on fundamental issues: whether certain types of device are included, how those devices are used (e.g. sedentary versus non-sedentary use), and which specific types of screen use count. As a result, it has been claimed that measures of technology use and its impact form an amorphous, “fuzzy” construct (2).
     
  2. Most research does not use valid measures of screen time. We recently reviewed how technology use is measured in the smartphone addiction literature (3). We examined 1305 studies, 74% of which sampled from school (<18) and student (~18-21) populations. We found that just 10% of studies measuring smartphone use collected data logged from users’ phones (i.e. objectively measured screen time); the remaining 90% relied on self-report. Despite logging tools now being embedded in operating systems (e.g. Screen Time, Digital Wellbeing), the percentage of studies using logged data is falling over time. While smartphone addiction forms only one part of the wider knowledge base about the impact of screen time, this problem is common across research into other impacts as well (4, 5).
     
  3. There is little consensus on the measurement of screen time. In our review (3), we found significant variation in how participants are asked to report their screen time, and in the range of devices covered by screen time assessments. Studies used a wide range of response categories for assessing screen use: for instance, the lowest category of use varied from less than 30 minutes, to less than an hour, to less than two hours, or in one case less than five hours. Similar diversity occurs across other measures (e.g. types of screen time, number of times devices are picked up), which is likely to bias how people respond and how screen time measures relate to mental health and wellbeing. In other cases, addiction scales were used as indices of screen time. Together, these findings highlight the lack of a ‘gold standard’ measure of screen time.
     
  4. Common measures of screen time are inaccurate. Self-report measures of screen time have been widely shown to be inaccurate. Across multiple forms of screen use, such as social media (5), smartphone use (4), and internet activity (6), self-reports of screen time correlate poorly with actual usage. Most notably, a prominent meta-analysis by Parry and colleagues (4) found that the correlation between self-reported and logged measures of screen time was 0.38. Given that these estimates are supposed to be measuring the same behaviour, a correlation of this size (equivalent to the two measures sharing only around 14% of their variance) suggests that self-reports, and consequently most research, are not accurately measuring screen time.
     
  5. Measures of screen time are biased as well as inaccurate. Studies using logged data demonstrate not only that people are inaccurate when estimating their own screen time, but also that this error is not random: there is increasing evidence that it is shaped by factors such as the device or behaviour being assessed (6), the type of question asked (5), and the characteristics of the group being asked to estimate their screen time (7).
     
  6. These biases affect the relationship between screen time, mental health and wellbeing. Studies that have used the best available practices (e.g. logged screen use, pre-registered hypotheses, specifying the impact of different analytical choices) have found that the impact of screen time on mental health and wellbeing is much smaller than that reported by studies that have not (8). In our own research (9), we found that some of the impacts of smartphone use did not replicate when we re-tested the same sample just two weeks later.
     
  7. Longitudinal impacts of screen time are limited. In studies that have tracked participants over longer periods (e.g. months or years), evidence from recent meta-analyses suggests the impact of screen time is very small (10), and observed only in relation to depression. The same review also highlighted a lack of longitudinal research into other mental health conditions.
     
  8. These impacts may not be caused by screen time. Our most recent research has aimed to understand how the biases outlined in points 4 and 5 affect the relationship between screen use and mental health outcomes, such as depression. In this work (11), we found that the greatest mental health impacts (in depression) were observed in people who overestimated their phone use. This suggests that the measurement biases in points 4 and 5 may determine the link between screen time and mental health. There is limited evidence that this is due to reverse causality (10); rather, it suggests that a common factor or bias affects both screen time estimates and measures of wellbeing, artificially inflating the link between screen time and harm.
     
  9. Intervening on screen time alone is unlikely to be effective. As noted in points 6-8, when more rigorous methodologies are used, the effect of screen time on mental health and wellbeing is either reduced or not observed. Therefore, broad practices that focus on limiting screen time are unlikely to lead to significant improvements in wellbeing.

    We also note that models such as ‘digital wellbeing’ (12) have been proposed as alternatives to screen time, capturing both the positive and negative impacts of technology use. Importantly, these models integrate a wider understanding that can explain why broad interventions may not be effective. One implication of this approach is that acting on screen time alone is likely to benefit some people but harm others.
     
  10. There are opportunities to improve the evidence base. While this response highlights problems with our current understanding, there are actions that can be taken to improve the evidence base. One possibility would be to include measures of logged screen time in upcoming waves of the cohort studies supported by the Department for Education and led by the Centre for Longitudinal Studies. For instance, the ongoing Children of the 2020s study is already making extensive use of smartphone technology to measure child behaviour, and provision could feasibly be made for collecting detailed screen use data in future waves of this and subsequent cohorts. There is similar potential for validating alternative measures of screen time, such as the time use diaries employed in cohorts like the Millennium Cohort Study, which has been used extensively to assess screen time in adolescents.
     
  11. Focus should instead be placed on specific types of use. Rather than screen time, it would be preferable to focus efforts on specific types or contexts of technology use. These may include particular online activities, or types of content, some of which have been outlined in the call for evidence. Doing so, in conjunction with improving the evidence base, has greater potential to identify practices or interventions that improve wellbeing or prevent harm than managing excessive screen time does.

 

 

REFERENCES

 

1. Kaye LK, Orben A, Ellis DA, Hunter SC, Houghton S. The conceptual and methodological mayhem of “screen time”. International Journal of Environmental Research and Public Health. 2020;17(10):3661.

2. Davidson BI, Shaw H, Ellis DA. Fuzzy constructs in technology usage scales. Computers in Human Behavior. 2022:107206.

3. James RJE, Dixon G, Dragomir M-G, Thirlwell E, Hitcham L. Understanding the construction of ‘behavior’ in smartphone addiction: A scoping review. Addictive Behaviors. 2022:107503.

4. Parry DA, Davidson BI, Sewall CJR, Fisher JT, Mieczkowski H, Quintana DS. A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour. 2021;5(11):1535-47.

5. Ernala SK, Burke M, Leavitt A, Ellison NB. How well do people report time spent on Facebook? An evaluation of established survey questions with recommendations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; 2020.

6. Scharkow M. The accuracy of self-reported internet use: A validation study using client log data. Communication Methods and Measures. 2016;10(1):13-27.

7. Sewall CJ, Parry DA. The role of depression in the discrepancy between estimated and actual smartphone use: A cubic response surface analysis. Technology, Mind, and Behavior. 2021.

8. Orben A, Przybylski AK. The association between adolescent well-being and digital technology use. Nature Human Behaviour. 2019;3(2):173-82.

9. Hitcham L, Jackson H, James RJE. The relationship between smartphone use and smartphone addiction: An examination of logged and self-reported behavior in a pre-registered, two-wave sample. Computers in Human Behavior. 2023:107822.

10. Tang S, Werner-Seidler A, Torok M, Mackinnon AJ, Christensen H. The relationship between screen time and mental health in young people: A systematic review of longitudinal studies. Clinical Psychology Review. 2021;86:102021.

11. James RJE, Jackson H, Bonn S, Russell M, Hitcham L. Negative affective traits and the discrepancy between perceived and logged smartphone behavior: A response surface analysis.

12. Vanden Abeele MMP. Digital wellbeing as a dynamic construct. Communication Theory. 2021;31(4):932-55.

October 2023