Digital, Culture, Media and Sport Committee
Oral evidence: Connected Tech: Smart or Sinister?, HC 157
Tuesday 1 November 2022
Ordered by the House of Commons to be published on 1 November 2022.
Members present: Julian Knight (Chair); Kevin Brennan; Steve Brine; Clive Efford; Julie Elliott; Damian Green; Dr Rupa Huq; John Nicolson; Giles Watling.
Questions 147 to 215
Witnesses
I: Dr Matthew Cole, Post-doctoral Researcher, The Fairwork Project; Dr Asieh Hosseini Tabaghdehi, Senior Lecturer in Strategy and Business Economics, Brunel University London; and Dr Efpraxia Zamani, Senior Lecturer in Information Systems, University of Sheffield.
Written evidence from witnesses:
– [Add names of witnesses and hyperlink to submissions]
Witnesses: Dr Matthew Cole, Dr Asieh Hosseini Tabaghdehi and Dr Efpraxia Zamani.
Q147 Chair: This is the Digital, Culture, Media and Sport Select Committee and our latest hearing into connected tech. Smart or sinister? That is the question we are asking.
We are joined by three witnesses: Dr Matthew Cole, Postdoctoral Researcher at The Fairwork Project; Dr Asieh Tabaghdehi, Senior Lecturer in Strategy and Business Economics at Brunel University; and Dr Efpraxia Zamani, Senior Lecturer in Information Systems at the University of Sheffield. Dr Zamani, Dr Tabaghdehi and Dr Cole, thank you very much for joining us; I hope my pronunciations were okay there. Good, thank you. To declare an interest first, I should say that I am also chair of the all-party parliamentary group on new and advanced technologies.
As I say, thank you very much for joining us today. I will come to Dr Zamani first and then open it up to everyone. If we look at the bigger picture, what would you say are the biggest economic positives and negatives to the proliferation of connected tech?
Dr Zamani: The biggest positives are the opportunities that connected devices offer in a number of sectors for creating efficiencies—in supply chains, in operations, and in being able to develop responsive environments. Another area of interest is supporting increases in productivity in a variety of workplaces. I suppose that there are benefits for the education sector as well. We saw lots of positive impacts during the pandemic as a result of connected devices: we were able to continue with business as usual with our learning and teaching in universities, schools and so on.
Besides such positives, there are trade-offs with negative implications, because for all of these connected devices to be useful, operational and functional, they have to collect and process data, and at the end of that chain decisions will be made. The concerns have to do with the quality of the data being collected, and with informed consent from those who use these devices but also from those who do not.
Q148 Chair: Dr Cole, do you have anything to add to that?
Dr Cole: These are all important points to make. There are also positives in reducing repetitive tasks and freeing up human labour for activities that humans are better at than connected tech. Particularly with new AI and digital automation technologies there can be productivity increases in sectors that previously had been held back. However, this requires an industrial strategy that aims to empower workers and enable greater dynamism at the human level.
One of the key things that I want to highlight is the risk of the saturation of working life with data and surveillance technologies. In particular, RFID and GPS tracking, sentiment analysis of voice and facial recognition can be used in ways that do not benefit workers and can also infringe upon privacy. These are the main benefits and drawbacks, and it is important that we navigate all of them.
Q149 Chair: That is an interesting point. Dr Tabaghdehi, among the wider public, do you think that convenience trumps the arguments about surveillance and privacy, the warnings that you heard there from Dr Cole?
Dr Tabaghdehi: Yes, there are quite a lot of arguments, and researchers say that the biggest risk related to datafication and digital production is the creation of this big mass of data, and how we are able to optimise performance and productivity while taking care of the different aspects of risk, such as privacy. Research supports the argument that privacy is a key area of concern; accountability, and who is responsible for the data being created, is another.
In research that we have done, funded by UKRI, we looked at SMEs’ understanding of data, particularly the digital footprint data that is being created. Based on the auditing we have done, we observed that privacy is the issue, particularly when it comes to small businesses. Their digital skills are not good enough and they are not well equipped, in terms of education and understanding, to know where digital ethics stands and how to take care of an individual’s privacy when they are holding this large amount of data.
Chair: What you mean there about SMEs is that if they are receiving this data, they do not quite know what to do with it. They do not even know what it really is in many instances.
Dr Tabaghdehi: Yes.
Q150 Chair: That is opening them up to what—potential legal action, potential attacks, a cyberattack, for instance, to steal that data and that information?
Dr Tabaghdehi: The key issue here is awareness. They are not aware. Maybe it has already happened and they are not aware that it could happen. If they are to be aware, authorities and regulators need to come up with principles in order to inform them first about the consequences and implications of the large amount of data they are holding about consumers, clients and competitors.
Q151 Chair: Is there a monopolistic angle there as well when it comes to larger companies potentially being able to farm this data and information while smaller companies do not have the wherewithal to do it?
Dr Tabaghdehi: Yes, certainly. We have completed quite a lot of research on large organisations and it highlights that there is risk around the amount of data that could be held by a large organisation, for instance Amazon. We mentioned that in our evidence. How will we be able to regulate it when it becomes too large and grows too fast to keep up with, because data is created every second by every individual? Yes, SMEs lag behind quite a lot and cannot be compared with a large organisation.
Q152 Chair: Dr Zamani, did you have anything to add to that? You seemed to be nodding.
Dr Zamani: Along the same lines. For example, Amazon was mentioned as one corporation that holds a vast amount of data, and there was a comparison with SMEs. There is a danger that not just Amazon but corporations like Amazon will continuously increase the amount of data they have and are able to process, and therefore conduct market segmentations, develop new products on the basis of these segments and so on. SMEs do not have the processing power that Amazon has; they do not have the personnel, the skills or the talent, essentially, to do this data processing. Therefore, the gap between SMEs and very large corporations continuously increases, leaving a gap somewhere in the middle.
The other thing is that even in the case of SMEs that are quite competent digitally—digital start-ups and the like—the data most of the time is held in the cloud, and the cloud is run by Amazon. It depends on what the service-level agreements include: what happens to that data, and whether Amazon can also take advantage of the data created, collected and processed by smaller SMEs and start-ups.
Q153 Steve Brine: Good morning, and thanks for your time. I want to explore a bit about the workplace and the application of connected tech. We are probably best to start with you, Dr Cole, given what you do and why you are here. Could you proffer some thoughts on the most likely changes in the workplace as a result of the whole connected tech agenda? Are we looking at a fourth industrial revolution here or is that overplaying our hand?
Dr Cole: Yes, that is a question on the minds of many people. In the academic literature there is a debate on whether or not we have a fourth industrial revolution or whether this is a paradigm shift. If you look at the growth of the major tech companies, whether it is Amazon, Microsoft, Apple, Alibaba, any of these huge players, they are supporting an entire ecosystem of AI and data-based and cloud-based companies that are shifting the infrastructures of society and therefore the infrastructure of work, whether in logistics or last-mile delivery or just in jobs like my own where now most of it is done through Microsoft Teams. Whether or not it is an industrial revolution is a bit of an academic question. The effects on work, though, are very clear if you look around.
One of the biggest issues concerns control over data, as the other people mentioned. GDPR provides a certain degree of protection for private individuals. However, it is more limited in protecting workers in the workplace. There are a few provisions there that specifically deal with subject-access requests and protect workers against algorithmic decision-making, like hiring and firing purely by algorithm, but there is a lack of enforcement.
Unless there is a union that is litigating around these things or an existing collective bargaining agreement, there is a lack of enforcement at the state level. The UK Government could do much better at ensuring protections for worker data and protecting citizens of the UK from global giants like Uber, for example. It is one of the most visible ones, but there are many other less visible players that contribute to the data broker economies.
The European data economy is growing massively. There is a project that estimated the different levels and the UK is a leader here. Part of the UK’s industrial strategy is that we are investing a lot in AI. There are many more start-ups here than in other countries. That could be used as an advantage, but we need to also lead on ensuring that tech is used to benefit the most people. That means putting things in place like enforcement mechanisms and data protection and not allowing data brokers to sell on that data.
Several recent studies have shown that even if this data is anonymised—which in most cases it is—or even encrypted, there are ways of reverse engineering that. I read a paper about a new method that found that anonymised data could be reverse engineered to identify 99% of the people. These are some real risks to workers and society at large that we need to have at the forefront of our agenda.
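As an illustration of the re-identification risk Dr Cole describes, the sketch below (a hypothetical example, not taken from the study he mentions) shows how a handful of quasi-identifiers in an "anonymised" dataset can be linked back to named individuals in a second dataset. All names, postcodes and records are invented.

```python
# Hypothetical illustration of re-identification by linking quasi-identifiers.
# Neither dataset is real; the point is that removing names alone is not anonymisation.

anonymised_health_records = [
    {"postcode": "S10 2TN", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "LS6 3HN", "birth_year": 1979, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [  # e.g. a marketing list or electoral-roll extract that includes names
    {"name": "A. Example", "postcode": "S10 2TN", "birth_year": 1985, "sex": "F"},
    {"name": "B. Sample", "postcode": "LS6 3HN", "birth_year": 1979, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(records, register):
    """Match 'anonymous' records to named people on shared quasi-identifiers."""
    matches = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in register
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique combination pins down one person
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(anonymised_health_records, public_register))
# [('A. Example', 'asthma'), ('B. Sample', 'diabetes')]
```

The more attributes a dataset carries, the more combinations become unique, which is why studies find such high re-identification rates.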
Q154 Steve Brine: Are there any sinister aspects to this in workplace conditions and treatment of people in the workplace? You mentioned union representation, but are there back doors here where people could use this connected tech revolution to take away workplace rights from under people’s noses without them noticing?
Dr Cole: There is ongoing litigation, for example between the App Drivers and Couriers Union and Uber—that was a bit more high profile—and between the GMB union and Uber, around what data workers have a right to access and what sort of data is collected.
There is a report called “Managed By Bots” that Worker Info Exchange published that is particularly useful, as well as the work of Fairwork, the research project that I am part of. We have a research project with the Global Partnership on Artificial Intelligence where we are developing 10 principles for the fairer use of AI, or to facilitate fair work. The main concern is to have processes and enforceable mechanisms in place that ensure protections for workers.
The scale of this does open up businesses and society to potentially sinister risks. On the news this morning there were reports of many Russian hacks on Ukraine. With infrastructure that is so connected, of course there are risks from the more sinister elements.
Q155 Steve Brine: Dr Zamani, it says on your biography that you are currently working with some PhD students around information systems in the work-related context and the social implications of that. Where is the smart money among those PhD students? Obviously not mentioning them by name, but which student is doing something interesting in this space and can you tell us about it?
Dr Zamani: Do you mean connected devices more broadly in this space?
Steve Brine: Yes.
Dr Zamani: That is difficult to say. There are different projects taking place with interesting aspects. One that I will highlight involves marginalised, minoritised social groups and how they relate to connected devices, digital skills and implications of many products and services being offered online.
Marginalised social groups are more likely than others to experience digital poverty, for example, and therefore are unable to access connected devices or services. Are there alternative ways of providing access? That is a question that policy will need to answer. And what happens with those who are digitally disengaged, not because they lack the skills or because they are digitally poor, but simply because they do not trust the data controller: what happens to their data after they part with it, where it goes and what purposes it is put to?
Q156 Steve Brine: For consumers in customer-facing roles—when you go to the supermarket in the real world as opposed to the virtual world where you are constantly dealing with an automated technological solution—is it inevitable when one is out and about in society that this revolution will take over as the primary interaction?
Dr Zamani: It is very likely. It is very likely that we will see it in retail stores and supermarkets. Checking out already takes place using an automated till that scans your products. If I am not mistaken, the UK is probably the country with the most online shopping, compared to other countries where people prefer face-to-face interaction or want to visit the shop. I think that it will be inevitable, and it will be inevitable that more new use cases surface.
We have seen it in other countries, for example in Japan, where they use robotic assistants, sometimes human-like, to greet people coming into a building. That is one task that is repetitive and can be automated without huge negative implications. We will start seeing more connected devices entering the workplace and everyday life.
Q157 Steve Brine: Dr Tabaghdehi, is there anything you wish to add on that, particularly in the workplace setting and the changes that this will bring?
Dr Tabaghdehi: I am sure you know about the example in 2018 that involved a workplace. I will not name it; I have been told it is better not to. In a warehouse, they introduced a movement-tracking wristband for the workers, to be able to track every movement that a worker was making in this very large company’s warehouse system. The research shows that it impacted on workers’ well-being and that the workers were being treated like robots. They mentioned that they did not feel very well treated.
The purpose of that was to increase productivity and efficiency. Maybe production was efficient, but the concern is how workers in workplaces are treated when we are focusing on making everything smart. Perhaps it would be good to have some system in place to balance employees’ needs against the productivity growth of the organisation. This could be auditing, it could be principles that are set up, or anything like that.
Dr Cole: I would like to stress two issues. There is the excessive monitoring that we have brought up, but also what that excessive monitoring leads to. Research has shown that increased surveillance and decreased autonomy over the order of tasks, or over how workers perform their work, lead to increased stress and anxiety, and we know stress and anxiety can have significantly negative impacts on health.
Another related issue is constant connectivity. As you are all aware, we have mobile devices on which we can be reached at any hour of the day. Certain countries have introduced legislation called the right to disconnect, which would be one barrier to that constant connectivity. There are also opportunities to limit the degree of monitoring, whether that is the monitoring of individual movements in what they call digital Taylorism, or the time during which you can communicate with a worker outside the workplace. These kinds of barriers can ensure the sustainability of the workforce and a better quality of work.
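As a purely illustrative sketch of the kind of barrier Dr Cole mentions, the hypothetical function below shows how a workplace messaging system could be configured to hold non-urgent work messages sent outside contracted hours. The hours, weekday set and logic are assumptions for illustration, not a description of any real product or legislation.

```python
from datetime import datetime, time

# Assumed contracted hours, for illustration only.
WORKDAY_START = time(9, 0)
WORKDAY_END = time(17, 30)
WORKING_DAYS = {0, 1, 2, 3, 4}  # Monday to Friday

def should_deliver_now(sent_at: datetime, urgent: bool = False) -> bool:
    """Deliver work messages immediately only during contracted hours,
    unless the sender explicitly marks them as urgent."""
    if urgent:
        return True
    in_working_day = sent_at.weekday() in WORKING_DAYS
    in_working_hours = WORKDAY_START <= sent_at.time() <= WORKDAY_END
    return in_working_day and in_working_hours

# An 11.00 pm message from a line manager would be queued until the morning.
print(should_deliver_now(datetime(2022, 11, 1, 23, 0)))  # False: outside hours
print(should_deliver_now(datetime(2022, 11, 1, 11, 0)))  # True: within hours
```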
Q158 Steve Brine: Finally, you mentioned health there. Is there any research that you are aware of around the mental health or even the physical health implications of connected tech? If you went back in time 50 years there was an absence of constant input; now, if you go into a railway station you have input from your devices, input all around you, constant news input. It is wearing. Lots of research has been done on the quality of sleep and how the quality of sleep then impacts on our physical health as well as our mental health. Have you seen any research around the health implications of being connected?
Dr Cole: Yes, part of a report that I co-authored for the European Parliament when I was a post-doc at Leeds looked at the effects of digital automation, and this term “constant connectivity” kept coming up in the research and academic literature.
Constant connectivity is associated with negative health impacts; there is evidence out there about that. Also the micro-determination of time and work scheduling has been associated with an extension of the working day beyond contracted hours, beyond official hours, which can then impact your energy and the efficiency of your time when you are supposed to be at work. There is plenty of research on this showing the benefits of a more defined working day as well as a degree of reduced surveillance.
Q159 Steve Brine: What is the optimum time to stop connecting before you try to sleep? It is often said that caffeine drinks three to four hours before bedtime impact on the quality of sleep. What is the optimum? If one was to listen to the latest Celine Dion album within an hour of bedtime, which I think is not unreasonable, would that impact on your quality of sleep? Would you ever sleep?
Dr Cole: It depends on whether you are a Celine Dion fan or not.
Steve Brine: Are you familiar with her work?
Dr Cole: I am, yes; I like the classics. Maybe I will check out the new one. That kind of connectivity is not the connectivity that we are worried about. I listen to Spotify throughout the day, and listening to a soothing Canadian vocalist before you sleep is fine. The issue is more receiving WhatsApp messages from your line manager at 11.00 pm and suddenly being jolted. That stress over the long term can damage your health.
Even before connected tech, you saw this with shift work and overwork. There is the constant connectivity, whether to your line manager or to work itself, and even precarious work—we can connect it with the gig economy as well, where you are constantly searching for work. There are historical precedents, historical iterations of this from before connectivity, but connectivity makes it so much easier. That is the kind of constant connectivity that we are worried about. Probably watching a Netflix horror film is also not good.
Steve Brine: “A soothing Canadian vocalist”. Chair, I will take it, and leave it there.
Chair: Thank you very much. I would rather follow the Wayne Rooney example and listen to the vacuum cleaner to go to sleep.
Q160 Damian Green: Dr Cole, you clearly have more tolerance musically than many members of this Committee.
In one of the earlier answers you gave you talked about the dangers of algorithmic appointments. It is a small detail, obviously. I want to explore that a bit further because in principle, as long as you have set the algorithm fairly and you have a set of relevant principles for the job, it does not seem to me that that would be worse than human decisions in appointing, where there is a lot of evidence to show that people tend to appoint people who are just like them. Why would an algorithm be worse than that?
Dr Cole: It is not a question of algorithms versus humans; it is how both can work together. When you have a huge volume of applications, if the algorithm is set fairly you can use that to mitigate against human bias. The problem is that many algorithms are created by humans with biases that they are not aware of. The way that most algorithms tend to be refined is through datasets that largely reflect existing inequalities and existing biases.
There has to be more of an awareness of how to mitigate these things. There is a lot of new research from many people who used to work at Google and have left, trying to correct algorithmic bias. It is not a question of whether algorithms are biased; it is a question of how. If we are aware of these biases, we can have humans and machines work together to try to achieve more equitable outcomes.
One of the barriers to that, though, is the lack of algorithmic transparency. There are existing issues with this, particularly with intellectual property, in that an algorithm is a trade secret. But if it is a trade secret and not disclosed to the public, there needs to be some other way of regulating this—disclosure to governing bodies and enforcement mechanisms to try to mitigate against the risks that can come with bias.
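One common way of checking a hiring algorithm for the kind of bias Dr Cole describes is to compare selection rates across applicant groups. The sketch below applies the widely used "four-fifths" rule of thumb to made-up numbers; it is an illustration of an audit in general, not of any specific system or regulator's test.

```python
# Illustrative disparate-impact check on a hiring algorithm's outcomes.
# The counts are invented; 0.8 (the "four-fifths" rule) is a common rule of thumb.

outcomes = {
    # group: (number shortlisted by the algorithm, number of applicants)
    "group_a": (48, 200),
    "group_b": (27, 180),
}

def selection_rates(data):
    """Selection rate per group: shortlisted / applicants."""
    return {group: selected / applicants
            for group, (selected, applicants) in data.items()}

def disparate_impact_ratio(data):
    """Ratio of the lowest to the highest selection rate; below 0.8 flags possible bias."""
    rates = selection_rates(data)
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact_ratio(outcomes)
print("Selection rates:", selection_rates(outcomes))
print(f"Disparate impact ratio: {ratio:.2f} ->",
      "flag for review" if ratio < 0.8 else "within threshold")
```

An audit like this only surfaces unequal outcomes; deciding why they arise, and what to change, still needs the human judgment and transparency Dr Cole calls for.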
Q161 Damian Green: That is interesting. The unconscious bias that every human being will no doubt have gets reflected in the algorithms. Is that the point that you are making?
Dr Cole: Yes, it has tended to be that way so far. It does not necessarily have to be that way but we need to do more work to change that.
Q162 Damian Green: Thank you. Dr Zamani, we have been proceeding on the basis that productivity growth, which is the holy grail for all Governments, will happen through connected tech. The argument is the balance between that and the human effects of it. Is there hard research to show that the connected tech we have had so far is contributing to productivity growth? Is it showing up in GDP figures yet?
Dr Zamani: I cannot comment on that. “Connected devices” is a label that covers so much different IT equipment, from smartphones all the way to the sensors and actuators used for internet of things solutions, industry 4.0 and 5.0 and so on, and it would be very difficult to say yes or no and by how much. We would need to carry out an analysis by segmenting the connected devices label into smaller bits.
How much do smart mobile phones, for example, contribute to productivity gains in an organisation with respect to workers’ connectivity, employees’ connectivity and collaboration and co-ordination and things like that? That is not necessarily how we can extrapolate for connected devices more broadly because there are devices that do not operate in the same way. They are there for automation, and therefore create efficiencies, but they do not work on their own. They will be connected to something else. Therefore, we need to have a more nuanced analysis, a more nuanced perspective to say yes or no.
Q163 Damian Green: That in itself is interesting. We are going headlong on this path and we still cannot answer that basic question of whether it will make us more productive and richer in the long term. That is fascinating.
A related point comes from Big Brother Watch. We know where it is coming from, and it is deeply suspicious of the extra connectivity because of the extra intrusion it can bring. It says that it would be false to say that it inherently increases productivity, because that relies on the premise that there is a uniform way in which people can work optimally. You are both nodding heavily at that. Is that fair, do you think?
Dr Zamani: Yes, because something that cannot be captured directly in industry reports and the kinds of measurements that we have is that technology will always be used in new, novel ways as a result of its users’ ingenuity and innovation. When we try to assess how much productivity we gained from this device or that system and what have you, we cannot account for all these varied new use cases that have not been considered by the technology designer, because it is simply impossible. We continuously appropriate devices and use them in ways that nobody expected us to. Quite often we develop workarounds to the system specifications, or to the security implemented in XYZ device, to increase our productivity. But unless we are purposefully looking for that, we will not be able to measure it.
Q164 Damian Green: Dr Tabaghdehi, you were nodding as well at the Big Brother Watch analysis.
Dr Tabaghdehi: Yes. In research I have done recently on the digital economy and its growth after Covid, I emphasised that connected technology and smart living did increase production, based on research that has been done. Research in 2021 supports that, saying that one of the reasons is that consumer shopping patterns can easily be tracked by suppliers. Suppliers now know what is best to produce. Therefore, there is a lot less waste in production and production becomes a lot more efficient.
On the other hand, when it comes to services, service providers are now well connected to consumers, so they can act a lot quicker; they can respond to feedback, complaints and everything else a lot faster. This means the relationship between suppliers and consumers is becoming a lot more fluid, efficient and multifaceted for both parties. Our research found that, increasingly, this impacted on consumers’ experience and had a good effect on consumers’ decision-making in the end. I strongly believe that digitalisation and connected technology can contribute to better and more efficient production.
Damian Green: That is interesting. Bringing it into a real-life example, various police services have trialled a mobile health indicator and use of Fitbits and various apps to try to make people take more physical exercise. We have had some written evidence suggesting that this does indeed promote physical activity, as you would expect, but it also promotes psychological issues: people are worried that they are not meeting the new targets they are now being set. Is that kind of use of tech to improve the lives of people sustainable if all it does is make them worried?
Dr Tabaghdehi: Digitalisation moves at a very fast pace. We can see things changing daily. In those applications, even the target will change as well as the application. That is the issue we will always face, but how are we able to tackle and address it? It is inevitable. We are human, we do that. We set the target and when we meet the target we go for the next target and we challenge ourselves, especially when it comes to our health.
I do not know what level would be possible. I do not have research on that, but possibly the tech designer could design something better suited to stopping people competing too much with themselves. We do not want to encourage self-competition; we want people to be in a healthy environment.
Kevin Brennan: Dr Cole, what professions are likely to become redundant as a result of the development of AI?
Dr Cole: This question of robots taking our jobs has been very popular and something that everyone is concerned about. I do not think that there will be jobs that become completely obsolete. If you look at historical waves of automation, this sort of anxiety has occurred in each new technological revolution. There have been some occupations that have become obsolete but the vast majority of occupations just change the types of tasks that humans do.
Q165 Kevin Brennan: The difference here is that in the past perhaps these sorts of technological changes, such as those in the industrial revolution that caused workers to riot because they thought they were going to lose their jobs, in the long term brought growth to the economy. In this instance the difference, some people have said, is that it is more likely to affect the professions rather than just manual types of work. Is that true, and which professions are most likely to be affected, even if they will not become completely redundant?
Dr Cole: That is roughly true, because many administrative tasks can now be automated or outsourced to AI applications and so on. You had a similar wave of automation of clerical work in the 1980s and 1990s. With each introduction of new technology there is a degree of deskilling that is necessary as you break up complex tasks into simpler ones so that computers or machines can perform them. Yes, we have seen a polarisation of skill. There has been wage growth and an increase in some higher-level managerial positions, but also a large increase in lower-wage, deskilled positions.
Q166 Kevin Brennan: To bring that down to a practical level—sorry to cut across you at the end there—in the automation of clerical work you no longer see pools of typists and so on in offices. In our notes there is a list of potential professions that might be impacted—tax preparers, radiologists, paralegals, loan underwriters, insurance adjusters, financial analysts, translators, and even some journalists and software engineers. It does not mention politicians. Do you think that that list is a good one or are there any professions that you would add to that that might be very deeply impacted?
Dr Cole: Based on my review of the literature, that is the general consensus. In a report that I have worked on we specifically looked at radiologists and we looked at some people in the legal profession. Any kind of repetitive task is subject to this new wave of automation. Repetitive, cognitive tasks are the key thing to keep in mind here. That applies to all the occupations that you have listed.
They will not completely disappear. The task composition will change but they will be largely deskilled. With the deskilling they will be more open to competition and flexibility in the labour market, which tends to reduce bargaining power and wages. We need to have mechanisms in place to mitigate that because we are in a cost of living crisis and there are a lot of issues around wages keeping up with inflation, and jobs changing as a result of the pandemic. This highlights another issue.
Q167 Kevin Brennan: Did you want to add anything Dr Tabaghdehi? I saw you nodding.
Dr Tabaghdehi: Yes, you asked which profession to add. I would add the education sector too.
Q168 Kevin Brennan: Are you worried you might be redundant?
Dr Tabaghdehi: I am mainly a researcher, but I am thinking about school or college level. Not primary school, because there children need to learn social skills, but when it comes to the higher levels it is an area of concern. We have been hearing this from colleagues. Recently, with the British Academy of Management, I facilitated, along with another professor from Brunel University, a session related to ethics and AI. We heard members of the audience, who were all academics, saying that they are concerned that AI and automation might take their jobs. This is something I heard in that room.
I want to raise two things here. One is that the education sector is one of the sectors where people are worried and concerned about this. We can see how easily people can now get hold of YouTube education on anything. It is convenient for a user, for students. But the danger of online education is that we are not developing a society where our children will be well equipped, at a higher or middle level, with social skills. So I was talking with another colleague about starting research from that perspective. The gap this could create in society could be enormous in future.
Another danger is that people with less money could go to digital platforms for learning, while people who can afford it and are from more privileged backgrounds can attend face-to-face study, because face-to-face provision is becoming costly for universities.
Kevin Brennan: Is that what is happening at a global level, in reality—that if you live in India and you are a poor person, the only way you might be able to access it, if you had any access to the internet through a mobile device or something, would be to get that information in that way, but in western society we can still spend £9,000 a year or whatever it is for an undergraduate to go to university?
Dr Tabaghdehi: If they come out with another offer to say that for £2,000 you can get an online education with the same university and same institution, maybe that will encourage people to go in that direction. Then these people who cannot afford it do not get the social skills that we need at every level.
Kevin Brennan: I want to ask something that is related to this but might sound a little bit niche. The Government recently did a consultation and made an announcement of their intention to allow datamining companies access—using AI as well—to the entire music catalogue of composers from the UK to develop their AI and ultimately, presumably, to be able to produce music through artificial intelligence without having to pay the copyright royalty that would normally be required if you accessed and used somebody else’s music.
Understandably, the music publisher associations and the Ivors Academy and other people representing songwriters and composers are concerned about the Government’s decision—which they have not yet implemented but it is their stated intention to do this. Do you have any reaction to what the implications of allowing AI to be used in this way could be more widely, culturally as well as economically?
Dr Tabaghdehi: In particular in the music sector? I do not have any experience.
Q169 Kevin Brennan: Yes, but if there are any other examples you can think of. The implication here is that you could produce artificially intelligently produced music that would theoretically not have a composer. The tech company effectively can mine all of that intellectual property and use it freely without having to pay the original creators. Dr Cole, did you have something you want to say on that?
Dr Cole: There are obvious harms in allowing large corporations to mine public intellectual property or private intellectual property, whether it is music or art. If it is for profit, this is a problem if those profits are not shared with the people who have created that art. If there is a public library, public music catalogue or film, like the BFI film archives—I like to go down there and see what is available—this is a different question. It depends on how it is used.
Q170 Kevin Brennan: A final question, which anybody or all of you can answer. The Select Committee recently paid a visit to Korea—and I am sure we are all shocked by the events that happened there recently—looking at lots of aspects of Korea, including tech as well as the K-wave culturally. We visited the headquarters of Samsung and while we were there they showed us a film of the future home with connected tech and AI. We found the film itself quite sinister.
I asked the executives there what would have been the thing 20 years ago that they would not have predicted but that has happened as a result of the development of this technology. Their answer, which quite surprised me, was the text message. Nobody sent a text on Star Trek. Nobody anticipated that that would be the message—that the digital message would be the incredible change that would come about as part of this technology.
As a group of thinkers on this subject, if you were scanning the horizon, what do you think might be the unexpected item in the bagging area of the future?
Dr Zamani: That is a difficult question.
Kevin Brennan: I know; that is why I asked it.
Dr Zamani: By default it should be unexpected. If we expect it, it is not unexpected anymore.
Kevin Brennan: What do you not expect then?
Dr Zamani: You mentioned the Samsung case with the home of the future. Connected devices for the last 20 or so years have been used in households, in home environments, where we had, for example, switches on the wall that could be activated by voice or depending on the level of light coming through the window to activate the light or other features. This automation in the home environment will always increase, I suppose, but there are some inherent risks. What happens when the technology fails us? What happens when we can no longer use the switch because the broadband is down or something in the switch is broken and we need to call for a technician, when we are not really sure how to operate that button or what the button really does?
There has been anecdotal evidence, but still evidence, where, for instance, an automated vehicle trapped the driver inside with no way of getting out. What would happen if that driver was involved in an accident with fire surrounding the car and things like that? I am not going to name the manufacturer here.
Q171 Kevin Brennan: You can. You are covered by parliamentary privilege while you are giving evidence to us, so feel free to if you want to.
Dr Zamani: They were talking about the Tesla cars where there are no instructions for how to operate it. If you want to find out how to unlock yourself you have to watch a video on YouTube that is 20 minutes long. If you are in a dangerous situation, you do not have that time. In some cases you may not even have a connection to connect to YouTube or whatever provider that could offer information. That could happen in the home environment too.
Q172 John Nicolson: Good morning, everybody. Thank you for coming in. Dr Cole, could I begin with you? A lot of the members of the Committee are asking questions along the same lines and I am getting messages from my staff saying how fascinating this is; it is always a good sign to see them that perky early in the morning.
Dr Cole, would you characterise the way the big companies like Amazon are using connected tech as empowering for their workforces?
Dr Cole: I would say no. Quite overwhelmingly the evidence shows that the technologies that Amazon uses are not empowering. They lead to overwork, extreme stress and anxiety and there have been issues with joints and health problems. Amazon is not the leader to look to to see how tech can benefit workers. While it is filing hundreds and thousands of patents and experimenting in its giant warehouse labs, I do not think that ultrasonic tracking or any of these digital Taylorist technologies are helping.
Q173 John Nicolson: Why would you want to make your staff miserable?
Dr Cole: Why did Taylor conduct his time and motion studies in the early 20th century?
Q174 John Nicolson: This is a century on, so you would hope that we have become a bit more civilised and understand that people have to be happy to be productive. Is that an incredibly naive suggestion?
Dr Cole: I agree with you. Happier workers are more productive workers, but when you have the level of surveillance and control that Amazon is capable of, it is less concerned with that. There have been marginal wage increases to compensate. Also it emerged in a US context, which is a different labour market. It is also trying to track these things because it wants to automate the jobs, largely. It would be easier for it if it did not have to use humans.
Q175 John Nicolson: That is interesting. It is looking at people and trying to work out how it can upgrade them from human beings to machines, to pick up on Kevin’s sci-fi reference. How can we make our workers into the Borg?
Dr Cole: I am not sure that turning into a machine would be an upgrade for most people.
John Nicolson: Yes; I used that word ironically.
Dr Cole: I know. I think that many of these jobs will be automated, and it is fine for them to be automated. Humans do not necessarily want to stack shelves all day, but we need to find other jobs for humans to do that they are better at, particularly in care. We have an ageing population in Europe. Reorienting training schemes to put people into jobs where humans are better would be a wise move at this stage, in anticipation.
Q176 John Nicolson: Old people do not want it to be automated, do they? One thing we know is that old people like human contact. We discovered that with private beds in hospitals. You would have thought that people would like the privacy but what they missed was the wards and all the gossip and people coming and going.
Mr Green already mentioned Big Brother Watch. I noticed that Big Brother Watch said about this that connected tech intervention in the workplace, “Rely on the false premise that there is a uniform, normative way that people work optimally”. I had to read that three or four times to work out what that was in English, but the bottom line is that people hate being spied on.
Dr Cole: Yes, the recurring theme is the invasive surveillance by these tech companies because we do not have mechanisms in place to protect workers or consumers.
One example that I see a lot of is in marketing firms where the Bluetooth of your phone and other kinds of location data can be triangulated and you will be sent an advertisement based on your searches. Uber is already doing this in the United States. Is this a net benefit? Maybe it is nice to find a pair of trainers that you want.
Q177 John Nicolson: No, it is really, really not. I do not want to be monitored and told what trainers I should be using, speaking personally.
Dr Zamani, I am interested, as you can gather, in Amazon’s use of connected tech. One of my team had a discussion in preparation for this with an Amazon worker in an Amazon-branded vehicle. He seemed to be unclear about the information that was being collected on him and what use it would be put to. Could you tell us the nature of the data collection that Amazon undertakes? Is Amazon different from other companies, competitors, in the way that it uses this, in particular, something sinister called the Netradyne Driveri cameras that are used to monitor the workers and I understand are not pointed out into the road but are pointed from the road at the driver?
Dr Zamani: I can safely say that it probably collects data from anything and everything—the user, the operator, but also individuals who are not themselves using an Amazon service, product or device, whether consciously or unconsciously.
You mentioned cameras pointing at the road or looking towards the driver. Yes, there are ways to adjust the camera, for example, to capture the image that the manufacturer or the user would like to have a view of, but at the same time we already see Amazon cameras being hooked on somebody’s door, for example, to activate the Ring doorbell or to increase the sense of security that the inhabitant of the house feels about opening the door.
However, these cameras point at the street and capture the image of people who are just passing by with no intent of ringing the bell. There is no way to stop these cameras from collecting your data as you walk by, capturing your image, unless you register yourself as a camera user and tick a box that says, “If you see me passing through an Amazon camera, don’t include my image in your database”. But that means that you are already buying into the Amazon database.
Q178 John Nicolson: Is that any different from being filmed randomly in the street by a news crew, for example?
Dr Zamani: It depends. It depends on the purpose and on what happens to the data afterwards. Obviously, when you walk through the streets in front of here, you do not have any expectation that what you are doing will remain private; it is out in public.
Q179 John Nicolson: But how would walking past an Amazon camera randomly involve any collection of my data?
Dr Zamani: It captures movement so it gets activated.
Q180 John Nicolson: Is that different from a news crew?
Dr Zamani: The news crew hopefully will not want to add my image in a very large database that can be integrated with other datasets and therefore possibly attach other attributes to my image being captured by the cameras about my buying habits, let’s say, or where I live.
Q181 John Nicolson: That is interesting. That was not the way I expected the question to go. The reason why I asked about this camera was that Amazon says that this newly upgraded camera is for the driver’s security. That is why Amazon says it put it into the drivers’ vans.
However, the driver who we spoke to said that what made him feel so uncomfortable was that, as I said, it points not out at the road for his security in case he is involved in an accident, for example, but it is pointing in at him, all the time monitoring him. He said that made him deeply uncomfortable and he could not get an explanation from his Amazon bosses as to why that was happening.
Dr Zamani: Yes, that can be an issue. We already know that Amazon truck drivers, for example, are unable to take reasonable breaks. That camera can possibly track and monitor whether the driver is taking a break, whether they are taking a detour, how they are feeling, their emotional state while they are driving down the road. That can be paired with other metrics on, say, how many parcels they have managed to deliver, if they are a last-mile driver, and therefore be used to make an estimation about their productivity, let’s say.
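To make concrete the kind of data pairing Dr Zamani describes, the sketch below joins hypothetical in-cab camera events with delivery counts to produce a crude productivity score. The field names, data and weighting formula are invented for illustration and do not describe any real Amazon or Netradyne system.

```python
# Invented example of how separate monitoring streams could be combined
# into a per-driver "productivity" score. All data and weights are made up.

camera_events = [
    {"driver": "driver_17", "event": "break_taken", "minutes": 22},
    {"driver": "driver_17", "event": "distraction_alert", "minutes": 0},
    {"driver": "driver_23", "event": "break_taken", "minutes": 41},
]

deliveries = {"driver_17": 164, "driver_23": 131}  # parcels delivered per shift

def productivity_scores(events, delivered, shift_minutes=480):
    """Combine break time, alert counts and parcel counts into one score per driver."""
    scores = {}
    for driver, parcels in delivered.items():
        break_minutes = sum(e["minutes"] for e in events
                            if e["driver"] == driver and e["event"] == "break_taken")
        alerts = sum(1 for e in events
                     if e["driver"] == driver and e["event"] == "distraction_alert")
        # Arbitrary illustrative formula: parcels per active hour, minus a penalty per alert.
        active_hours = (shift_minutes - break_minutes) / 60
        scores[driver] = round(parcels / active_hours - 2 * alerts, 1)
    return scores

print(productivity_scores(camera_events, deliveries))
# {'driver_17': 19.5, 'driver_23': 17.9}
```

The point of the sketch is Dr Zamani's: once streams are joined, behaviour such as taking a break feeds directly into a score the worker may never see or be able to contest.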
Q182 John Nicolson: Dr Tabaghdehi, everybody wants to know where all this is heading in the long term. Dr Cole mentioned earlier that there could be more automation with deliveries, for example. Sci-fi programmes in the past have always shown driverless cars, haven’t they?
Again, we spoke to somebody who works for a large automotive company that is working on this, and their conclusion is that driverless cars are simply not going to work in the UK. They might work in some countries with huge, big freeways but in the UK with all our eccentric little roads it is just not going to work. It would not work technically and, even if they could make it work, people wouldn’t use it because they would feel so nervous. Is that your assessment?
Dr Tabaghdehi: I have not done any research on that, but I very much doubt it because organisations become familiar with the taste of the consumer, plus the road map, the sizes and everything, and they would design something that would be suitable for purpose in this country.
Q183 John Nicolson: A quick-fire round. As legislators, what would you like us to do, if we could do something, to make technology less intrusive and less sinister for everyday users? Dr Zamani?
Dr Zamani: I think it is time to legislate, to regulate what is happening. There should be a greater focus on what manufacturers do and how they advertise their products and services, ranging from the manuals they produce, the service-level agreements and the terminology to what happens at the ideation phase of a product or service. Usually they will say, “We have instilled our values in the design of XYZ device”, but are those values British society’s values or those of the manufacturer, who may be Samsung or whoever else in Korea? How do we know which values are really informing the design, and with what implications?
John Nicolson: That is a good point.
Dr Tabaghdehi: Our suggestion, according to our research, was to have a push and pull mechanism, which I put in our evidence as well. A push mechanism could include the development of relevant guidelines and the creation of auditing to observe whether things like the Amazon case are happening; we will never know if we do not have anything in place. A pull mechanism could include investing resources to identify and handle the risks that might come up from use becoming too robotic, for example in warehouses and so on.
Dr Cole: I am glad you asked that question, because at Fairwork we have developed a series of policy recommendations specifically on this. First, I think we should observe and monitor AI system deployment. The ONS, for example, should be integrating AI data collection into its datasets, and we should continue to update regulation on GDPR and data availability and protection. We should also empower labour inspectorates. There was some move to harmonise the different labour market enforcement mechanisms under David Metcalf some years ago.
I am not sure what has been happening with that, but because technologies apply in many different contexts, having a single body would help streamline that, particularly with the Health and Safety Executive, which I think has had its funding cut by 50% since 2010. The Health and Safety Executive should play a greater role in the regulation of AI systems to uphold standards of deployment.
John Nicolson: Can I pause you there? I think that is a good list for us to work on.
Q184 Dr Rupa Huq: I have some related Amazon questions. It is widely known that workers have been peeing in bottles and things like that—horrendous things going on. Do you think that the public outcry, and the fact that there had to be a legal payoff to these workers, shows in relation to Microsoft and Amazon’s use of tech in the workplace that there is such a thing as people power, and that people can push back when employment rights are being infringed in this way? Anyone can answer.
Dr Zamani: The examples, Amazon and Microsoft, are very large corporations. They have a range of services and based on our research in hybrid work—for example, in digital poverty—in many cases there are no alternatives that can compete on the price. I do not expect that the public outcry will push Amazon to change its practices with its drivers, for example, because, simply, its range of products and services is vast. It competes on price, reducing costs, and for the people who are the most vulnerable the alternative of, let’s say, ordering supplies from a local supplier who has more ethical practices is not a solution. It is not possible.
Even if people stopped buying from Amazon, Amazon right now has such a great range of other things it is doing, like cloud solutions; once you buy into those—Amazon cloud service, AWS—it is very difficult to migrate to another provider, so basically you are locked in. The smaller the business, probably the bigger the challenge to migrate to somebody else.
Q185 Dr Rupa Huq: Are there any examples of where this tech has benefited workers? Is there any upside—that they are liberated, or something?
Dr Zamani: In many cases, people with disabilities or those living in rural areas do not have much scope to find a job where they live. Therefore, being able to use connected devices to secure a previously unavailable job is in and of itself a benefit, because they can participate in the economy and find work. Again, that comes with challenges and risks, because they are not able to develop the social relationships that other people, who can travel or who live in close proximity to their workplace, can develop with their co-workers while gathering around the water cooler.
Q186 Dr Rupa Huq: We have all been lobbied about the gig economy and all these things, zero-hours contracts. California has introduced legislation specifically addressing some of these practices. Isn’t it time for the UK to follow suit?
Dr Zamani: I think that there should be regulation because gig workers work basically at the margins. There is a grey area between contract work, zero-hours contracts, gig workers, full-time employees or freelancers. It is very easy for the most vulnerable to simply slip through the cracks of oversight.
Q187 Dr Rupa Huq: I have a couple of questions specifically for Matthew Cole on these platform workers, the middlemen facilitators, like Uber, Just Eat. Is the genie out of the bottle for the impact that these companies have had? I have had dealings with them. There is a union now specifically for these sort of gig economy—what are they called? It will come to me in a minute. Are workers’ organisations, unions or courts catching up with these issues in the platform economy?
Dr Cole: Yes. You have had the gig or platform economy grow from 3% to 10% to 15%, depending on the measures, in the UK over the past 10 years. We have done a lot of work at Fairwork specifically looking at the working conditions of these workers. We are in our third year of scoring 15 different platforms on working conditions, so I encourage you to check out our UK scores in The Fairwork Project.
On regulation and unions, I know that the GMB, the IWGB and the ADCU have been most active in trying to organise these workers. The general consensus is that there needs to be clarification around employment status. I believe that the Status of Workers Bill is on its Second Reading at the moment and this would clarify some of the key ambiguities in the law around worker status.
For those of you not familiar with this issue, the main problem is the classification of self-employed contractor versus employee, or the middle term, which is a sort of dependent worker status. When people in the gig economy are classified as self-employed, obviously they do not get all the protections and benefits that they would if they were a dependent contractor or an employee. Unions are trying to encourage better health and safety protections, regardless of contractual status. I think one quick fix would be to clarify the status in that Bill.
Q188 Dr Rupa Huq: The attempts to have driverless vehicles with food deliveries, can you foresee that happening and wiping away the need for—
Dr Cole: I am of the opinion that I shared earlier that the road system—particularly in London, but generally in the UK—is not very conducive to autonomous vehicles. It is very old and very complex and most of these systems are designed by Americans used to long, straight roads, so I do not see that prospect.
Automatic food delivery at the moment is only really used on college campuses in the US. I could not see that being rolled out any time in the near future. There are other technologies, particularly e-bikes, that we have seen used more prominently and rapid grocery delivery through apps. I think that this will become more part of the fabric of urban life as long as they can be profitable, which is another issue.
Q189 Dr Rupa Huq: IWGB had local issues with Ocado Zoom. A whole load of workers were chucked out and replaced with no notice.
Are there any platforms that you can think of that are demonstrating good labour practice in the platform economy or are some better than others?
Dr Cole: In our scores the platform Pedal Me has consistently scored highest. It is a smaller cargo delivery platform, but it employs workers and it is making strides to have better worker voice and better representation in its platform. The other two platforms that have scored a bit better were Gorillas and Getir, and that is purely because they used standard employment contracts and paid at or above the London living wage. When you have those two things together you meet a lot of the baseline criteria of what we would consider decent work.
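As a simple illustration of the kind of baseline Dr Cole describes, the sketch below checks made-up platforms against two minimum criteria: a standard employment contract and pay at or above the London living wage. This is an invented example, not Fairwork's actual scoring methodology, and the wage figure is included for illustration only.

```python
# Invented sketch of a baseline "decent work" check; not Fairwork's real methodology.

LONDON_LIVING_WAGE = 11.95  # GBP per hour; illustrative figure only

platforms = [
    {"name": "ExampleCargo", "employment_contract": True, "hourly_pay": 13.00},
    {"name": "ExampleEats", "employment_contract": False, "hourly_pay": 10.20},
]

def meets_baseline(platform):
    """A platform meets the illustrative baseline if it uses standard employment
    contracts and pays at or above the London living wage."""
    return platform["employment_contract"] and platform["hourly_pay"] >= LONDON_LIVING_WAGE

for p in platforms:
    print(p["name"], "meets baseline" if meets_baseline(p) else "falls short")
```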
Q190 Clive Efford: Following on from the questions you have had about autonomous vehicles, we are told that removing as much as possible the possibility of human error will make driving safer. If we had a fully automated driverless system, would there be fewer accidents?
Dr Zamani: Probably not. Yes, that is a very straightforward answer: probably not. Having a fully automated driving system means that we would have to make great advances in object detection, for example, and in connectivity. What happens when we lose the signal, for example? How will that car or vehicle or whatever it is behave?
There is a very large UK-funded project right now on autonomous transport systems. They are looking into railways, vehicles, and maritime and marine applications such as automated vessels. There is quite a lot of work being done there, but in many cases it is simply impossible to capture all the potential eventualities—for example, a child running away from their mother and going on to the road.
Where is the sensor on that car? Is the car able to detect very small objects that may be living beings? A lot of work needs to be done to develop this. Even then, technologies are never used 100% in the way they were designed. Technology fails and, therefore, accidents will happen.
Q191 Clive Efford: Therefore, we can dispense with this argument that it would make driving safer?
Dr Zamani: Probably, yes.
Q192 Clive Efford: You make me feel much, much better.
Can we go back to the questions we had about Amazon? Amazon has been criticised for using wristbands to monitor staff movements and for putting cameras in drivers’ cabs. It has even introduced a team member relations heat map to make sure that staff don’t fraternise and organise trade unions. It has even employed Pinkertons, which seems to be a non-tech response and comes from the western movies that I used to watch when I was a kid. Is Amazon pointing the way, or are other companies following this example of extreme monitoring of staff? Can we start with you, Dr Cole?
Dr Cole: I have not done research on Amazon specifically. I have interviewed Amazon flex drivers, but that is just one part of its widely diverse business model. I think Amazon is an industry leader in many different sectors, whether it is logistics, cloud services or online purchases. I do not think it has a favourable view of labour, and it has demonstrated time and again that it is quite vehemently opposed to any kind of independent collective worker organisation.
It is such a leader, and I think many of the tech firms that started in the US share this sort of Silicon Valley ideology that any kind of worker organisation must be stopped. In Europe and the UK, at least, there is a longer history of a more co-ordinated and collectively determined working relationship, and I think that the UK would do well to try to encourage that.
I think that Germany, where they have workers on boards and a much more successful industrial export strategy, is the sort of model that we can look towards. Perhaps the way that Amazon and some of these platforms have been forced to negotiate and comply with existing European labour regulations is a model that we could encourage here.
Q193 Clive Efford: In the absence of any regulation, do you fear that Amazon will set the standards, that other companies will feel that they have to—
Dr Cole: I think if they are given an open door to UK service delivery then yes, and that is a problem.
Q194 Clive Efford: Does anyone else have anything?
Dr Zamani: I agree with this. If it does not get regulated, others will probably follow suit, emulating the practices that we have seen at Amazon. They won’t be able to do it to the extent that Amazon does, because Amazon is a very big player and every single employee in Amazon is a very small unit, so it has to deal with the differences there. We are already seeing businesses, for example, asking their employees to wear Fitbits and things like that to monitor their well-being, particularly during the pandemic, suggesting that this is something used to promote greater health and that there will be rewards. Basically, they were using rewards as an incentive, but then how do we know that there are no sinister applications or uses of whatever data businesses are collecting from their employees?
Q195 Clive Efford: It strikes me that these huge tech companies are dominated by individuals. We all know the names of a whole plethora of individuals who run these companies. They get to where they are because they are technically very skilled: they are able to exploit the opportunities that the internet and the web offer them. That does not necessarily make them the right people to be employers or to make major decisions.
Does it concern you that these individuals get so much wealth and power through these tech giant companies that they are able to set up and relatively quickly become very rich and powerful? Do you have any views? That is more of a philosophical question.
Dr Tabaghdehi: Certainly, with technology, when it comes to platforms, the users are the ones giving power to the platform owner. For example, the more you use a certain platform, the more that platform will earn at a macro level, in the bigger picture.
When a platform is providing a type of service or a product that is convenient and good for the consumer, the consumer will go for it. They will purchase it. Therefore, we need a level of regulation that could be spread across different producers. Having very powerful production concentrated in one organisation can never be healthy or beneficial for consumers, employees or workers.
How could we bring in a mechanism so that the production of a certain product does not become monopolised by a high-tech company? We need to support small businesses. Encouraging small businesses to deliver at least part of the services these companies provide could be another mechanism to help. Supporting different services in this way could be one way of creating small, individual competitors that all belong to a hub so that, as a community, they have some power to compete with the larger organisation.
Dr Cole: There is a lot of research around platform governance. I just wrote a paper on the infrastructural power of platforms. If you look at the largest tech companies, they are either providing cloud services for the internet so that all of our connected devices can operate (Amazon, for example, hosts much of the web), or, if you look at Facebook, Google or any of the media platforms, they are essentially information and communication technologies that serve as the main infrastructures for the way that society runs.
Should these be privately held and controlled by a single person? I don’t think so in a democratic society. We need to see them as public resources, not unlike water, electricity and gas, and I don’t think they are going to go away. The fact that they are largely controlled by a single person and rival much state power in how they are governed is a serious issue and will continue to be an issue into the future. I think that we need to anticipate that and what the implications are.
Q196 Julie Elliott: I want to follow up on what you said, Dr Cole, about Amazon and trade unions. Why do you think an organisation like Amazon, which you have done some work on, is so opposed to having a reputable trade union in place when there is a huge amount of evidence that if you have a recognised trade union in a large workplace, that workplace is more productive? You have a happier workforce and you have many fewer days lost to sickness. Why do you think it is?
Dr Cole: It is a question I ask myself often. I think it is a political question. I think it is a business culture of the United States and Silicon Valley that does not understand the benefits of alternative labour management. Well, not “alternative”—they are quite mainstream labour management strategies, particularly in Europe. There is a sort of unwillingness to entertain the benefits of trade unions and a co-ordinated labour market strategy.
Q197 Julie Elliott: Do you think that at the top level they don’t understand how trade unions and companies work in this country?
Dr Cole: If you look at the books that the CEOs of many of these companies write and what they actually say (Peter Thiel, for example, essentially argues for a sort of CEO dictatorship, does not believe in competition and thinks monopoly is far more efficient), there is a specific kind of culture there that discourages a more collaborative and productive labour management and business strategy.
Q198 Julie Elliott: Do you think that the current way that companies like Amazon treat warehouse workers and drivers is a blueprint for how tech might change the nature of work for everyone else?
Dr Cole: Yes, I guess they are a blueprint in a way, but there are already living examples. It is hard to anticipate exactly what form new innovations will take, because information and communication technology has saturated our working lives and our non-working lives.
With the introduction and the spread of AI and datafication there are many known unknowns with this. Tech moves much faster than regulators, as we have seen. Although there are pieces of legislation that are catching up, I think it is very worrying that there is not as much attention given to the potential risks and harms that I have highlighted.
Q199 Julie Elliott: Do you think that workers effectively surrender their data rights in smart workplaces?
Dr Cole: Do you read the terms and conditions of every app that you use? I don’t and most people don’t. If you tick that box essentially you surrender your rights. I think that there should be greater education in the public about this, but also you could introduce a type of regulation that requires companies to make your rights more intelligible.
We have the GDPR and sometimes people click through and click “reject”, but there is a lot of fine print there and most people don’t do that. If you changed it to an opt-in type of system, or you had greater scrutiny for health and safety, as I mentioned for labour market regulation, you could mitigate the risks that signing over all of our data presents.
Q200 Julie Elliott: Would anyone else like to comment on that?
Dr Tabaghdehi: There is published research from Durham University and Surrey University that talks about this blueprint for smart cities. They say that cities need to become smarter, but their recommendation is that cities need to bring all their stakeholders together when they want to pursue technological advancement and more innovation. Consumers and employees are some of those stakeholders as well. They all come to the table and see how the development could happen with technology. That could be very important, and they could have some say in designing the tech and so on.
The other thing they mention and emphasise is that it is good to have technology, smart devices and connected devices in place, but it is good not to forget that the goal and the purpose is to create a human-centric ecosystem. We do not want to create a robotic ecosystem. That was very interesting for me. I would say that we need to have a principle in place where even high-earning companies or large organisations follow those principles and guidelines. The system needs to be human-centric, and robots and all this machinery are there to help humans to deliver the task, not only to maximise productivity. When maximising productivity becomes the key agenda, humans will not be the key focus of the organisation, and we see all of these consequences with Amazon.
Giles Watling: Thank you, doctors three, for coming today. It is good to see you, although slightly alarming.
Dr Cole, first of all, you mentioned a while back that you were working largely through Microsoft Teams. A lot of us do, especially since the pandemic. People have found different working practices; we are changing the way of working. In your work, people work from home, possibly from a home office or even a bedroom, and then, after the day’s work, they are still looking at a screen or engaging with people only through a screen. Have you looked at the mental health aspects of that?
Dr Cole: My research does not look specifically at the mental health impacts of screen time, although I am aware of research showing a general correlation between a lack of social and human contact and reduced happiness and well-being. I think all of us are quite sick of lockdowns and I am sick of working from home, but I think that there are benefits on an environmental scale, and for care work at home, to having a more mixed office and home office type of work arrangement.
Q201 Giles Watling: Is this something that employers should get involved in, in your view?
Dr Cole: I am not sure. What aspect do you want—
Giles Watling: Looking at the mental health of their employees.
Dr Cole: Yes. I think it is a mutual obligation. I am obligated to do my best to perform my duties and the employer should also look after my capacity to do that, and mental well-being is of the utmost importance. I think that there has been a lot of increased attention on this in the past few years, especially in the wake of the pandemic, and that has been a good thing.
Q202 Giles Watling: The connected tech is not helping?
Dr Cole: Connected tech can help by enabling working from home. The constant connectivity and not being able to disconnect from work has been shown to have negative impacts. Screen time in total I think is not a very good measure, because you can be watching and enjoying a film with your family or monitoring performance stats for 12 hours. I think it depends on how you are using it.
Q203 Giles Watling: Thank you. Dr Zamani, I saw you nodding away.
Dr Zamani: I have been doing research on remote work and hybrid work for a few years now. As the previous speaker said, it is not so much about the length of time being connected but what you are connected to and what you are doing that has the negative implications.
When we are continuously connected to our workplace, by a device or a multitude of devices, the organisation sort of has a tether to our private life as well. The boundaries get blurred quite easily. When working from home we have multiple roles: I am a mother; I am an educator; I may be caring for somebody else; I am also working. Therefore, connected devices are continuously interrupting what I am doing, for work or personal purposes, and that has negative implications for my well-being. I cannot speak to mental health specifically, but definitely to the balance between work and life.
The other thing I want to highlight, and this is coming from our hybrid work project, is that we tend to talk about people and workers and employers, but these are not homogenous groups. I have a social network and a family and, therefore, while working from home and being connected to my workplace, I still have social relationships, the human touch. Once I finish working 9 to 5, I go back to my family and engage with them. We are still connected to devices, but we use them for different purposes most of the time.
In other cases, people are living on their own—single people—and we saw how much they struggled during the pandemic. Being able to establish a social relationship and having the ability to use these connected devices to reach out was a window to the outside. Therefore, it is not “one size fits all”. It will not have the same implications for all people, for all individuals, for all workers. There are different approaches we can take to have an informed opinion.
Q204 Giles Watling: As you say, we are blurring the lines between work practices. I would not mind betting that people in this room have been doing social media at the same time as working here, and that is what we are doing increasingly. Sorry, I meant to ask Dr Tabaghdehi, do you have a comment on that?
Dr Tabaghdehi: No.
Q205 Giles Watling: That is all right. Dr Cole, taking it a little bit further, you were talking about excessive monitoring. There is a move in the National Health Service to monitor cases, people with dementia and so forth, but there is a concern that people are encouraged to disclose health information to their devices. Where do you stand on that? Is this something that we should be alarmed about?
Dr Cole: It is a negative experience for the patient if algorithms substitute for the specialist knowledge of a nurse or a doctor. I haven’t done research on connected devices in the NHS in particular, but I have my own experience with trying to see my GP. It is largely mediated through a private app that I don’t know anything about. It is called Dr IQ. It makes it very difficult for me to actually speak to a GP. I often don’t get to speak to anyone at all or it is a nurse.
Obviously, there is a labour shortage in the NHS at the moment, so tech can fill in and help with some of those aspects, but I think generally it is not delivering the care that we need. There are risks when private companies collect all the data, and beyond the risks I think there is a public good question about whether our health services should be—well, whether private companies should profit from health services or if we should take what would be that profit and reinvest it in better technology, better equipment and better training for our workers.
Q206 Giles Watling: Do you believe that the delivery of public services—particularly public services under strain, such as the NHS and social care—can be helped by the use of connected tech?
Dr Cole: I think it is an opportunity cost, so if we are using apps and we are spending money on private developers or private companies who have developed the apps, what could we be using that money for instead? Would it be better invested in training and skills programmes for British workers? At the moment we are still sourcing many nurses and doctors from abroad, because there is a shortage. I think we have to weigh the benefits and the cost of using tech simply for its novelty.
Q207 Giles Watling: Do you think that public services should be driving this innovation, or should they be holding back to wait to see what the private sector delivers and then use the stuff that works?
Dr Cole: There can be a mix of both but, if you look at the history of how the most impactful technologies have been developed, they have largely come out of publicly funded programmes, like the moon shots, or the US military, which I think developed much of the initial infrastructure of the internet.
When you invest public funds in a public service, you are able to use those returns in ways that I think are more efficient. When you have five different companies all trying to develop the same thing and failing, it is not a very efficient use of resources.
Q208 Giles Watling: That is very interesting. Any other comments?
Dr Zamani: If we rely on the private sector to lead that innovation, experience shows that the private sector does not tend to prioritise inclusivity, equality or offering its services for the public good at reasonable prices. Therefore, there is a danger that the innovation may happen but it will be very costly, with negative implications. The public sector needs to set the standard for what the service looks like, who needs to have access to it and at what price.
Q209 Giles Watling: Thank you. Dr Tabaghdehi, do you have anything to add?
Dr Tabaghdehi: Yes, very much so, and I agree with what Dr Zamani said. The private sector needs to be overseen by the public sector, I would say, for the social and ethical implications that it might have, such as inclusivity, equality, diversity and information transparency, which is another agenda, covering things like fake news and so on.
Giles Watling: Thank you very much.
Chair: That concludes our session today. Dr Cole, Dr Tabaghdehi and Dr Zamani, thank you very much indeed.