Written Evidence Submitted by

Dr Steve Rolf, Research Fellow at The Digital Futures at Work (Digit) Centre at the University of Sussex Business School

(GAI0104)

 

 

Bio

Dr Steve Rolf is Research Fellow at the ESRC-funded Digital Futures at Work Research Centre (Digit), University of Sussex. His research examines the impact of digital platforms and technologies on work, employment and society. This evidence draws on his ongoing research examining the introduction of new Provisions governing the use of algorithms in China.

 

1.    Summary

 

1.1    This evidence provides a short overview of new regulations governing the use of algorithms introduced in China in March 2022. It addresses the questions of what lessons the UK can learn from other countries on AI governance; what measures could make the use of AI more transparent and explainable to the public; and how the use of AI should be regulated and which body or bodies should provide regulatory oversight.

 

 

It also highlights some potential impacts of these regulations in relation to workplace surveillance and management algorithmic technologies, including in the platform economy.

 

1.2   The Chinese regulations constitute a significant attempt to legally enforce algorithmic regulation and transparency, introducing compulsory reporting requirements for firms using AI-powered algorithms and giving users the right to opt out. The regulations significantly exceed directives in other advanced economies. While AI governance in Europe has tended to focus on individual rights, aspects of the Chinese 'social' model also attempt to address societal-level harms. While there are important differences between the UK and Chinese economies and societies, the Chinese regulations point to a number of important questions about how AI and algorithms can be regulated in the public interest.

 

2.    What lessons, if any, can the UK learn from other countries on AI governance?

 

2.1     China has become a global leader in AI development and governance. It produces more and better-quality AI research and patents than any other country, while ranking second only to the US in capital investment and AI startups.1 While China’s internet governance is distinct from that of the UK, there are nevertheless important lessons UK policymakers can draw from its experience.

 

2.2       The Chinese Internet Information Service Algorithmic Recommendation Management Provisions (‘Provisions’ hereafter), introduced in China on 1st March 2022, go significantly beyond any existing directives elsewhere. Algorithms are automated recommendation-generating and/or decision-making pieces of code. Empowered by artificial intelligence (AI), algorithmic recommendation and decision-making systems are being increasingly developed and deployed across a wide range of spheres and are therefore important aspects of AI governance. The new Chinese Provisions cover virtually all forms of recommendation and decision-making algorithms and amount to a sweeping effort to regulate the use of AI-based 'algorithmic recommendation services' across society. They have the potential to impact sectors from news, social media and e-commerce to fraud prevention and platform work – all areas where algorithms, powered by AI and machine learning (ML), are increasingly used to take operational decisions and make recommendations.

1 Benaich, N. & I. Hogarth. 2022. State of AI Report 2022. https://www.stateof.ai/.

 

2.3      China’s regulatory strategy is substantially driven by rivalry with the United States for technological supremacy in digital technology and AI. The regulations should therefore be seen in the context of the State Council's ambition to make China a global leader in AI by 2030, both in commercialisation and in defining technical and ethical standards. China's current Five-Year Plan aims to increase the value-added of the digital economy to 10 per cent of GDP by 2025, and to this end commercial AI applications have been successfully embedded in areas like smart city programmes, education and healthcare, and autonomous vehicles.

 

2.4    Evidence is accumulating that strong regulation and governance of AI can advance, rather than hinder, technical developments in the field.2 A popular misconception is that China is advancing in the field of AI due to its weak regulatory environment. On the contrary, China has demonstrated a willingness to aggressively curtail AI applications it deems socially harmful – such as social media algorithms which amplify misleading content and those which direct delivery riders and cab drivers to work damagingly long hours while taking risks on the road to meet targets. By restraining AI investments in such applications, the government is confident it can guide AI development toward socially beneficial ends – such as smart manufacturing, medical diagnoses, and transport systems.3

 

2.5    The new Chinese Regulations include: compulsory reporting of major algorithms in use, for transparency purposes, to the Cyberspace Administration of China; rights for users to opt out of algorithmic recommendation services; and prohibitions on the use of algorithms to uphold monopolies or engage in unfair competition.

 

 

2.6    In line with China's previous regulation of the internet to maintain internal social stability, there are also requirements for algorithms to 'present information conforming to mainstream values', to 'prevent or reduce controversies or disputes', and to avoid publishing 'fake news'. While such directives are plainly not suitable for the UK context, they do demonstrate how states may exercise considerable oversight of algorithmic deployments.

2 Mazzucato, M., M. Schaake, S. Krier & J. Entsminger. 2022. Governing artificial intelligence in the public interest. UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2020-12); Ada Lovelace Institute. 2021. Regulate to innovate: A route to regulation that reflects the ambition of the UK AI Strategy. https://www.adalovelaceinstitute.org/report/regulate-innovate/.

3 Wang, D. 2022. 2021 letter. https://danwang.co/2021-letter/.

 

3.   Potential impacts of the Provisions in governing algorithms relating to work and the platform economy

 

3.1     One area in which the Chinese Regulations are likely to have a significant impact is work, both in China’s platform economy and more broadly. China has the world’s largest platform workforce, both numerically and proportionately. The Provisions pose deep challenges to algorithmic workplace surveillance and management technologies. These are widespread in China – not only for monitoring productivity and performance by measuring keystrokes, communications, and breaktimes, but also for functions like automatic deductions from bonuses for measured infractions of discipline and the use of ‘wearable’ technologies to monitor downtime, bathroom breaks, and physical and emotional responses to work. Such technologies are also widely used in the UK and Europe, so far with little clarity over their legality. Transparency reporting is likely to open such technologies up to public, worker, and trade union scrutiny.

 

3.2   Following publication of the first draft of the articles (and in response to recent media criticism and a government communique) Chinese platform food delivery company Meituan published the principles of its delivery scheduling algorithm, explaining how it generates four delivery time estimates and selects the longest of the four in order to minimise driver stress. It admitted that issues remain with the algorithm and requested feedback for further improvements.4 Thus, the Provisions may prompt many companies to move towards greater transparency and user involvement, even without direct enforcement action.
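The selection rule Meituan disclosed – generating four delivery time estimates and taking the longest – can be illustrated with a minimal sketch. The four estimate names below are illustrative placeholders, not the company's actual parameters:

```python
def longest_delivery_estimate(model_est: float, historical_est: float,
                              distance_est: float, segment_est: float) -> float:
    """Return the most generous of four candidate delivery-time estimates
    (in minutes). Per Meituan's disclosure, the longest estimate is used
    as the rider's deadline, reducing time pressure on drivers. The four
    estimate names here are hypothetical, for illustration only."""
    return max(model_est, historical_est, distance_est, segment_est)

# Example: four hypothetical estimates for a single order
deadline = longest_delivery_estimate(28.0, 32.0, 30.0, 35.0)
print(deadline)  # 35.0 minutes allowed for the delivery
```

The design choice is notable: rather than averaging or taking the shortest (cheapest) estimate, the rule deliberately errs in the rider's favour.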

 

3.3       The Provisions are also likely to have important impacts upon workers and working conditions across the economy by addressing algorithms that may unfairly suppress competition. For example, Article 15 bars the use of algorithms to ‘uphold monopolies or engage in unfair competition’. If enforced, greater interoperability is likely to tilt platform ecosystems in favour of smaller players, by – for instance – removing bans on SMEs selling goods across multiple e-commerce platforms and enabling multihoming. This could in turn potentially reduce price pressure on SMEs by enlarging markets, and benefit platform workers by ensuring they are not deplatformed for working with multiple firms. This is a stronger version of provisions within the EU’s Digital Markets Act, which identifies (hyperscaled) ‘gatekeeper’ platforms and defines limited obligations for them to ensure interoperability with smaller competitors (e.g., enabling cross-platform messaging). Currently, the UK has not explored similar provisions.

 

4.    The EU approach

 

4.1  Regulation in the EU is also evolving. The key acts are the Digital Services Package (DSP; published Spring 2022), which sets out limited regulatory access to and oversight of platforms' algorithms, and the draft flagship AI Act (AIA), published in April 2021 and still under discussion. In contrast with China, draft European regulation maintains an emphasis on upholding fundamental individual rights, including privacy, ethical decision-making and data security. The draft AIA, for instance, includes outright bans on decision-making algorithms in cases where they pose a threat to the ‘safety, livelihoods and rights of people’. While this places important emphasis on individual rights (somewhat absent in China's approach), it does little to address societal-level harms. An illustration of such harms might be impacts on democratic processes: for example, algorithmic recommendations on social media platforms that discourage wavering voters from turning out, thus tipping the balance in an election.

4 Ma, Q. 2022. Meituan Waimai discloses its rider delivery time algorithm for the first time, and launches multiple measures to implement 'algorithm picking up'. iNews. https://inf.news/en/tech/f800c0fef39b5919b06e0be62a2edc6a.html.

 

4.2  It has also been noted that the AIA and the DSP fail to focus on employment and workers’ collective rights. They currently prioritise individual protections on data and against discrimination over, for example, rights to ensure technology does not undermine the spirit of collective agreements or to guarantee worker input into algorithmic management systems.5

 

5.      What measures could make the use of AI more transparent and explainable to the public?

 

5.1     The Chinese Provisions highlight that measures taken in other countries may affect the transparency of AI used in the UK. The Provisions mandate reporting of all major algorithms in use for transparency reasons. To facilitate this, China’s Cyberspace Administration (CAC) began compiling a public-facing online catalogue of commercial algorithms in use by internet giants like Tencent and Baidu in August 2022. As of November 2022, the catalogue features 100 algorithms, listing their proprietors and outlining their key functions.6 This raises a number of issues for UK regulators. UK internet firms operating in China are likely to be affected by these transparency requirements in the future. Because the UK has no analogous reporting requirements, this could lead to a situation where UK firms’ operations are more transparent to the Chinese government and public than they are in the UK. Similarly, Chinese firms such as TikTok may continue to operate in the UK with lower requirements for transparency over the operation of their recommendation algorithms than in China.

 

 

6.      How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

6.1  The UK has so far shied away from creating new institutions or designing new processes by which to govern AI applications. In contrast, China and the EU are increasing investment in state capacity and expertise. China’s Cyberspace Administration is the country’s dedicated internet regulator. It has taken a lead in designing and implementing digital governance measures during recent years, including China’s Cyber Security Law and its Data Security Law. It designs regulations through collaborations with other ministries: the Provisions were issued by the CAC and eight other central government ministries, with further sector-specific guidance then issued through bilateral ministerial guidance documents. This mirrors EU moves to hand oversight of the tech sector under the new DSP regulations to the European Commission, which is developing extensive in-house technical knowledge through hiring and training in order to manage enforcement.

5 Del Castillo, A. P. 2021. The AI Regulation: entering an AI regulatory winter? ETUI Policy Brief, 2021-06.

6 The full list of algorithms is available on the CAC website: http://www.cac.gov.cn/2022-08/12/c_1661927474338504.htm

References

 


Ada Lovelace Institute. 2021. Regulate to innovate: A route to regulation that reflects the ambition of the UK AI Strategy. https://www.adalovelaceinstitute.org/report/regulate-innovate/.

Benaich, N. & I. Hogarth. 2022. State of AI Report 2022. https://www.stateof.ai/.

Del Castillo, A. P. 2021. The AI Regulation: entering an AI regulatory winter? ETUI Policy Brief, 2021-06.

Ma, Q. 2022. Meituan Waimai discloses its rider delivery time algorithm for the first time, and launches multiple measures to implement 'algorithm picking up'. iNews. https://inf.news/en/tech/f800c0fef39b5919b06e0be62a2edc6a.html.

Mazzucato, M., M. Schaake, S. Krier & J. Entsminger. 2022. Governing artificial intelligence in the public interest. UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2020-12).

Wang, D. 2022. 2021 letter. https://danwang.co/2021-letter/.

 

 

(November 2022)