Written Evidence Submitted by Karan Tripathi[1] and Dr Maria Tzanou[2]
(GAI0047)
This submission addresses the first two issues raised in the call for evidence: (a) how effective is the current governance of AI in the UK, and (b) what measures could make the use of AI more transparent and explainable to the public?
The submission addresses both of these issues from a gender perspective, drawing upon research undertaken within the Leverhulme Trust funded project on ‘FemTech Surveillance: Gendered Digital Harms and Regulatory Approaches’.
This submission makes two key interventions. First, the current governance of AI and the existing regulatory framework in the UK are inadequate to address the gender-related risks and harms of AI datafication. Second, any regulatory response must take due account of the disparate impact of AI on individuals based on their gender and ensure an egalitarian AI framework that is rooted in social justice. It must prioritise meaningful participation of affected stakeholders in the design, training, governance, auditing, deployment and use of AI-enabled information systems.
The existing framework for AI governance does not provide adequate protection from gender bias (Wachter-Boettcher, 2017), which might potentially lead to gender discrimination (Perez, 2019). It lacks transparency (Parra et al, 2022) and fails to account appropriately for gender diversity in the ways women and queer individuals are made visible, represented and treated as a result of algorithmic predictions arising from their digital data (Wellner & Rothman, 2020). Due to gendered prejudices in training data, or bias in design and programming, AI systems can expose women and queer individuals to stigmatisation (UNESCO, 2019), employment discrimination (Brown, 2021), misdiagnosis (Perez, 2019), gender-based violence (Sovacool et al, 2021), and other harms.
The UK National AI Strategy does not explicitly address the risks of gender discrimination in its “pillars” of AI governance. The UK’s data protection regime is inadequate to mitigate gender-based discrimination in AI-based technologies. We foreground three major concerns. First, the UK GDPR (following the EU GDPR) maintains gender neutrality and does not extend special protection to the processing of data concerning gender. As a result, gender-based algorithmic processing falls outside the special protections that aim to limit the possibility of automated decisions, including profiling (González Fuster, 2020). Second, the consent provisions, albeit strengthened, do not sufficiently address the higher risk posed to certain social groups by the surveillance of gender-based personal data. Third, the focus of the UK data protection regime on individual rights fails to deal adequately with the potential collective, and often invisible, gendered harms arising from the algorithmic processing of gender-based personal data (Tzanou, 2022).
Ensuring gender justice in AI governance requires a ‘gender-sensitive’ rather than ‘gender-neutral’ regulatory framework. Further, algorithmic design and development must move beyond the male and female binary and acknowledge gender as fluid. This would require algorithmic governance to conceive of gender as elective and self-determined, rather than ascriptive. We therefore call for an egalitarian AI framework which provides protection from both direct and indirect gender-based discrimination arising from AI. This requires all stages of AI governance - design, programming, deployment, and auditing - to be rooted in values of social justice and intersectional feminism.
Recognising Structural Inequalities: Gender-based discrimination should be seen not as isolated but as intersecting with other forms of structural inequality - race, ethnicity, class, nationality and religion (Tzanou, 2020). An egalitarian framework must acknowledge that AI datafication can have an aggravated gendered disparate impact in marginalised and/or vulnerable social contexts (Tzanou, 2020). Further, ostensibly neutral AI policies must not neglect how structural inequalities both inform and are constructed by the design and implementation of AI systems.
Inclusive Representation: The development of ‘anti-bias algorithms’ requires inclusive and diverse representation at the design and development stage. Representation from across the gender spectrum is required to ensure that machine learning is guided by lived experience and by rationalities rooted in feminist and queer perspectives. Rather than relying solely on market solutions, public funding should also be allocated to initiatives that allow for the co-design of systems between developers and users, reflecting the latter’s needs, expectations and interactions with AI-based tools (such as menstrual apps, vaginal wearables, etc.) - bearing in mind that some users might be survivors of gender-based or sexual violence. Further, extending diverse representation to auditing stages can also enhance the proper categorisation of, and elicit adequate redress rights for, often invisible gender-based harms such as misgendered identification and collective rather than individual harms (Tzanou, 2020).
Transparency: Gender issues should be at the centre of the discourse on algorithmic transparency. Gender-sensitive transparency requires openness not just about data collection and annotation (Zou and Schiebinger, 2018) but also about developers’ and deployers’ guiding logic (Wellner & Rothman, 2020), rationalities, and choices at every stage of design and implementation.
(November 2022)
References
Brown, E.A. (2021) The Femtech Paradox: How Workplace Monitoring Threatens Women's Equity. Jurimetrics, 66, pp. 289-329.
González Fuster, G. (2020) Artificial Intelligence and Law Enforcement: Impact on Fundamental Rights. European Parliament Think Tank Study.
Parra, C.M. et al (2022) Likelihood of Questioning AI-Based Recommendations Due to Perceived Racial/Gender Bias. IEEE Transactions on Technology & Society, 3(1), pp. 41-45.
Perez, C.C. (2019) Invisible Women: Exposing data bias in a world designed for men. Random House.
Sovacool, B.K. et al (2021) Knowledge, energy sustainability, and vulnerability in the demographics of smart home technology diffusion. Energy Policy, 153.
Tzanou, M. (2022) ‘Modern Servitude and Vulnerable Social Groups: The Problem of the AI Datafication of Poor People and Women’. IEEE Technology & Society Magazine, pp. 105-108.
UNESCO (2019) I'd blush if I could: Closing gender divides in digital skills through education.
Wachter-Boettcher, S. (2017) Technically Wrong: Sexist apps, biased algorithms, and other threats of toxic tech. W. W. Norton & Company.
Wellner, G. and Rothman, T. (2020) Feminist AI: Can We Expect Our AI Systems to Become Feminist? Philosophy & Technology, 33, pp. 191–205.
Zou, J. and Schiebinger, L. (2018) AI can be sexist and racist -- it's time to make it fair. Nature, 559 (7714).
[1] Research Associate, Leverhulme Trust funded project ‘FemTech Surveillance: Gendered Digital Harms and Regulatory Approaches’. Email: k.tripathi@sheffield.ac.uk
[2] Senior Lecturer in Law, University of Sheffield, Principal Investigator (PI) of Leverhulme Trust funded project ‘FemTech Surveillance: Gendered Digital Harms and Regulatory Approaches’. E-mail: m.tzanou@sheffield.ac.uk