We welcome the opportunity to respond to your Committee's call for evidence on how representative democracy can be supported, rather than undermined, in a digital world. Microsoft is one of the world's leading digital engineering companies and operates a hyperscale cloud business that supports AI innovation. Established in the UK for over thirty years, Microsoft UK today employs some 5,000 people, including over 150 world-class computer scientists at our research lab in Cambridge, and works with over 25,000 partners across the country.
General
Digital technology has impacted every part of our lives, from delivering personalised recommendations on music or film streaming services to helping us book our next holiday or register for recycling services with the local authority. This digital revolution has democratised access to information that twenty years ago was only available to a small section of society. It has also opened up the political process. At the time of the 1997 general election the UK had three 24/7 news channels and a number of local and regional media outlets. The 2017 general election campaign was covered by traditional media and fought in the digital space, with all campaigns making use of targeted advertising, search engine optimisation and social media both to communicate directly with voters and to solicit donations. Whilst we would not comment on whether this has been a net positive or negative, it is worth noting that all technology divides opinion and has the potential to be used for positive or negative ends.
Microsoft is not a social media company, although we offer services, such as LinkedIn and GitHub, that share some of the characteristics of social media platforms. Data-driven and automated technologies are being developed and used across a number of Microsoft's products, from Azure Cognitive Services, through our search engine Bing, to the Xbox games platform. These technologies can be used separately or combined to yield systems that perceive, classify, predict, or otherwise reason in an automated manner. Microsoft has been developing and embedding an ethical framework in our AI development which stresses that AI systems should treat all people fairly (fairness); empower everyone and engage people (inclusiveness); perform reliably and safely (reliability); be understandable (transparency); be secure and respect privacy (privacy and security); and have algorithmic accountability (accountability).
Transparency and accountability are responses to the challenge of how to engage with the public before deployment of algorithms in decision making. System explainability is an essential element in building public trust in AI, especially when AI is deployed in situations that may impact people’s lives. Many of the most powerful techniques currently available, such as deep learning neural networks, are often opaque, even to the engineers who created them. It is critical, therefore, to develop effective and trusted ways to describe these systems in a manner that is meaningful and understandable, but without compromising system quality and accuracy, or exposing trade secrets.
Just as we can seek a second opinion or redress when consulting a doctor over a medical issue, or a lawyer over a legal challenge, in the world of algorithms it is equally important to know who is accountable when something goes wrong. Maintaining public trust will require a clear line of sight over who is accountable for an AI system's operation in the real world. It is vital that all organisations, from business to government, academia to civil society, are aware of the potential ethical issues raised by AI and have appropriate processes in place at all stages of design, development, and deployment. This will go a long way towards addressing accountability concerns.
Misinformation
In April 2018 we launched our Defending Democracy Programme. Through it, Microsoft committed to individual and sector-wide actions to play our part in tackling misinformation online and countering those who seek to distort democracy through disinformation. As part of this work, in the summer of 2018 we entered into a partnership with NewsGuard Technologies to empower voters by providing them with high-quality information about the integrity and transparency of online news sites.
NewsGuard employs conservative and liberal media analysts who review online news sites, compiling their findings across nine journalistic integrity criteria into a “Nutrition Label”, which then maps to a Red/Green Reliability Rating. The objective is not to preclude access to any news content but to empower readers with additional information on the source and reliability of that content as they consume and/or share it. The NewsGuard service is available as a free add-on for the Microsoft Edge browser, as well as for the Google Chrome and Apple Safari browsers.
Technology and democratic engagement
The productivity gains that modern cloud-based technology can offer should be taken up by Parliament and Government to drive better data insights for policy makers, and to allow Members of Parliament to better serve their constituents whilst in Westminster.
The Defending Democracy Programme also works with stakeholders in democratic countries globally to protect campaigns from hacking through increased cyber resilience measures, enhanced account monitoring and incident response capabilities. As part of the programme we brought our AccountGuard service to the UK in autumn 2018. AccountGuard is offered on a non-partisan basis, at no additional cost, to all UK political parties registered with the Electoral Commission at national, devolved, regional and local level, as well as to political technology companies and political non-profits that are current Office 365 customers.
Political organisations that register for the AccountGuard programme can apply the service to both the organisational and personal email accounts of staff members, employees, board members, consultants, advisors, volunteers, interns, and other associates who choose to participate. This allows Microsoft's global security centre to notify participating entities, in a unified way, of attacks by nation-state actors that we track across both organisational and personal outlook.com and hotmail.com email systems, and then to work to defend against an attack or to help the organisation or individual secure and recover their systems following a cyber-attack.
The Defending Democracy Programme also explores technological solutions to preserve and protect electoral processes; engages with national, devolved and local officials to identify and remediate cyber threats; and defends against disinformation campaigns in partnership with organisations such as NewsGuard, alongside academic institutions and think tanks dedicated to countering state-sponsored computational propaganda and disinformation. The recent paper by the Centre for Data Ethics and Innovation on deepfakes highlighted one of the disinformation challenges posed by AI, an issue that our company president, Brad Smith, addresses further in his new book ‘Tools and Weapons’.
We look forward to continuing to work with the Committee on this inquiry.
Yours sincerely
David Frank
Government Affairs Manager
Microsoft UK