● Filament is a UK-based machine learning and artificial intelligence consultancy that offers cross-industry advice and consultation on the use of machine learning techniques for use cases including automated and semi-automated decision-making.
● We are excited about the benefits that the use of algorithms offers to society and believe that algorithms will play an increasing role in helping humans make better decisions in the coming years.
● We understand that there is a common fear among lay people that machines will take their jobs, and we aim to educate the public that this is not necessarily the case: in our experience, humans and machines work best hand in hand.
● We acknowledge and support the UK government’s development of regulations on algorithms in decision-making processes, to ensure that their use grows steadily and safely as society becomes increasingly data-driven.
● It is undeniable that the automation of human tasks presents tremendous societal benefits for both the private and public sectors. Recent applications have shown great progress in human augmentation, for example making medical professionals more effective at tasks such as early-stage disease diagnosis.
● The growing use of algorithms and the scale of their tasks also increase the consequences of a possible failure, potentially leading to mechanical failure, financial loss or biased judgments (e.g. influencing political elections or legal verdicts).
● Filament believes that the use of algorithms in the private and public sectors will grow exponentially in the coming years. It is, therefore, necessary to establish the legal bases and principles for using them safely and responsibly. With a regulated and transparent framework, the use of algorithms will be highly beneficial to society.
Filament considers that:
● Contrary to the perception of an algorithm as a neutral mathematical concept, an algorithm, when combined with underlying data, is a social construction that reflects the biases of its contributors.
● If algorithms are not used responsibly, especially in sensitive sectors such as finance, insurance, law and recruitment, they can lead to the infringement of human rights.
Filament acknowledges that bias:
● Is inherent to algorithms and necessary for them to generalize beyond their training data.
● Can take several forms: technical, through data collection and the choice of the algorithmic formula; and human, as the assessment of a system’s performance is subject to an operator’s subjective judgment (a minimal illustration of the technical form follows this list).
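As a minimal illustration of technical bias, the following hypothetical Python sketch (our own illustration; the ‘recruitment’ scenario and all variable names are invented for this example, not drawn from any real system) shows how a standard model trained on historically biased decisions reproduces that bias in its own outputs:

    # Hypothetical sketch: bias in historical training data propagates to a model.
    # Assumes numpy and scikit-learn are installed; all data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # 'skill' is the genuinely relevant feature; 'group' is a protected
    # attribute (0 or 1) that should be irrelevant to the decision.
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical hiring decisions were biased: group-1 candidates needed a
    # higher skill level to be hired, so the labels encode human prejudice.
    hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The model assigns a large negative weight to the protected attribute:
    # it has learned the historical prejudice, not just skill.
    print("coefficients [skill, group]:", model.coef_[0])

    # Two equally skilled candidates now receive different hire probabilities.
    candidates = np.array([[0.0, 0], [0.0, 1]])
    print("P(hire) by group:", model.predict_proba(candidates)[:, 1])

Nothing in this code is malicious; the prejudice enters purely through the labels, which is why auditing training data matters as much as auditing the algorithm itself.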
Given the aforementioned risks, Filament recommends:
● Promoting wider education around algorithms. As the use of data science grows, so does the ‘black box’ perception of algorithms; it is therefore necessary to make sure that contributors and users know that their algorithms are likely to reflect human prejudices.
● Encouraging the development of ‘best practices’, established and ratified by a consortium of developers and users. These ‘best practices’ should apply to the entire construction process of the algorithm - from its formulation to the analysis of its results - and to all actors involved.
● That all major players should support and contribute to initiatives such as the IEEE project “Algorithmic Bias Considerations”, which aims to create a certification to spread the use of unbiased algorithms.
The current situation regarding the use of algorithms is as follows:
● Some decision-making algorithms currently in use take a ‘black box’ approach, and the user is not informed as to why a decision has been made (for example, algorithms with complex architectures, such as neural networks, produce results that humans can hardly anticipate or justify).
● Nevertheless, considering the consequences automated decisions can have on people’s lives and on society, there is a definite need for transparency and justification in how these decisions are made.
● Incoming EU regulation, due to come into effect in May 2018, makes a first step towards containing these risks by stating that citizens have the right to understand why a decision was made when it was made by a machine[1].
Suggested solutions include:
● Making decisions more transparent by combining the algorithms with decision trees. Using an interpretable tree in combination with an opaque algorithm allows explanations to be given as to why a decision has been made (see the sketch after this list).
● Making the code available to end-users: some algorithms are already open source, and whilst a lay person may not be able to understand how an algorithm comes to a decision, a programmer likely would. However, complete transparency raises an issue, as an algorithm can be a company’s intellectual property and main source of revenue. A potential solution would be for companies to reveal which technique they use to come to a decision, e.g. stating that an artificial neural network has been used.
● Another potential solution is to ‘audit algorithms’: external bodies could be involved to guarantee the integrity of decision-making processes.
● Research is currently under way into making neural networks give reasons for their decisions. More resources should be put into this research to enable transparency.
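To make the decision-tree suggestion above concrete, here is a minimal, hypothetical Python sketch (our own illustration, using an arbitrary public dataset and arbitrary parameters, not a reference to any deployed system) of a ‘surrogate’ tree: an interpretable tree is fitted to mimic an opaque model’s outputs, so that its decisions can be explained as short chains of human-readable rules:

    # Hypothetical sketch: explaining an opaque model with a surrogate tree.
    # Assumes scikit-learn is installed; dataset and parameters are arbitrary.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # 1. The opaque model whose decisions we want to explain.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # 2. A shallow tree trained to imitate the black box's outputs
    #    (note: it learns from the black box's predictions, not the labels).
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # 3. Human-readable rules approximating how the black box decides.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

    # 4. Fidelity: how often the surrogate agrees with the black box.
    fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
    print(f"surrogate fidelity: {fidelity:.2%}")

The surrogate is only an approximation, which is why reporting its fidelity alongside the explanation matters; a low-fidelity surrogate would give misleading reasons.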
Overall, it is apparent that companies in both the public and private sectors could be more transparent about how decisions are made, which in turn would make the algorithms more accountable.
2.3. The implications of increased transparency in terms of copyright and commercial sensitivity, and protection of an individual’s data:
● Certain pieces of software are powered by unique algorithms created to serve a specific purpose or solve a specific problem. Excessive transparency carries the danger that some companies will expose their intellectual property.
● Some algorithms are patented, such as the LZW compression algorithm, meaning that the creator or business is protected in terms of intellectual property. However, some patented algorithms are used without any fee going to the patent holder: the GIF image format, for example, uses the LZW compression algorithm, yet many people have used it without paying royalties, as it has become the de facto method for sharing animations on the Internet.
● It is important to consider that an algorithm and its supporting data are separate: an algorithm’s code could be made available for audit or inspection without exposing any of the implementing company’s underlying sensitive data (a minimal sketch after this list illustrates the point).
● In the near future, legislation should emphasize transparency of input data and multi-agent supervision of input and output analysis.
● Filament supports the initiative of regulating the use of algorithms as stated in the EU General Data Protection Regulation 2016, especially the right “not to be subject to a decision based solely on automated processing”.
● Filament recognizes and supports the need for extended legislation concerning automated decisions in the UK, regardless of Brexit.
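As a minimal illustration of the separation between an algorithm and its data (our own sketch, implementing the LZW algorithm already mentioned above), the following compressor could be published in full for audit or inspection without revealing anything about the data a company runs through it:

    # Minimal sketch of LZW compression: the algorithm's code contains no
    # user data, so publishing it for audit exposes no sensitive information.
    def lzw_compress(text: str) -> list:
        # Start with a dictionary of all single characters (codes 0-255).
        dictionary = {chr(i): i for i in range(256)}
        next_code = 256
        current = ""
        output = []
        for ch in text:
            candidate = current + ch
            if candidate in dictionary:
                current = candidate  # keep extending the current phrase
            else:
                output.append(dictionary[current])   # emit the known phrase
                dictionary[candidate] = next_code    # learn the new phrase
                next_code += 1
                current = ch
        if current:
            output.append(dictionary[current])
        return output

    print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))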
[1] http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32016R0679