Written evidence submitted by the University of Nottingham (ADM0018)
- Decision Making is different to Decision Support.
Specifically, in many contexts, from policy and planning (e.g., flood risk planning) to industry (e.g., investment and insurance) and the military (e.g., IED detection), algorithms are very widely used as Decision Support tools, while algorithms for Decision Making are comparatively rare.
Decision Support algorithms process information, often from multiple different sources (incl. quantitative sources such as sensors, and qualitative sources such as expert knowledge), in order to generate a human-readable output. It is then the human, the decision maker, who, on the basis of this decision support, makes the decision.
The important point is that Decision Support algorithms are in practice more widely used than Decision Making algorithms, yet they suffer from many similar problems, including:
- Common lack of communication of uncertainty to the decision maker, risking uninformed decision making. That is, most data are uncertain, and keeping track of and communicating this uncertainty is vital, in particular when combining data from different sources.
- Common lack of communication of modelling assumptions which may bias the decision support, again, risking poor decision making.
- Non-transparency of decision support systems and their outputs. That is, decision makers cannot interpret whether system outputs ‘make sense’ or how they were derived, risking a lack of validation and trust.
Thus, decision support systems may, for example, lead a human decision maker to make biased decisions without being aware of it.
Recommendation: It is important for discussants to understand the difference between automated Decision Support and Decision Making and I would recommend a separate discussion of Decision Support systems.
- My second point is to highlight a small set of US literature which captures key challenges in the practical deployment of decision making algorithms, such as the challenge of biased decision making arising from algorithmic learning on biased samples. These may be useful in case they have not been considered already:
- Solon Barocas and Andrew D. Selbst. Big data’s disparate impact. California Law Review, 104(671):671–732, 2016.
- Federal Insurance Office. Report on protection of insurance consumers and access to insurance. US Department of the Treasury, 2016.
- Maureen K. Ohlhausen, Terrell McSweeny, Edith Ramirez, and Julie Brill. Big data: a tool for inclusion or exclusion? Federal Trade Commission, 2016.
October 2017