A case for ethical guides
Navigating biases in machine learning
14 Jun 2018 - Estimated reading time: 2 minutes
The attention being paid to Facebook and the Cambridge Analytica scandal is but one more example of the increased focus on the ethics and biases behind the influence of decision support analytics in our lives. This is especially true of machine learning algorithms, whose complexity can obscure biases in the underlying data or in the people responsible for constructing them.
These issues were highlighted in the recently published report by the House of Commons Science and Technology Committee, entitled ‘Algorithms in decision-making’, and by the Nuffield Foundation's recently launched Ada Lovelace Institute, which is being established to ‘ensure the power of data, algorithms and artificial intelligence is harnessed for social good’.
The dramatic rise in computing power, together with breakthroughs in data science, has the potential to bring significant benefits to society:
For example, within healthcare we have seen the application of machine learning to develop an algorithm for identifying cardiac arrhythmias, using data collected from a simple wearable heart rate monitor.
Not only did the resulting system perform better than the average cardiologist, but applying it to a continuous stream of data from a wearable monitor could in itself make a real difference, identifying arrhythmias that might otherwise be missed because they didn’t show up while the patient happened to be undergoing a test.
Within local government, examples of the benefits of applying these techniques include better targeted interventions for social housing tenants at risk of falling into arrears.
And yet, the downsides from the application of machine learning algorithms can be devastating both for individuals and groups within society:
A high-profile example of bias was the use of algorithms in sentencing, where a prediction of an individual’s likelihood of reoffending was taken into account.
Biases can also be exacerbated through the ongoing use of a system, where new data exhibits a bias that the system learns from. An obvious example here would be the idea of social media feeds informing what news articles someone might be interested in: the resulting ‘information bubble’ means that people rarely come across contrasting points of view.
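The feedback loop described above can be sketched as a toy simulation (the categories and numbers below are invented for illustration, not taken from any real system): a feed that always recommends the most-clicked category, and then learns from the clicks it generates, locks on to an initial preference within a few rounds.

```python
from collections import Counter

def simulate_feed(initial_clicks, rounds):
    """Toy model of a 'filter bubble': each round the feed shows only the
    category with the most recorded clicks, and the user (assumed to click
    whatever is shown) feeds that choice straight back into the data."""
    clicks = Counter(initial_clicks)
    shown = []
    for _ in range(rounds):
        # The feed recommends the currently most-clicked category...
        top = max(clicks, key=clicks.get)
        shown.append(top)
        # ...and the resulting click reinforces that category's lead.
        clicks[top] += 1
    return clicks, shown

# A tiny initial imbalance (3 clicks vs 2) is all the system ever learns from.
clicks, shown = simulate_feed({"viewpoint_A": 3, "viewpoint_B": 2}, rounds=10)
# After 10 rounds the feed has shown viewpoint_A every single time.
```

The point is not the specific numbers but the shape of the loop: the data the system learns from is itself produced by the system's earlier decisions, so a small starting bias is never corrected.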
Other examples include the frequency (or lack thereof) with which high-paying jobs might be advertised to women compared with men.
More generally, making use of an individual’s health data has been suggested as a way of improving the price of insurance for the customer, and a way of managing payouts for the insurer. But what impact might all of this have on the societal value of insurance? At what level of granularity should the data be interrogated before we worry about the consequences for ‘pooling and sharing’?
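The pooling concern can be made concrete with simple arithmetic (the figures below are invented for illustration): as pricing moves from one community-rated premium to premiums based on each individual’s own data, low-risk customers pay less, but the cross-subsidy that made cover affordable for high-risk customers disappears.

```python
def pooled_premium(expected_claims):
    """Community rating: everyone pays the average expected claim."""
    return sum(expected_claims) / len(expected_claims)

def individual_premiums(expected_claims):
    """Fully granular pricing: each premium equals that person's own
    expected claim, so no risk is shared across the pool."""
    return list(expected_claims)

# Hypothetical expected annual claims for five customers.
risks = [100, 100, 100, 100, 600]

pooled = pooled_premium(risks)         # every customer pays 200
granular = individual_premiums(risks)  # the high-risk customer pays 600
```

Total premium income is the same either way; what changes is who bears the cost, which is exactly the ‘pooling and sharing’ question.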
Here at Hymans Robertson, we apply algorithms in various areas, including providing recommendations as part of our Guided Outcomes defined contribution platform. We certainly recognise that the world has changed, and is still changing: we no longer need to rely on ‘guardians’ of algorithms and decision support analytics. Nevertheless, the landscape is complex and it is clear that, as a society and as users of these algorithms, we will require ‘guides’ to help us navigate the ethics and biases that are out there.