Ethical AI: A call for transparency
21 Dec 2018
In a previous blog post, we explored the need for a better understanding and awareness of the challenges around ethics and biases. A number of bodies are looking seriously at these issues, for example the Ada Lovelace Institute and the Alan Turing Institute, and earlier this summer the Royal Statistical Society and the Institute and Faculty of Actuaries held a series of workshops on the industrialisation and professionalisation of data science.
But looking ahead to 2019, how else might we look to tackle some of these issues?
It is worth noting that much of the public debate tends to focus on data privacy and consent, but it is equally important for there to be discussions around methodology and assumptions.
If you have some spare time over the Christmas period, I would recommend getting hold of the book 'Weapons of Math Destruction' by Cathy O'Neil, or having a look at some of the talks by Rachel Thomas at fast.ai.
But to build confidence and trust with the wider public, who will be on the receiving end of predictions made by these models, we should expect companies implementing data-driven decision-making solutions, particularly in the public sphere, to seek out challenge and scrutiny from both the public and the wider community.
When a colleague and I had the opportunity to speak at the EARL conference earlier this year, what struck me most about it, and about many other conferences across the data science community, was the willingness of people to come together and share their experiences and working practices openly.
This is not about giving away intellectual property: in the vast majority of examples, the specific technical detail of a chosen algorithm is not the source of value for how data science and machine learning are benefitting organisations (we are not, for example, typically talking about quantitative hedge funds that depend on the smallest of edges to be profitable).
And whilst there are sometimes arguments for being cautious about too much transparency, erring on the side of publishing methods and methodology is, in general, the only credible approach for organisations that wish to be seen as trusted advisors.
To that end, I think that in 2019 community-driven efforts should play an increasing role in ensuring an appropriate culture of transparency and openness around the ethical deployment of AI.
We've a dedicated group on LinkedIn which focuses on the issues that affect our society and on the actions we should be taking to help future generations. Please do get involved. You can join the conversation here.