

The Truth Machine

03 Nov 2021

Some years ago, I read The Truth Machine by James L. Halperin. It’s set in a futuristic world where violent crime is rife. A new machine that can detect lies with 100% accuracy makes for the speedy execution of criminal justice. One opportunity to tell the truth. No appeal. Immediate judgment!

I can’t resist a sci-fi reference when discussing artificial intelligence (AI). But I was reminded of The Truth Machine by a recent conversation. A friend of mine (who works in technology sales) was telling me about a system that analyses the words and facial expressions of people taking part in market research.

The issue is that people can self-censor, say what they think they’re expected to say, or otherwise give a misleading response, for whatever reason. Not knowing which responses to trust is the problem the system aims to solve. Now, this doesn’t seem very far away from being a kind of “truth machine”.

Machines v. humans

Yet we’ve become used to machines analysing us: our website activity, our shopping and viewing habits, our social media interactions. And we benefit from better online experiences, like viewing suggestions from our favourite streaming service.

We accept automated decisions when, for example, applying for financial products. This seems OK because we rationalise that these decisions are based on factual data about our circumstances, although information about our behaviour (activity levels, driving habits) can now be used as a factor in calculating premiums, if you subscribe to the tech.

But it seems to me there is a difference between this kind of analysis and being observed and scrutinised by a “truth machine”. Think about the hiring process, for example. How would you feel about being interviewed on camera by a machine that’s been trained to analyse your responses and engagement in order to pick the perfect candidate? With so much being done online, human interaction has all but disappeared in some contexts. Is that a good thing – and is there a limit?

For example, you may have read about the latest robot nurse, Grace. She takes your temperature and monitors your responsiveness using a thermal camera. The makers say that, with human-like facial expressions, Grace will provide interaction for those who are isolated and reduce the burden on front-line healthcare staff. On the face of it, this all seems great. But I do wonder at what point technology becomes overly intrusive and impersonal. Which interactions and decisions should be reserved for humans? And can decisions made by machines be trusted?

Bad data, bad decisions

Undoubtedly, machines can be just as good as people at making real-time decisions based on vast quantities of data, whether that’s analysing brain scans or playing a game of chess. But even machines have to learn, and systems are only as good as the data they’re trained with. There are well-publicised examples of systems making bad decisions because they’ve been primed with historical data reflecting bias or discriminatory practices that we’re more conscious of now. Machines learning the mistakes of the past – or simply not knowing what they haven’t been taught.

Indeed, bias in data, concerns about transparency (how decisions are reached) and accountability for systems present a range of legal, privacy and ethical issues that aren’t easy to solve or legislate for. Many organisations have begun to reassess their wider ethical approach in this digital age. The underlying issues – fairness of outcomes and respect for fundamental human rights – are not new. Standards of professionalism and conduct that have stood the test of time may well be a good starting point.

In our 100th birthday year, Hymans Robertson is undertaking work to articulate and build on our existing ethical approach for the digital age. I hope you can join us at our upcoming event to explore some of the issues we’re considering.
