Trust AI Enough to Realize its Full Potential?


Trust and responsibility: The ethical dimension

There is, of course, another side to this, and that is related to trust. Particularly in the public sector, funded by taxpayers, your customers need to trust the results of your analytics, and especially the effects on their lives. In the 1990s and 2000s, the Metropolitan Police in the UK used statistics to justify stopping and searching a disproportionate number of young black men, on the grounds that they were more likely to be involved in knife (and other) crime. Trust in the Met dropped hugely when this became public knowledge, particularly in London, and there was a big community backlash. It may have been justifiable in terms of the statistics, but not in terms of the outcome.

Public sector organisations planning to use analytics need to remember that the end does not justify the means, and the means must be clear and transparent.

Many organisations need to share data with partners to maximise the effect of analytics. Sharing personal data without a lawful basis, such as consent, is prohibited under GDPR, but it may be worth considering the potential to share insights instead. It is, after all, reasonable to assume that those who default on their tax payments may default on other types of payment too.

Will this sharing, however, be acceptable to customers? This is a conversation that needs to happen more widely. Public sector organisations have a responsibility to protect and safeguard public money by reducing fraud and non-payment, but they also need to be trusted to do so in an ethical way.

The debate about how to achieve these aims is by no means over, but it is clear that analytics has an important part to play in the process. And with machine learning already being rolled out, the broader implications of how algorithms learn, and the impact of AI more generally, will demand greater consumer education.

This article has been republished with permission from SAS.