Building Trust in AI

This week I sat down with a friend who’s been working in AI for the last 25 years. He generously gave me a couple of hours of his time for just the cost of a cappuccino!

Here are a few things that struck me from our conversation:

  • Many of those sounding the alarm about the risks of AI are not luddites or pessimists. Rather, they are people who have worked in AI for a long time, who have built these systems and understand the dangers.
  • There is a lot within the field of AI that even the experts don’t understand, or at least can’t explain. This makes it difficult to predict future outcomes, and harder still to reverse negative effects.
  • The tremendous potential for good in AI will provide little compensation if the worst fears are realised!

Moving forward, we will need to consider:

  • How do we protect ourselves from bad actors who deliberately seek to employ AI for harm?
  • What ethical frameworks can we agree to adopt globally, and what frameworks may we want to adopt unilaterally – simply because it’s the right thing to do?
  • How can we better understand AI, and can we teach AI to explain to us how and why it works?

This third point is vital. The key word most often used to describe “Explainable AI” (XAI) is TRUST. If we want AI to be good news for humanity, and its benefits to be adopted, then we need to build trust that it’s working for our good.
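To make that a little more concrete, here’s a rough sketch (my own toy example, not something from our conversation or the IBM primer below) of one popular XAI technique: permutation feature importance. The idea is simple: train a model, then scramble one input at a time and see how much the predictions suffer. The dataset and model are just illustrative stand-ins.

```python
# A minimal sketch of one common XAI technique: permutation feature
# importance, using scikit-learn. The dataset and model are illustrative
# stand-ins, not part of any real deployment.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's accuracy
# drops; a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even a simple readout like this helps build trust: it turns “the model said so” into “the model leans heavily on these particular measurements”, which a human can actually sanity-check.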

Here’s a helpful primer from IBM on “XAI”.
