Here’s a wide-ranging interview with philosopher Daniel C. Dennett. In discussing his memoir, the interview takes a detour into AI. Dennett freely admits he is an alarmist, but worries there are plenty of causes for alarm:
The most pressing problem is not that they’re going to take our jobs, not that they’re going to change warfare, but that they’re going to destroy human trust.
Trust is, of course, hard to build and easy to break. Do we trust AI developers? Do we trust governments or businesses in their AI safeguards and ethics? Might our trust be lost when we realise we’re speaking to an agent and not a human?
Dennett develops his argument further in his recent piece for The Atlantic – ‘The Problem with Counterfeit People.’ He not only notes the dangers of uncontrolled AI, but considers how the collapse of trust will damage society.
He proposes outlawing AI that impersonates a human (noting the irony, given that an early aim of AI was to beat the ‘Turing Test’). To safeguard this, he recommends that AI output be ‘watermarked’ as such, so that it is unable to impersonate.
I have a lot of sympathy with Dennett’s proposal, though I would want to ponder:
- Are there ethical uses of AI where it is broadly beneficial for AI to at least mimic and model some humanity, e.g. in healthcare or education, where that sense of ‘humanity’ is helpful to the task?
- Given his worldview, I take it that Dennett approvingly references Dawkins’s ‘The Selfish Gene’ – if so, why should we be overly protective as a species? Why not let AI evolve?