AI Pause

In March 2023 an open letter, now signed by over 30,000 people, called on all AI labs to pause for six months the training of AI systems more powerful than GPT-4. The rationale was to use this time to develop a set of shared safety protocols.

You can read the open letter here – Pause Giant AI Experiments: An Open Letter

Among its key signatories were Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Max Tegmark.

Whilst no formal ‘pause’ has taken place, it’s certainly started the conversation. Here, in a conversation hosted by Reuters, Max Tegmark, Stuart Russell and Gaia Dempsey consider the ongoing need for an ‘AI Pause’.

I’ve already argued for the need for more people to be involved in AI ethics, not least pastors and theologians. Dempsey favours a multi-disciplinary approach:

“We need a much more multi-disciplinary, democratic-style discussion around the future of AI systems and how we want them to work.”

Tegmark also recently spoke to the Guardian on this topic:

“AI-focused tech firms locked in ‘race to the bottom’, warns MIT professor”

AI Girlfriends

As we know, any new technology has the power to do good, whilst at the same time it will inevitably be used for the creation of p*rn and the exploitation of women. AI, it seems, is no exception. Some have argued that AI could create “victimless” p*rn, though I strongly suspect that ignores the impact on the consumer.

Freya India writes persuasively about the impact that ‘AI Girlfriends’ might have in her essay ‘We can’t compete with AI Girlfriends’.

A chatbot that talks intimately might seem attractive to the lonely – but could this really be a substitute for human interaction? India highlights the risks: the objectification of women, unrealistic expectations of relationships, and the fuelling of image insecurity for women.
Is there any hope in this? India’s hope is that AI leads us to miss and crave humanity again:

The only faint glimmer of optimism I can find in all this is that I think, at some point, life might become so stripped of reality and humanity that the pendulum will swing. Maybe the more automated, predictable interactions are pushed on us, the more actual conversations with awkward silences and bad eye contact will seem sexy. Maybe the more we are saturated with the same perfect, pornified avatars, the more desirable natural faces and bodies will be. Because perfect people and perfect interactions are boring. We want flaws! Friction! Unpredictability! Jokes that fall flat! I hold onto hope that some day we will get so sick of the artificial that our wildest fantasies will be something human again.

AI in Warfare

In his second Reith Lecture in 2021, Stuart Russell addresses the theme of AI in warfare.

The first thought is that this is a red line not to cross – of course we wouldn’t want to create a robot army… but ethical dilemmas are rarely that easy.

  • Would battles fought between robot armies result in fewer human deaths?
  • Would autonomous weapons allow for ‘surgical strikes’ against key leaders, again to minimise the loss of life?
  • If assassination by drone is already considered legitimate (it’s certainly a modus operandi of the US military), why would we rule out the use of micro drones?

Aside from these, one can also imagine humanitarian and defensive uses, such as predicting the actions of an enemy.

It seems to me that the biggest single red line here is giving AI the decision-making power to take a life. The ethical case against these kinds of weapons is made powerfully at Lethal AWS. Their short film ‘Slaughterbots’ below, reminiscent of the Black Mirror episode ‘Hated in the Nation’, makes a powerful case for not crossing some basic red lines.

What does it mean to be human?

That’s the refrain of Prof. John Wyatt – he argues that the emergence of AI is a revolutionary event which requires us to ask some fundamental questions about humanity.

This wide-ranging discussion with Glen Scrivener, of the Speak Life podcast, considers some of the following topics:

  • How do we learn? Is there a difference between mammalian intelligence and machine intelligence?
  • What is language and how do we use it?
  • What is a person? How do we train AI to understand and appreciate personhood and the distinctive nature of humans? How do we avoid machine learning human bias?
  • What kind of society do we want to live in? How can human life flourish? What’s the end-game of AI?

I love Wyatt’s thought on the future: if for many the hope of AI is a “techno-utopia” which removes ‘friction’ from life, how does that square with the reality that so much human growth and progress (at both macro and micro levels) has come through ‘friction’?

For Wyatt, as a Christian, a key word is ‘redemption’ – how can we extract the benefits and mitigate the risks so as to truly use AI for ‘good’?

John Wyatt published his book on AI in 2021 – ‘The Robot Will See You Now’

Theologians and AI

Yesterday, there was a fascinating discussion on the future of AI. Elon Musk sat down with Benjamin Netanyahu, Max Tegmark and Greg Brockman.


The spread of AI scenarios on offer ranges from ‘Human Extinction’ to ‘Paradise’ – quite a variance!

There’s some interesting discussion early on about ‘Building Trust’ – could AI be constrained by ‘Formal Verification’, to prove itself to be ‘good’? There also seems to be a genuine desire to see power and the benefits of AI shared across society – “to see all boats rise”.

But deeper questions develop.

Musk asks:
– “Is there work in Heaven?”
– “What could be bad about living in Paradise?”
– “Will death be a choice?”

So much of our language around these questions is not just steeped in philosophy and ethics, but very often, specifically Christian ethics.

Might it be that theologians and pastors have wisdom to share here? Far from being anti-progress or anti-science, Christians have been at the forefront of Western revolutions in science and technology across the centuries. Why not this one?

I think ‘Theologians’ might be able to help with some of these BIG questions!

Building Trust in AI

This week I sat down with a friend who’s been working in AI for the last 25 years. He was generous enough to give me a couple of hours of his time for just the cost of a cappuccino!

Here are a few things that struck me about our conversation:

  • Many of those who are sounding the warning alarm around the risks of AI are not luddites or pessimists. Rather, it’s those who’ve been working in AI for a long time, who have built the systems and understand the dangers.
  • There are lots of things within the field of AI that even the experts don’t understand or at least can’t explain. This makes it difficult to predict future outcomes and harder still to reverse negative effects.
  • The tremendous potential for good in AI will provide little compensation if the worst fears are realised!

Moving forward we will need to consider:

  • How do we protect ourselves from bad actors who deliberately seek to employ AI for harm?
  • What ethical frameworks can we agree to adopt globally, and what frameworks may we want to adopt unilaterally – simply because it’s the right thing to do?
  • How can we better understand AI and can we teach AI to explain to us how and why it works?

This third point is vital: “Explainable AI”. The key word most often attached to “XAI” is TRUST. If we want AI to be good news for humanity and its benefits to be adopted, then we need to build trust that it’s working for our good.

Here’s a helpful primer from IBM on “XAI”.
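To give a flavour of what ‘explaining’ a model can actually look like, here’s a minimal sketch of one of the simplest XAI techniques: permutation importance, which asks how much a model’s accuracy drops when each input feature is shuffled. To be clear, this is a toy illustration of the general idea – the dataset and model below are arbitrary stand-ins, and dedicated XAI tooling (SHAP, LIME and the like) goes much further.

```python
# A toy sketch of "explainable AI" via permutation importance:
# shuffle each input feature in turn and measure how much the
# model's test accuracy drops. A big drop suggests the model
# leans heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Arbitrary example dataset and model, chosen purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Repeat the shuffling several times to average out noise.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even this crude technique gives a human a foothold – “the model cares most about these inputs” – which is exactly the kind of transparency that builds trust.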

Living with AI

18 years ago I was starting a Computer Science course at the University of Sussex… it doesn’t feel that long ago, but things have changed so much. We were getting excited about ubiquitous computing and wearable tech back then, both of which are now just normal life.

I recently listened to the 2021 Reith Lectures with Stuart Russell. Over four lectures Russell addresses the topic of “Living with Artificial Intelligence” – specifically, the challenges and opportunities of General Purpose AI.

What struck me most is the potential of AI to radically transform any and every area of life. If this is possible, then AI is no longer simply the domain of coders and mathmos, but will need diverse, multidisciplinary skills to shape it for good. One essential field, perhaps overlooked in the excitement of innovation, and one which has been part of my work and study over the last decade, is ethics – so I was pleased that this was a key discussion point for Russell.

But I wonder, like Russell, if we are in danger of leaping too quickly to the (very real and rewarding) opportunities of AI, before we build an ethical framework to mould and govern it. Killer robots and built-in bias aside, I’m more concerned that AI will make us lazy, and about the impact that will have on how we view work and creativity!

I’d love to think some more about AI ethics… who should I be reading / following to deepen my understanding? Grateful for your thoughts and recommendations…

Stuart Russell