The Robot Will See You Now

This book is probably the best place to start for an introduction to Christian theological engagement with Artificial Intelligence and related emerging technologies.

It’s not a book so much as a collection of essays on a breadth of topics, brought together by editors John Wyatt and Stephen N. Williams. Though each essay is individual and deliberately not forced into following an arc, the book divides naturally into three parts: historical and cultural background and analysis, theological frameworks, and ethical and social issues.

Despite the individual authors and light-touch editing, some consistent themes emerge across the chapters, and they focus on the unique nature of being human:

  • What does it mean to be human?
  • In what sense do we bear God’s image?
  • How does the rise of AI mimic or challenge our humanity?
  • How should humans and ‘robots’ interact?

Of course, a huge number of other areas are discussed, but so often they return to the question of humanity. Healthcare, sextech, creativity, work, surveillance – these practical topics build naturally on the theological frameworks which argue for human uniqueness and thoughtful engagement with technology.

This book will be one to return to again and again, not to read cover to cover, but to come back to particular chapters, either for building theological frameworks or hearing again considered responses to challenging ethical issues.

Remaking the World

It’s not often I read a book that I genuinely enjoy. I tend to read for information, certainly not for fun. But Andrew Wilson’s ‘Remaking the World’ really was fun! It focuses on the year 1776 – indeed its subtitle is ‘How 1776 Created the Post-Christian West’. Wilson notes the sheer number of history-defining and shaping events which occurred in this single year across the world, but especially in Western Europe and North America.

It’s fun because it’s full of life, curiosity and little-known facts about well-known people, and you sense that Wilson had lots of fun researching and writing this book. The book weaves its way through the contours of Enlightenment, Exploration and Invention, considering history, geography, culture, philosophy and religion. Wilson approaches big questions and sensitive topics in a spirit of humility and thoughtful enquiry, not settling for simple answers, while helpfully distilling and summarising bigger arguments.

In the context of thinking through technological ethics, I was especially drawn to his chapter entitled ‘Machines’. As you walk through the Science Museum (London or Manchester) or perhaps the National Railway Museum in York, you don’t really experience how world changing these steam engines or cotton mill machines were. Wilson shares vivid descriptions of a newly industrialised North of England – “For every Paris… there was a Manchester”. He quotes from the travel writer Alexis de Tocqueville with this damning verdict:

humanity attains its most complete development and its most brutish; here civilisation works its miracles, and civilised man is turned back almost into a savage.

Here de Tocqueville captures perfectly the progress and regression, the blessing and the curse of technology. Wilson concludes the chapter with that both/and verdict on technology, using Shelley’s Frankenstein to illustrate:

Today, for every mile of railway track in the world, there are a hundred AK-47s… industrialization (like fire) is a Promethean triumph as well as a Frankensteinian monster. Which of the two metaphors predominates – whether running water compensates for Hiroshima, or antibiotics for the Somme, or the growth in wealth for the overweening pride that accompanies it – probably depends on where you are sitting.

There is a pointed thought here for those developing new technologies. Do the benefits outweigh the curse? Triumph or Monster? And who gets to decide which it is?

Christmas Lectures 2023

The Royal Institution presented its Christmas Lectures in the year just gone on the theme of Artificial Intelligence. The three hour-long episodes are available on iPlayer.

Presented by Professor Mike Wooldridge of Oxford University, these are not so much lectures as interactive science classes, with the audience of schoolchildren very much a part of the learning experience.

It’s fascinating to see the building blocks of what it might look like to teach AI in a school setting and to see how the general public can be informed about the benefits and dangers of machine learning technologies. It’s a great watch and a real masterclass in making complex ideas simple and understandable.

What will your countervailing power be?

It was an honour to be invited to attend the 43rd Annual Conference of the British Computer Society’s Specialist Group on Artificial Intelligence earlier this month. I was invited to take part in a panel discussion with Prof. John Naughton, Prof. John Stevens and Dr. Giovanna Martinez. Here’s the rough text of my closing statement:


Over the last year I’ve been following developments in the world of AI with great interest and it’s clear that the potential for good is enormous – as is the potential for harm.

A simple example of this can be found in the area of voice generation: it was a delight to hear again the classified football results read by the reanimated voice of the late James Alexander Gordon; less good was the emergence of deepfake audio, most notably of Sir Keir Starmer. The same technology used for good and for ill.

We’re told that AI will be transformative in so many areas:

  • Computer vision will transform medical diagnosis… and will be used to automate surveillance societies.
  • Natural language processing will improve real-time translation… and be used for industrial-scale plagiarism.
  • Robotics will scale productivity… and will leave us with huge questions about the value of work.
  • Machine learning will lead to technological advances… and will lead us to doubt the uniqueness of humanity.

Change is coming – how do we ensure it’s for good?

Find a countervailing power that gives you a strong enough framework to shape AI for the good of all.

I think I’ve found one… what will yours be?

How do we ensure that AI brings change for the good?

It was an honour to be invited to attend the 43rd Annual Conference of the British Computer Society’s Specialist Group on Artificial Intelligence earlier this month. I was invited to take part in a panel discussion with Prof. John Naughton, Prof. John Stevens and Dr. Giovanna Martinez. Here’s the rough text of my opening contribution:


As my biography notes, apart from some brief undergraduate studies, I have no technical expertise in the field of Artificial Intelligence, so I have a deep respect for many of you here who are indeed experts in your field and have been pioneering this science long before it was in vogue.

I believe it’s now uncontroversial to argue that this field needs to become multi-disciplinary, such is the potential impact of new technologies on society. Though, I suspect at this stage you may need some convincing of the value of Christian theology within the conversation.

What does it mean to be human? What is the purpose of work? How can I build an ethical framework? How can society flourish? What is a compelling vision for the future?

These are the kinds of high-level questions that AI pioneers are beginning to ask – or, if not, should be asking. They are also questions that theologians have been pondering for hundreds of years.

Christians have been at the forefront of societal revolutions in the past; they have been artists and authors, musicians and medics, and of course scientists. They have often helped to build the structures to support revolutions; committed to education, health services and care of the vulnerable. Earlier today some of you went on the ‘walking tour’ – try and walk 100m in Cambridge without seeing the revolution that is Christianity at work – as historian Tom Holland notes, we are inescapably “Christian”.


The AI revolution is incredibly powerful – so what countervailing power will keep it on track? What will make it good?

In recent weeks we’ve seen that neither nation states nor large tech companies have all the resources to manage this well. Do we have confidence that OpenAI or indeed Rishi Sunak are working for the good of all? Will a board of trustees, state regulation or global treaties be enough to ensure a good outcome?

I would contend that Christian theology offers a more robust countervailing power than these, because it is unswervingly committed to human flourishing over and above self-interest.

You’ve no doubt seen ‘The Trolley Problem’ – there’s a runaway tram heading down a track with a fork in it. The tram is about to kill five people on the track, or, if you pull the points lever, just one person will die. The ethical dilemmas become more complex, but essentially they ask two questions: 1) Should I intervene by pulling the lever, or do nothing? 2) Which outcome is the least worst?

It’s quite fun, in a slightly dark way, whilst it remains a thought experiment. But what should the AI-controlled, self-driving car do? What happens when ethical dilemmas are applied in real life?

These trolley scenarios are asking us to make ethical decisions, to form value judgements – in this case, to choose the least worst option. Typically we’re then driven towards a utilitarian approach: how can I work for an outcome which helps the most people? The problem is that some still get run over – it wasn’t good for them.

A harder question might be to consider how we manage competing goods: choosing between outcomes which on the face of it are both ‘good’, in that no one gets run over now, but which might have unforeseen consequences.
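To make the utilitarian calculus concrete, here is a minimal, purely illustrative sketch (the option names and harm counts are hypothetical, not from any real system). It picks whichever option harms the fewest people – and in doing so exposes the weakness noted above: the ‘best’ option still leaves someone harmed.

```python
# A naive utilitarian chooser for trolley-style dilemmas.
# Each option maps to the number of people harmed if it is chosen.
# Illustrative sketch only, not a real decision system.

def least_worst(options):
    """Return the option that harms the fewest people."""
    return min(options, key=options.get)

dilemma = {"do nothing": 5, "pull the lever": 1}

choice = least_worst(dilemma)
print(choice)           # pull the lever
print(dilemma[choice])  # 1 -- someone still gets run over
```

The calculation is trivial; the point is that minimising harm is not the same as eliminating it, and the metric says nothing about who bears the remaining harm.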


If we can envisage that AI has the potential to radically transform society, at an industrial-revolution level, then we need to get it right. There is an existential need and ethical imperative for the effects to be good.

But I fear utilitarian or situational approaches to the question of goodness will fall short. We have an alignment issue – not only getting the machines to align to our goals, but as a society being aligned on what our goals are – what would ‘good’ look like?

It seems to me a good thing to be able to live in a society without crime, but how should I weigh that against the value of personal privacy? I suspect we might think the Chinese ‘social credit’ system is unbalanced in favour of crime prevention.

A Christian theological approach is not a simple approach, it doesn’t assume that every ‘good’ can just be measured and metricated – real life is more complex than that.

But where it will perhaps differ and become a strong countervailing power, is in its inherent bias. A Christian approach will deliberately privilege the marginalised and excluded, it will challenge injustice and exploitation, it will genuinely seek what is good for ALL.

Can change be good, if it is not good for all?
Can change be good, if unforeseen harms result, and how do we determine the balance?
Can change be good, if the ‘goods’ disproportionately favour the powerful?

If we do nothing else, let us resolve to work to see AI used for the good of all people.

Book Review: The Christian and Technology

My particular concern in thinking about AI, and technology more broadly, is around ethics – is it good? As a pastor, I also have a particular concern to think about the interface of faith and technology as both a theological and practical discipline.

‘The Christian and Technology’ by John Fesko is an attempt to do that: a pastoral approach to technology, helping Christians to be thoughtful and considered in their use of technology. Fesko is clear from the outset that this is a ‘devotional book’, originally based on a series of chapel lectures, and it certainly feels like a pastor warmly speaking to his flock.

Fesko addresses six areas of technology: screens, social media, automobiles, books, virtual reality and the Internet. From the outset it’s encouraging that this book takes technology in its broadest sense, not simply looking at the latest fads, but applying the same principles to areas like books and cars, significant technology advances that we have come to assume.

Each chapter takes a similar approach to technology; helping us to see why it is not neutral, a warning to master, rather than be mastered by technology, and a recognition that where technologies advance it is often accompanied by some moral or societal regression. In this short book, there are only passing references to understanding how technology and society function, but even these brief thoughts help us to see the bigger picture.

The focus of the book is really a pastoral message to Christians. It’s not seeking to dogmatically instruct Christian behaviour, but rather to ask incisive questions to get the reader thinking about their own faith and engagement with technology. Fesko’s ability to ask thoughtful questions of his reader is the best feature of this book.

As a Christian technologist, I’m keen to see the church use technology well, and so I perhaps find Fesko a little on the negative side, though theologically he’s clear that technology is a blessing from God. Yet I’m also conscious that in a tech-saturated world, perhaps the church has a role to play in providing sanctuary and rest from our hyper-connected digital lives. Fesko’s conclusion, though, is right, echoing Romans 12: “do not be conformed to the patterns of the world and the technologies we use but be transformed according to the renewing of your minds…”

The Bletchley Declaration

On the first day of the Bletchley AI Summit, all 28 attending nations signed a declaration. It’s signed by the usual Western subjects but encouragingly has a good diversity, including China, Brazil, India, Saudi Arabia and Kenya.

You can read the full text of the declaration here.

The declaration sets opportunities alongside risks and concludes with a brief plan for the work ahead.

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

King Charles on AI

As the Bletchley AI Summit began today, this short video from King Charles was released. Like his father, Charles has been an advocate for technology and innovation; he’s able to see the huge potential and revolutionary opportunities of AI.

Yet he appeals for international collaboration to recognise and manage risk, keeping it safe and secure. King Charles is well placed to be a massive force for good in bringing governments together to help make AI a ‘force for good’ in the world.

Tegmark vs. LeCun

Most of my recent engagement with AI thought has been with those calling for a pause and warning of the risks: the likes of Max Tegmark and Stuart Russell. The other evening Tegmark became embroiled in a small Twitter (X) spat with Yann LeCun – you can follow the whole thread below:

LeCun, amongst other titles and qualifications, is the Chief AI Scientist at Meta. Most of the safety narrative at present is around pausing and allowing governments to implement regulatory frameworks. LeCun seems to argue persuasively not against regulation, but in favour of an open-source and transparent approach to building LLMs, to avoid power and control being centralised. He makes a powerful case:

Now about open source: your campaign is going to have the exact opposite effect of what you seek. In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them.

Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture. This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia. That won’t work unless the platforms are open.

The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people’s entire digital diet. What does that mean for democracy? What does that mean for cultural diversity? *THIS* is what keeps me up at night.

AI Summit

The Bletchley Park AI Summit begins in just two days and is already generating plenty of news stories and a small but growing series of articles on the website.

It’s helpful to see five objectives laid out, which are:

  1. A shared understanding of the risks posed by frontier AI and the need for action
  2. A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  3. Appropriate measures which individual organisations should take to increase frontier AI safety
  4. Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  5. Showcase how ensuring the safe development of AI will enable AI to be used for good globally

There was also a significant announcement from the White House today about their AI policy and the launch of www.ai.gov

Call me cynical, but for all the talk around risk and safety, both the UK and US announcements seem rather more focused on seizing the initiative (being seen as the ‘leader’), placing regulatory control in the hands of government, and seeking to gain an economic advantage from the AI revolution.