On the recommendation of Elon Musk no less, I picked up a copy of Life 3.0, by Max Tegmark, first published in 2017. Tegmark is a professor at MIT and co-founder of the Future of Life Institute. Whilst this book is primarily about AI, Tegmark is a cosmologist by trade, an interest that is apparent throughout the book.
The book is largely scenario driven – lots of “what ifs”. At first sight Tegmark appears speculative at best, living up to his nickname “Mad Max”. But the scenarios are creatively written, leaving us with many more questions while opening up the world of Artificial Intelligence. Tegmark is able to vividly paint a best-case utopian vision, whilst being clear in his appeal to heed the very real risks.
Tegmark is kind to the non-technical reader, offering some simple (as far as possible) explanations of the foundations of this discussion: how machines, and particularly computation and memory, work, along with a quick introduction to machine learning, LLMs and neural networks.
Tegmark argues persuasively for a longtermist approach and the moral imperative to get AI right. I have to confess to rather skipping chapter 6 – “The Next Billion Years and Beyond” – I live in a country where longtermism would be to actually deliver a 20-year infrastructure project, so I’m happy to limit my thinking to decades and centuries for now! But you might still find this chapter interesting if the world of theoretical cosmology floats your boat; as Stuart Russell’s endorsement puts it, “For sheer science fun, it’s hard to beat.”
Chapter 3, ‘The near future: breakthroughs, bugs, laws, weapons and jobs’, will certainly feel the most useful for assessing the current moment. It charts some of the progress in AI over recent decades and where we might feasibly be in the next few years and decades. This section provides much excitement around the potential of AI to do good, and a lot of pauses for thought and deeper concerns around ethical and existential issues. It perhaps leaves us with more questions than answers – would it be a good thing for AI to replace jobs? Is it too late to stop bad actors developing AI into weapons of war? Though, personally speaking, it’s encouraging that Tegmark lists ‘clergy member’ as a ‘safe job’, unlikely to be replaced by AI any time soon!
The later discussion around goals, adoption, alignment and the ethical implications of these feels especially relevant to my own research and wider reading. There is, I think, a right appeal to ‘goodness’ as a key principle of AI ethics, and Tegmark is right to caution us about the subjectivity of this word, both throughout history and in its potential for divergence in the future. He grasps at various principles that have helped humanity across the centuries to define ‘good’ – platonic forms, Golden Rules, and the Ten Commandments. Tegmark distils the ethical wisdom of the ages into four principles: Utilitarianism, Diversity, Autonomy and Legacy.
There’s lots of wisdom here and it’s written non-dogmatically, encouraging the reader to think, explore and ask further questions. Personally I think Tegmark’s ethical approach misses the mark on two counts. Firstly, it doesn’t attempt to engage thoroughly with the wisdom of the ages – e.g. why has the Golden Rule stood the test of time, why does it cross cultures and generations, and how could it be applied to the field of AI? Secondly, I’m not sure that Tegmark has really grasped the first part of the book’s subtitle – ‘Being Human’. Throughout the book he describes humans as evolved animals, a ‘historical accident’, and appeals more to the ways in which machines can be ‘like us’ without ever really defining who we are as humans, or what makes us unique. For someone who is concerned about the existential threat of AI, it would be more comforting for AI developers to have a more speciesist approach to humanity!
To be fair to Tegmark, he rather concedes that his ethical framework for humanity is incomplete – “What’s ‘meaning’? What’s ‘life’? What’s the ultimate ethical imperative?… This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!” Perhaps this is the point: for all the novelty, excitement and opportunity that AI might afford, might we need to rekindle some of the classic arts of philosophy, ethics and, dare we say, theology?
His final (main) chapter, on consciousness, is an honest approach to a very difficult topic, and with humility and scientific enquiry he seeks to explore it with us. Though he does accuse some others of approaching the question with ‘anthropological bias’. This is again where I feel Tegmark misses his point: when considering questions of humanity, and indeed of existential dilemmas for humanity, it’s OK to be a little anthropocentric and to value ‘human exceptionalism’, because I don’t think most people are ready, or happy, to concede that human and artificial ‘consciousness’ are equivalent in value.
Notwithstanding my not insignificant concerns with some of Tegmark’s approach, this book is very readable and thought-provoking. It’s particularly good because overall it sets out a positive vision (without being naive) and is personal in tone – you feel like you’re sitting in the seminar room, chatting it through with Max.