
Charlie Harrison · 6 min read

AI Safety Debate

It was a real privilege to host, alongside two of the best societies at UCL – AI Society and UCL Effective Altruism – our AI Safety Debate on the topic “Is AI an existential risk?”

🎬 A full recording can be found here, if you want to watch the whole thing.

Why talk about AI safety?

I believed it was important to host this debate because this question is potentially highly important, but also one about which I have deep uncertainties. Many AI experts, like Geoffrey Hinton, think that AI should be considered just as risky as pandemics or nuclear war, and that we need to slow down or pause its development. Others, like Melanie Mitchell, believe that the risks are “almost vanishingly small”. The stakes of the question for humanity warrant a serious (and, in my opinion, long) conversation about the respective arguments’ merits.

After this prelude, my wonderful colleagues Ivana and Maja introduced the speakers. We were lucky enough to have Reuben Adams and Chris Watkins arguing for the “doomer” side (as we referred to it in the WhatsApp group chat). Reuben is a UCL AI PhD student and host of the wonderful ‘Steering AI’ podcast. Chris is a professor at Royal Holloway and a prominent thinker in the field of reinforcement learning. For the ‘risk-skeptical’ side (the ‘anti-doomer’ side??) were Jack Stilgoe and Kenn Cukier. Jack is a UCL professor, on “home turf” as he said, lecturing in Science and Technology Studies (STS), and works closely with UK Research and Innovation on the “Responsible AI Program”. Kenn is a Deputy Executive Editor at The Economist and hosts the weekly tech podcast “Babbage”. Tom Ough, a freelance journalist who has written various pieces about existential risk, including in Prospect Magazine, moderated. Ivana encouraged our audience to consider how the lack of demographic diversity on the panel could systematically bias the conversation, which (as you’ll read) came up in discussions.

At face value, the four speakers seemed to argue distinctly opposing points of view. I will briefly give my best effort at summarizing their views, in the order in which they spoke. Afterwards, I set out promising areas of agreement amongst the panelists.

The debate

Reuben opened the debate. He argued that the new paradigm of deep-learning presents a distinctly new category of AI risk: we are building ever-more intelligent ‘black-boxes’, with novel capabilities we cannot predict. Once there is a “second species of intelligence” that rivals our own, we are completely ignorant about what will follow. Our current tools for controlling AI systems, like ‘RLHF’, are woefully inadequate even at present, and won’t ‘scale up’ with increasing AI progress. What follows from all this? “I don’t understand how you can confidently say that this doesn’t end badly.”

After Reuben came Jack. He eased his way into his argument with several cool anecdotes. (From Reuben’s speech: the day after Ernest Rutherford denied the feasibility of nuclear energy, in September 1933, Leo Szilard conceived of the nuclear chain reaction. Jack added that the idea came to Szilard by Russell Square Station. Go there if you want to conceive of the next big thing.) Anyway. Back to the seriousness. For Jack, rogue AI scenarios are implausible and belong in science fiction. Instead, “the idea of existential risk from AI is a form of displacement activity” from other, more pressing concerns, like the disempowerment of workers or the marginalization of minorities. These are the risks that deserve regulators’ attention. A more interesting question, for Jack, is why people are drawn towards believing in these risks: perhaps people’s positionality or, for some technologists, their self-interest. AI is a tool like any other in that it’s “all about power”, so “we shouldn’t be worrying about what robots will do to humanity, instead we should worry about what some people will do to other people”.

After Jack, there was Chris. From his perspective, the algorithmic breakthroughs that enabled ChatGPT are pedestrian: his MSc students are already implementing the “transformer” architecture, the major breakthrough behind large language models like ChatGPT, for their coursework. Given that tens of thousands of people and 11-figure sums are being directed towards AI, we have no reason to believe that further breakthroughs won’t occur. Instead, we should expect a future of open-ended cognitive advancement. This unknowable future is “behind a veil”. While we aren’t necessarily destined for doom, there are plausible “side-roads” that lead towards it, in particular AI-enabled authoritarian regimes.
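(As an aside for curious readers: below is a minimal, illustrative sketch of scaled dot-product self-attention, the core operation of the transformer architecture Chris mentioned. It is a generic textbook example with made-up toy dimensions, not code from the debate or from Chris’s course.)

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project inputs to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # similarity of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                           # each output mixes values by attention weight

# Toy usage: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```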

Finally, it was Kenn’s turn. “This is bleak!”, he started. Whilst the risks from AI are serious, he argued, they won’t scale to an ‘existential catastrophe’. Existing alignment techniques like RLHF put humans in the loop, and will obstruct any ‘intelligence explosion’. Humans are unlikely to cede control of political power or nuclear missile systems to AI. Among the different possible futures, we can design “love” into AI. In contrast, misuse risks do seem concerning. Kenn was anxious when news broke in 2022 that an AI drug-discovery system had generated 40,000 candidate toxic molecules in 6 hours. However, there is nuance here. The threat model of ‘misuse risks’ from bad actors already exists today. Lethal autonomous weapons may make warfare less brutal. So, let’s not be defeatist, and instead focus on “existential solutions”.


A key disagreement among the participants seemed to be this: Reuben and Chris acknowledged that the exact pathways to catastrophe are unknowable – trying to predict them would be analogous to bonobos trying to predict how they would be outcompeted by humans. Kenn, and particularly Jack, pressed this point, suggesting that the ‘rogue AI story’ therefore parallels science fiction. Reuben and Chris seemed happy to bite the bullet.

However, amongst these disagreements, there were several areas of agreement, which questions from our moderator, Tom, helped to elucidate:

  • Proactive oversight/regulation of AI systems today is necessary to guard against present-day harms, like misinformation.
  • Careful evaluation of AI models is an area of potential common ground between those concerned about ‘near-term’ and those concerned about ‘long-term’ risks from AI.
  • AI represents a new (potentially transformative) era for humanity.
  • Predicting exactly how the future will unfold is nigh-on impossible; it is very difficult to specify precisely how AI harms might scale to catastrophe or even extinction.
  • AI is likely to be a “force-multiplier” and may enable bad actors to do worse things.

On these points, and others, I think the speakers realized that their worldviews were closer than they might have expected.

I am very grateful to Ivana, Maja, and Asmita for helping to organize the event, and to Andrzej and Yadong for helping with the filming.

Martynas Pocius, Jess Tsang · 4 min read

Hello and welcome to the very first UCL AI Society blogpost of the year!

My name is Jessica and I’m so excited to be your Creative Director this year 😄. You’ll be hearing from me every week about the AI buzz, news from the society, and any upcoming events you should bookmark in your calendar, so be sure to keep your eyes peeled on the society blog to stay updated 📚️.

With the introductions out of the way, today’s blog post is for all our new members, to get you up to date on what this is all about. First of all, massive congratulations on getting into UCL! That is no small feat, and we’re so glad you were able to join us in not-so-sunny London 🌧️ (it gets better in February, trust). Secondly, congratulations on having excellent taste in societies and, frankly, joining the best one out there. We promise you’re going to have a good time.

Since our inception, the UCL AI Society has grown massively, and we’re proud to say that we are one of the most prolific student societies at UCL. We’d like to think it’s because we have something for everyone, even those who aren’t compsci students or don’t know how to code! Artificial intelligence is something that will affect everyone in the years to come, regardless of who you are, and as a result, we believe our society should be accessible and relevant to all pathways and disciplines. We hope the AI Society can be your place to explore your interests, build long-lasting relationships, and make memories for life. Some of the social events you can get involved in are our iconic Thursday Pizza Socials 🍕 (free pizza for all!), speaker events with world-renowned researchers 🧑‍🏫, tutorials 💻️, cross-university networking events 🧑‍🤝‍🧑, and more.

We also have programs running on a larger scale for those of you who want to improve your own skills, dip into research or begin a start-up. For research, we have the Nexus Labs project 🧑‍💻, which is an interdisciplinary initiative for students to engage in one of five academic pillars: Neuroscience, NLP, Finance, Machine Vision and Responsible AI ✏️. Within each pillar, teams will have a specialist on board to explore a question with the aim of publishing a research paper at the end of the project.

For those of you wanting to explore the world of entrepreneurship, we have the UCL AI Foundry 📈, our incubator for budding AI businesses. Upon joining the Foundry, you’ll receive guidance, support, and advice from assigned mentors throughout the project. At the end, you’ll have the opportunity to pitch your start-up to a panel of investors who can help your business take off.

And finally, our biggest event of the year: ClimateHack.AI 🌎️, our society’s hackathon, which brings together students from 25 leading universities to solve some of our climate’s most pressing problems.

This year, ClimateHack.AI is helping to solve the problem of solar nowcasting ☀️: predicting solar photovoltaic output in the near term, which matters because solar power is highly variable with weather conditions. By building better solar photovoltaic nowcasting, we can help derisk the deployment of solar power and encourage its use in energy grids around the world. In the UK alone, this could help reduce carbon emissions by 100,000 tonnes; around the world, the reduction could be 50 to 100 million tonnes.

For ClimateHack.AI 2022, we focused on satellite imagery nowcasting 🛰️ and managed to build a model 3.5x more accurate than the one used by the National Grid Electricity System Operator (NG-ESO). As a result, we know the impact ClimateHack.AI can have, and we’ve seen the good it can do. For ClimateHack.AI 2023, we encourage you to participate if this catches your eye! We can’t make a tangible impact without your help, and the hackathon has been the favourite part of the society for some of our previous members, so we guarantee it’ll be a good time. You’ll be able to meet like-minded individuals, expand your network and career opportunities, and have a shot at our prize pool, with a grand first prize of £15,000 💰️!

Ultimately, as you can probably see, the UCL AI Society is packed full of different events happening every week, and as your committee, we’re dedicated to making sure this society provides everything possible to help you thrive both personally and professionally 🫡. We hope we’ll see you at some of these events once the school year starts up, and as always, feel free to reach out with any questions.

See you next week! 👋