
· 5 min read
Anthony Nkyi

Algorithms everywhere are the way forward, whether we like it or not, and it's time to accept it.

I wouldn’t disagree with anyone who thinks the algorithms that influence what we see, what we learn, and how we interact with technology and the internet are scary.

Ask how they work, and you may find these shapeless entities are sometimes indescribable, even by the engineers who originally wrote them. Then there are the issues of algorithms that simply don’t work, for any number of reasons – the harmful human biases behind erroneous outputs, overfitted algorithms crumbling outside the nursery of a training set, or, on the other hand, algorithms pushed into service too early.

Consider then the possibility that these algorithms don’t just exist on your Instagram feed or the Amazon home page. Consider the possibility that these systems are making real decisions on human lives. Consider a reality where they choose who passes an exam and who fails;[1] who stays and who is deported;[2] who is a criminal and who walks free.[3]

That reality is here already. So why not reform it while we still can? We’re still quite far away from the days of sentient robotic overlords. Before we get there though, it would be a good idea to build responsible AIs, rather than reckless, anthropomorphic ones.

Take the case of the Ministry of Justice (MoJ) reaching out to the big data consultants Palantir about using their tech to calculate prisoners’ reoffending risks.[4] Whatever you think of Palantir, or the MoJ for that matter, this marks a promisingly tech-savvy move from the Civil Service. The British prison system veered dangerously close to a crisis in the summer and had to take emergency measures, such as early releases of certain eligible prisoners, to ensure it didn’t reach breaking point.[5] This was the result of over a decade’s worth of chronic negligence, and those measures are only expected to buy the government another few months without a new strategy.

Overcrowding has been an issue for a while – England and Wales have the highest imprisonment rate in Western Europe.[6] Prisoners are also spending longer inside, with a convict serving an average of 20.9 months in 2023, up from 15.5 in 2013.[7] Without too much mental maths, you can see where the problems pile up. Throw in the fact that two in five adults are reconvicted within a year of release, and retaining this strained system starts to look untenable.

We shouldn’t keep allowing humans to fail to deliver when we can take this opportunity to build a completely new and improved system. There has never been a better time to invest time and resources into the effervescent world of AI. The UK government have the chance to go all-in on a project that, if it succeeds, will have tangible benefits for thousands of individuals disadvantaged by a flawed system.

I think reform can begin with what was originally discussed – algorithms that analyse and evaluate prisoners’ reoffending risks, used to support human employees in their decision-making rather than to automate it entirely. This should reduce the number of returning convicts, for two reasons.

One: an improved success rate – fewer convicts returning to prison – shrinks the problem of overcrowding immediately. Two: it allows prisons to identify those who may need more time to readjust to normal life, and to offer them support. Of course, the prison system isn’t currently built for rehabilitation, but that’s another conversation! We could even take it one step further and use it to identify who would benefit more from alternative sentencing: prison sentences shorter than 12 months are less effective at preventing reoffending than community orders, according to a 2024 Prison Reform Trust study. Granted, discretion is needed (!), but this is just one example of the breadth of potential created by AI.

However, we must ensure the biases pervasive in other government algorithms are rooted out first. Responsible AI principles must guide this system’s construction – stakeholders should understand how it works without it being exploitable, and training sets should be curated to ensure fair representation across class and background. The system should also be able to learn from the human decisions made alongside its use, further tuning its decision-making without falling into bias.
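To make that concrete, here is a minimal sketch of one such check – comparing a hypothetical risk model’s false-positive rates across groups. It is illustrative only (the data fields and function are my own invention, not anything the MoJ or Palantir have described), but it shows the kind of audit that should gate deployment.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: dicts with 'group', 'predicted_high_risk' (bool) and
    'reoffended' (bool). Returns, per group, how often people who did NOT
    reoffend were nevertheless flagged as high risk."""
    flagged = defaultdict(int)
    did_not_reoffend = defaultdict(int)
    for r in records:
        if not r["reoffended"]:
            did_not_reoffend[r["group"]] += 1
            if r["predicted_high_risk"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / n for g, n in did_not_reoffend.items() if n}

# Toy data: a large gap between groups would signal exactly the kind of
# bias the article warns about, and should block deployment.
sample = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
]
print(false_positive_rates(sample))  # {'A': 0.5, 'B': 1.0}
```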

The success of technology in failing institutions is the seed of a future powered by it – a future where decisions are no longer made by humans. But a future without humans is a future without accountability. Let’s hope that future doesn’t arrive too early.

References

  1. Ofqual, “Requirements for the calculation of results in summer 2020: GCE (AS/A level), GCSE, Extended Project and Advanced Extension Award Qualifications,” Aug. 2020. https://assets.publishing.service.gov.uk/media/5f3e125cd3bf7f1b13f65134/6674_Requirements_for_the_calculation_of_results_in_summer_2020_inc._Annex_G.pdf

  2. H. Warrell, “Home Office under fire for using secretive visa algorithm,” Financial Times, Jun. 09, 2019. https://www.ft.com/content/0206dd56-87b0-11e9-a028-86cea8523dc2

  3. Home Office, “Police Use of Facial Recognition: Factsheet - Home Office in the Media,” homeofficemedia.blog.gov.uk, Oct. 29, 2023. https://homeofficemedia.blog.gov.uk/2023/10/29/police-use-of-facial-recognition-factsheet/

  4. B. Quinn, “Tech firm Palantir spoke with MoJ about calculating prisoners’ ‘reoffending risks,’” The Guardian, Nov. 16, 2024. https://www.theguardian.com/technology/2024/nov/16/tech-firm-palantir-spoke-with-moj-about-calculating-prisoners-reoffending-risks

  5. Ministry of Justice, HM Prison and Probation Service, and Lord Timpson OBE, “Process activated to manage prisoner movements,” gov.uk, Aug. 18, 2024. https://www.gov.uk/government/news/process-activated-to-manage-prisoner-movements

  6. Prison Reform Trust, “Bromley Briefings Prison Factfile,” Feb. 2024. https://prisonreformtrust.org.uk/wp-content/uploads/2024/02/Winter-2024-factfile.pdf

  7. D. Clark, “Average prison sentence length in England and Wales 2000-2019,” Statista, Jul. 18, 2024. https://www.statista.com/statistics/1100628/prison-sentence-length-in-england-and-wales-over-time/

· 4 min read
Sofiya Flenova

"These breakthroughs introduce us to a golden era of science and completely rewrite our future."

Demis Hassabis, CEO of DeepMind, junior chess prodigy, former game developer and, of course, UCL alumnus, made headlines last week by joining the ranks of Nobel Laureates in Chemistry. He and his colleague John M. Jumper, senior science researcher at DeepMind, achieved a game-changing breakthrough by pioneering new methods for predicting protein structures with artificial intelligence, and were awarded one half of the Nobel Prize for their efforts.

Along with that, David Baker, professor of biochemistry and Director of the Institute for Protein Design at the University of Washington, has been awarded the other half of the 2024 Nobel Prize in Chemistry for computational protein design.

These breakthroughs introduce us to a golden era of science and completely rewrite our future. And here's why we must talk about it.

David Baker

“In nature, proteins are the miniature machines that carry out all the important jobs: we can think, we can move, we can digest food, plants can capture energy from the sunlight and everything that happens in the living organism is due to proteins. We can use proteins to solve problems that evolution did not manage to solve.” - David Baker.

Each protein chain folds into its own characteristic shape, and the folding process is very precise. The shape of a folded protein chain is what defines its biological function. However, there are so many different shapes a protein can adopt that the protein folding problem remained unsolved for over 50 years. Until now.

The gigantic increase in computing power since the problem was first posed now enables us to design tens of thousands of new proteins with new shapes and new functions. There are over 10^130 different designs we can explore using computation – enormously more than the total number of proteins that have existed since life on Earth began. After creating them, we can extract the proteins and determine their functions and whether they are safe.
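For a rough sense of where a figure like 10^130 comes from: with the 20 standard amino acids and a chain of just 100 residues (a modest protein length – this pairing is my illustrative assumption, not a figure quoted by the laureates), the number of possible sequences is already about 10^130.

```python
from math import log10

amino_acids = 20     # the 20 standard amino acids
chain_length = 100   # a modest protein length, assumed for illustration

sequences = amino_acids ** chain_length
print(f"20^100 ≈ 10^{log10(sequences):.1f}")  # 20^100 ≈ 10^130.1
```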

Today we face challenges such as serious ecological threats as well as new diseases evolving, and we do not have millions of years to wait for the discovery of the right proteins. But using computational design tools, we can now build these completely new multi-purpose proteins.

John M. Jumper and Demis Hassabis

The AI system AlphaFold2 by DeepMind is the first non-experimental method that can predict the complex structure of any known protein in nature, also solving the "50-year grand challenge", in the words of Hassabis himself. The system predicts the way a protein folds from the amino acid sequence it consists of, which enables us to design proteins that perform very specific functions and help drive humanity's development.

Until very recently, it could take research biologists a year to determine the 3D shape of a single protein fold; now we have a machine learning algorithm that can do the same in 5–10 minutes.

[Figure: overview of the AlphaFold2 pipeline’s three stages – database search and preprocessing, the Evoformer, and the structure module]

The program consists of three stages: database search and preprocessing, the Evoformer, and the structure module.

Database search and preprocessing. A sequence of amino acids is entered, and AlphaFold compares it to records from several databases to extract similar sequences from other organisms. It also creates a pair representation of the input sequence, indicating which pairs of amino acid residues are close together in 3D space within the target protein. (A residue is the part of an amino acid that remains after it joins a peptide chain with other amino acids and water is removed.)

The Evoformer is AlphaFold’s distinctive neural network. It looks for relationships among the residues of the input sequence and evaluates the relationship between any two residues, which can be thought of as nodes in a graph. These calculations are carried out 48 times before forming a refined model of the residue pair representations.

Finally, the structure module is another neural network that takes the refined representations and applies rotations and translations to predict what the protein’s 3D structure looks like.
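As a rough illustration of how these three stages fit together, here is a toy, runnable sketch of the pipeline’s data flow. Every helper is a simplified stand-in of my own devising – it mirrors the structure described above, not DeepMind’s actual implementation.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def search_databases(sequence):
    """Stage 1a: stand-in for the database/MSA search; returns the query
    plus a few fake 'homologous' sequences of the same length."""
    return [sequence] + [
        "".join(random.choice(AMINO_ACIDS) for _ in sequence) for _ in range(3)
    ]

def initial_pair_representation(sequence):
    """Stage 1b: an L x L matrix of residue-pair features (zeros here)."""
    n = len(sequence)
    return [[0.0] * n for _ in range(n)]

def evoformer_block(msa, pairs):
    """Stage 2: one of the 48 blocks refining the MSA and pair
    representations; here it just nudges the pair features."""
    return msa, [[v + 0.01 for v in row] for row in pairs]

def structure_module(msa, pairs):
    """Stage 3: turn the refined representations into 3D coordinates via
    predicted rotations and translations (random toy coordinates here)."""
    return [(random.random(), random.random(), random.random()) for _ in pairs]

def predict_structure(sequence, num_blocks=48):
    msa = search_databases(sequence)
    pairs = initial_pair_representation(sequence)
    for _ in range(num_blocks):
        msa, pairs = evoformer_block(msa, pairs)
    return structure_module(msa, pairs)

coords = predict_structure("MKTAYIAKQR")
print(len(coords), "residues placed")  # 10 residues placed
```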

These newly designed proteins can help fight cancer, break down plastic waste, and form the basis of vaccines for respiratory diseases, among many other uses. This intersection of biology and AI in 2024 can help us uncover the secrets of life, fight diseases, and even address the overwhelming climate challenges of today and the future.

· 2 min read
Anthony Nkyi

Welcome (or welcome back) to UCL Artificial Intelligence Society’s blog!

I’m Anthony, the new Head of Content, and I just can’t wait for everything we’ve got planned for UCL AI Society this year.

The year ahead is looking exciting – there’s a lot going on. We have the return of our incredible initiatives. We’re getting a new season of Reinforcement Talking, the UCL AI Society podcast (episode coming out VERY soon). And of course, our brilliant blog is back for another year. In this mini-update, I'm going to walk you through what's happening in the coming days and weeks.

What's on?

Tutorials are back: learn how to code a vast range of machine learning techniques, completely free for our members. They run every Wednesday and are open to all ability levels, so get them onto your calendar.

Then we have Nexus Labs, where you can put these skills to work on a research project, exploring one of neuroscience, robotics, GenAI, practical AI, or responsible AI, culminating in an epic symposium where you can present your findings.

For the visionaries and entrepreneurs amongst our membership (yes, get yours now if you haven’t), you can apply to AI Foundry, which provides a series of workshops, mentoring opportunities, and chances to pitch your brainchildren to keen investors and industry experts.

Finally, Journal Clubs invite some of the world’s greatest minds in AI for live and exclusive talks about their research – we’ve hosted individuals from top institutions across the globe talking about everything from machine learning theory to its applications.

That’s not all, though. We’ve got hackathons coming up, some big events, career panels, and more… but stay updated through this blog and our members’ newsletter to hear about these as soon as they’re confirmed.

· 6 min read
Charlie Harrison

AI Safety Debate

It was a real privilege to host, alongside two of the best societies at UCL – AI Society and UCL Effective Altruism – our AI Safety Debate, on the topic “Is AI an existential risk?”

🎬 A full recording can be found here, if you want to watch the whole thing.

Why talk about AI safety?

I believed it was important to host this debate because the question is potentially highly important, but also one I have deep uncertainties about. Many AI experts, like Geoffrey Hinton, think that AI should be considered just as risky as pandemics or nuclear war, and that we need to slow down or pause its development. Others, like Melanie Mitchell, believe that the risks are “almost vanishingly small”. The stakes of the question for humanity warrant a serious (and, in my opinion, long) conversation about the respective arguments’ merits.

After this prelude, my wonderful colleagues Ivana and Maja introduced the speakers. We were lucky enough to have Reuben Adams and Chris Watkins arguing for the “doomer” side (as we referred to it in the WhatsApp group chat). Reuben is a UCL AI PhD student and host of the wonderful ‘Steering AI’ podcast. Chris is a professor at Royal Holloway and a prominent thinker in the reinforcement learning field. On the ‘risk-skeptical’ side (the ‘anti-doomer’ side??) were Jack Stilgoe and Kenn Cukier. Jack is a UCL professor – on “home turf”, as he said – lecturing in Science and Technology Studies (STS), and works closely with UK Research and Innovation on the “Responsible AI Program”. Kenn is a Deputy Executive Editor at The Economist and hosts its weekly tech podcast, “Babbage”. Tom Ough, a freelance journalist who has written various pieces about existential risk, including in Prospect Magazine, moderated. Ivana encouraged our audience to consider how the lack of demographic diversity on the panel could systematically bias the conversation, which (as you’ll read) came up in discussions.

At face value, the four speakers seemed to argue distinctly opposing points of view. I will briefly give my best effort at summarizing their views, in the order in which they spoke. Afterwards, I set out promising areas of agreement amongst the panelists.

The debate

Reuben opened the debate. He argued that the new paradigm of deep-learning presents a distinctly new category of AI risk: we are building ever-more intelligent ‘black-boxes’, with novel capabilities we cannot predict. Once there is a “second species of intelligence” that rivals our own, we are completely ignorant about what will follow. Our current tools for controlling AI systems, like ‘RLHF’, are woefully inadequate even at present, and won’t ‘scale up’ with increasing AI progress. What follows from all this? “I don’t understand how you can confidently say that this doesn’t end badly.”

After Reuben came Jack. He eased his way into his argument with several cool anecdotes. (From Reuben’s speech: the day after Ernest Rutherford dismissed the feasibility of nuclear energy in September 1933, Leo Szilard conceived of the nuclear chain reaction. Jack said that the concept came to Szilard by Russell Square Station. Go there if you want to conceive of the next big thing.) Anyway. Back to the seriousness. For Jack, rogue AI scenarios are implausible and belong in science fiction. Instead, “the idea of existential risk from AI is a form of displacement activity” from other, more pressing concerns, like the disempowerment of workers or the marginalization of minorities. These are the risks that deserve regulators’ attention. A more interesting question, for Jack, is why people are drawn towards believing these risks: perhaps people’s positionality, or, for some technologists, their self-interest. AI is a tool like any other, in that it’s “all about power”, so “we shouldn’t be worrying about what robots will do to humanity, instead we should worry about what some people will do to other people”.

After Jack, there was Chris. From his perspective, the algorithmic breakthroughs that enabled ChatGPT are pedestrian: his MSc students are already implementing the “transformer” architecture – the major breakthrough behind large language models like ChatGPT – for their coursework. Given that tens of thousands of people and 11-figure sums are being directed towards AI, we have no reason to believe that further breakthroughs won’t occur. Instead, we should expect a future of open-ended cognitive advancement. This unknowable future is “behind a veil”. While we aren’t necessarily destined for doom, there are plausible “side-roads” that lead towards it, in particular AI-enabled authoritarian regimes.

Finally, it was Kenn’s turn. “This is bleak!” he began. Whilst the risks from AI are serious, they won’t scale to an ‘existential catastrophe’. Existing alignment techniques like RLHF put humans in the loop and will obstruct any ‘intelligence explosion’. Humans are unlikely to cede control of political power or nuclear missile systems to AI. Among different possible futures, we can design “love” into AI. In contrast, misuse risks do seem concerning: Kenn was anxious when news broke in 2022 that an AI had generated 40,000 toxic chemicals in six hours. However, there is nuance here. The threat model of ‘misuse risks’ from bad actors already exists today, and lethal autonomous weapons may make warfare less brutal. So, let’s not be defeatist, and instead focus on “existential solutions”.

(Dis)agreements!

A key disagreement: Reuben and Chris acknowledged that the exact pathways to catastrophe are unknowable – analogous, in their framing, to bonobos trying to predict how they would be outcompeted by humans. Kenn, and particularly Jack, emphasized this point, and suggested that the ‘rogue AI story’ parallels science fiction. Reuben and Chris seemed willing to bite the bullet.

However, amongst these disagreements, there were several areas of agreement, which questions from our moderator, Tom, helped to elucidate:

  • Proactive oversight and regulation of AI systems today is necessary to guard against present-day harms, like misinformation.
  • Careful evaluation of AI models is an area of potential common ground between those concerned about ‘near-term’ and ‘long-term’ risks from AI.
  • AI represents a new (potentially transformative) era for humanity.
  • Predicting exactly how the future will unfold is nigh on impossible; it is very difficult to say precisely how AI harms might scale to catastrophe or even extinction.
  • AI is likely to be a “force-multiplier”, and may enable bad actors to do worse things.

On these points, and others, I think the speakers realized that their worldviews were closer than they might have expected.

I am very grateful to Ivana, Maja, and Asmita for helping to organize the event, and to Andrzej and Yadong for helping with the filming.

· 4 min read
Martynas Pocius
Jess Tsang

Hello and welcome to the very first UCL AI Society blogpost of the year!

My name is Jessica and I’m so excited to be your Creative Director this year 😄. You’ll be hearing from me every week about the AI buzz, news on the society, and any upcoming events you should bookmark in your calendar, so be sure to keep your eyes peeled on the society blog to stay updated 📚️.

With the introductions out of the way, today’s blog post is for all our new members, getting you up to date on what this is all about. First of all, massive congratulations on getting into UCL! That is no small feat, and we’re so glad you were able to join us in not-so-sunny London 🌧️ (it gets better in February, trust). Secondly, congratulations on having excellent taste in societies and, frankly, joining the best one out there. We promise you’re going to have a good time.

Since our inception, the UCL AI Society has grown massively, and we’re proud to say that we are one of the most prolific student societies at UCL. We’d like to think it’s because we have something for everyone, even those who aren’t compsci students or don’t know how to code! Artificial intelligence is something that will affect everyone in the years to come, regardless of who you are, and as a result, we believe our society should be accessible and relevant to all pathways and disciplines. We hope the AI Society can be your place to explore your interests, build long-lasting relationships and make memories for life. Some of the social events you can get involved in are our iconic Thursday Pizza Socials 🍕 (free pizza for all!), speaker events with world-renowned researchers 🧑‍🏫, tutorials 💻️, cross-university networking events 🧑‍🤝‍🧑, and more.

We also have programs running on a larger scale for those of you who want to improve your own skills, dip into research or begin a start-up. For research, we have the Nexus Labs project 🧑‍💻, which is an interdisciplinary initiative for students to engage in one of five academic pillars: Neuroscience, NLP, Finance, Machine Vision and Responsible AI ✏️. Within each pillar, teams will have a specialist on board to explore a question with the aim of publishing a research paper at the end of the project.

For those of you wanting to explore the world of entrepreneurship, we have the UCL AI Foundry 📈, our incubator for budding AI businesses. Upon joining Foundry, you’ll be given guidance, support and advice throughout the project from assigned mentors. At the end, you’ll be given an opportunity to pitch your start-up to a panel of investors who can help your business take off.

And finally, our biggest event of the year: ClimateHack.AI 🌎️, our society hackathon which brings together 25 leading universities to solve some of our climate’s most pressing problems.

This year, ClimateHack.AI is helping to solve the issue of solar nowcasting ☀️ – essentially, helping us make better use of solar panels, since solar power output is highly variable depending on weather conditions. By building better solar photovoltaic nowcasting, we can help derisk the deployment of solar power and encourage its use in energy grids around the world. In the UK alone, this could help cut carbon emissions by 100,000 tonnes; around the world, the reduction could be 50 to 100 million tonnes.

For ClimateHack.AI 2022, we focused on satellite imagery nowcasting 🛰️ and managed to build a model 3.5x more accurate than the one used by the National Grid Electricity System Operator (NG-ESO). So we know the impact ClimateHack.AI can have, and we’ve seen the good it can do. For ClimateHack.AI 2023, we encourage you to participate if this is something that catches your eye! We can’t make a tangible impact without your help, and the hackathon has been a favourite part of the society for many of our previous members, so we guarantee it’ll be a good time. You’ll be able to meet like-minded individuals, expand your network and career opportunities, and have a shot at our prize pool with a grand first prize of £15,000 💰️!

Ultimately, as you can probably see, UCL AI Society is packed full of different events happening every week, and as your committee, we’re dedicated to making sure this society provides everything possible to help you thrive both personally and professionally 🫡. Regardless, we hope we’ll see you in some of these events once the school year starts up, and as always, feel free to reach out with any questions.

See you next week! 👋

Jessica