
Fundamentals & Policy 2026: AI Safety Technical Course

A ten-week guided course on Technical AI Safety, following the BlueDot Impact curriculum.

Tags: fellowship, introductory, bluedot, upcoming, technical

Course Material: Technical AI Safety Curriculum

The BlueDot Fellowship 2026 at NTUAIS will follow the BlueDot Impact “Technical AI Safety” curriculum. This course focuses on the question: “How do we make AI go well?”

Course Overview

On this course, we will identify the future we’re working toward and understand the key dynamics of AI safety:

  • Drivers of AI progress: compute, data, algorithms
  • Threat pathways: power concentration, gradual disempowerment, catastrophic pandemics, critical infrastructure collapse
  • Plans for making AI go well: government control over AGI, handing control to an aligned superintelligence, building defences and diffusing AI
  • Layers of defences to build: prevent dangerous AI actions → constrain dangerous AI capabilities → withstand dangerous AI actions

This course focuses on what AI systems we are building and how we are building them.

Learning Outcomes

You will gain the technical foundation to understand what it will actually take to make AI systems safer – and why it’s so challenging.

Throughout the course, you will:

  • Diagnose why making AI safe is technically challenging
  • Evaluate current safety techniques: what works, what doesn’t, where the gaps are
  • Build your own “kill chain” showing how defences might break
  • Identify the most promising intervention point for your contribution
  • Leave with a fundable action plan to start shipping

What this course isn’t

Though the following topics are important for making AI go well, we’ll cover them in separate tracks or courses:

  • AI policy details: though you’ll gain the technical grounding for effective AI governance
  • Compute governance: hardware verification and tracking deserve their own deep dive
  • AI security: e.g. preventing model theft or escape
  • ML basics: please ensure you have a basic understanding of ML before joining (or complete AI foundations modules first)

Join Us

Let’s start with the question: “How might we build safe AI?”

Start Date: TBD, late February 2026
Format: Weekly readings + discussion sessions