
AI Safety Fellow Interview Experience

Anthropic
Timespan
3 weeks
Difficulty
Difficult

This process felt very different from most of the interviews I've done. There was no classic DS&A round at all. Instead, they tested raw coding implementation speed early on with database-style CodeSignal rounds, and then the final loop shifted hard into research thinking and a pretty fundamental understanding of LLMs. The most unusual part was the structure of the final: a 15-minute open-ended alignment brainstorm with almost no interviewer feedback, followed by a 55-minute Colab notebook exercise where I had to complete part of an LLM inference workflow. It also felt like a newer process when I did it, so I was mostly inferring what to expect from the emails rather than finding much info online.
