Research Engineer / Scientist, Alignment Science (London)

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.
Salary: $230,000 - $515,000
Category: Machine Learning
Level: Senior Software Engineer
Work arrangement: Hybrid
Company size: 51 - 100 Employees
Experience: 5+ years

Description For Research Engineer / Scientist, Alignment Science (London)

Anthropic is seeking a Research Engineer / Scientist for its Alignment Science team in London. The role involves building and running machine learning experiments to understand and steer the behavior of powerful AI systems. Key responsibilities include testing the robustness of safety techniques, running multi-agent reinforcement learning experiments, building tooling for LLM jailbreak evaluations, and contributing to research papers. The ideal candidate has software engineering and ML experience, familiarity with technical AI safety research, and a collaborative mindset. Strong candidates may also have experience with LLMs, reinforcement learning, and complex codebases. The role offers competitive compensation, equity, and benefits, with a focus on making AI helpful, honest, and harmless.

Responsibilities For Research Engineer / Scientist, Alignment Science (London)

  • Build and run machine learning experiments to understand and steer AI behavior
  • Test robustness of safety techniques
  • Run multi-agent reinforcement learning experiments
  • Build tooling for LLM jailbreak evaluations
  • Write scripts and prompts for model evaluation
  • Contribute to research papers, blog posts, and talks
  • Run experiments for key AI safety efforts

Requirements For Research Engineer / Scientist, Alignment Science (London)

  • Python
  • Significant software, ML, or research engineering experience
  • Experience contributing to empirical AI research projects
  • Familiarity with technical AI safety research
  • Preference for fast-moving collaborative projects
  • Willingness to pick up slack outside job description
  • Care about the impacts of AI

Benefits For Research Engineer / Scientist, Alignment Science (London)

  • Equity
  • Health insurance
  • Dental insurance
  • Vision insurance
  • 401k with 4% matching
  • 22 weeks paid parental leave
  • Unlimited PTO
  • Education stipend
  • Wellness stipend
  • Fertility benefits
  • Daily lunches and snacks
  • Relocation support
