Anthropic is seeking a Research Engineer to join its Pretraining team, which develops the next generation of large language models. The role sits at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems. It involves high-impact projects such as optimizing novel attention mechanisms, comparing the compute efficiency of Transformer variants, and scaling distributed training to thousands of GPUs.
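As a purely illustrative sketch (not taken from the posting), the snippet below shows the standard scaled dot-product attention that projects like "optimizing novel attention mechanisms" typically use as a baseline; all names and sizes here are hypothetical toy values.

```python
# Minimal NumPy sketch of scaled dot-product attention, softmax(QK^T / sqrt(d)) V.
# Hypothetical illustration only; not code from Anthropic or the job posting.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16                            # toy sizes for a quick check
q, k, v = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(scaled_dot_product_attention(q, k, v).shape)  # (8, 16)
```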
The role requires an advanced degree in Computer Science or Machine Learning, strong software engineering skills, and expertise in Python and deep learning frameworks. The ideal candidate balances research goals against practical engineering constraints while working in a collaborative environment that values clear communication and ethical considerations.
Anthropic offers a competitive compensation package ranging from $315,000 to $340,000 USD annually, along with comprehensive benefits including medical insurance, parental leave, and flexible working hours. The position follows a hybrid work model requiring 25% in-office presence in San Francisco, Seattle, or New York City.
The company operates as a single cohesive team focused on large-scale AI research efforts, emphasizing overall impact over smaller, more specific puzzles. It views AI research as an empirical science and values collaborative work, pair programming, and a strong focus on AI safety and ethics. The role offers an opportunity to pursue ambitious goals for AI safety while contributing to long-term positive outcomes in the field.
This position is ideal for candidates who are passionate about AI safety, enjoy collaborative research, and want to contribute to ensuring that transformative AI systems are aligned with human interests. The team culture emphasizes empirical research, open discussion, and a commitment to developing safe and beneficial AI systems.