Anthropic is seeking a Research Engineer for its Frontier Red Team to develop and implement critical safety evaluations for advanced AI systems. This role is central to Anthropic's Responsible Scaling Policy (RSP) and focuses on ensuring the safe deployment of frontier AI models.
The position involves creating sophisticated evaluation systems to assess and control some of the most capable AI systems ever developed. You'll work across several critical areas, including biosecurity, autonomous replication, cybersecurity, and national security. Your responsibilities will include building, scaling, and running evaluations that measure potentially dangerous capabilities in models and determine when they cross AI Safety Level (ASL) thresholds requiring enhanced security measures.
As a Research Engineer, you'll be at the forefront of AI safety, working on a hybrid schedule (at least 25% office presence) out of the San Francisco or Seattle office. The role offers competitive compensation ranging from $280,000 to $425,000 USD annually, along with comprehensive benefits including medical insurance, visa sponsorship, and parental leave.
The ideal candidate has strong software engineering skills, particularly in Python, experience with distributed systems, and a background in running experiments on frontier AI models. You'll need to balance rapid prototyping with high engineering standards while working on unprecedented technical challenges.
This position represents a unique opportunity to influence AI safety standards across the industry while working with cutting-edge technology. A Bachelor's degree or equivalent experience is required. Anthropic values diversity and encourages applications from candidates of all backgrounds, emphasizing the importance of varied perspectives in addressing the social and ethical implications of AI development.