Anthropic is seeking a Staff Software Security Engineer to help secure its frontier AI research and systems. In this role, you'll design and implement secure controls for AI training pipelines, apply security architecture patterns, and protect model weights as capabilities scale. You'll perform security reviews, threat modeling, and vulnerability assessments. You'll also support responsible disclosure programs, mentor other security engineers, and lead large security initiatives such as multi-party authorization for critical AI infrastructure.
Key responsibilities include:
• Implementing secure-by-default controls for the software supply chain, AI model training, and deployment
• Conducting security architecture reviews and threat modeling
• Supporting bug bounty programs and on-call rotations
• Mentoring and coaching other security engineers
• Building security awareness across the organization
• Leading major security initiatives
The ideal candidate will have 10+ years of security-focused software development experience, a track record of scaling security programs, and expertise in languages such as Rust and Python. You should be passionate about making AI systems safer and more aligned with human values.
Anthropic offers competitive compensation, including salary, equity, and benefits, and operates a hybrid work model that requires 25% in-office time. The role is based in San Francisco, Seattle, or New York City.
Anthropic's mission is to create reliable, interpretable, and steerable AI systems that are safe and beneficial. It is a rapidly growing team of researchers, engineers, and other experts working on frontier AI capabilities, and the company values impact-driven work that advances the long-term goal of trustworthy AI.