Anthropic is seeking a Software Engineer for its Safeguards team in London, focused on building safety and oversight mechanisms for AI systems. The role combines software engineering with trust and safety work to help ensure AI systems remain beneficial and secure, and involves developing monitoring systems, abuse detection, and safety mechanisms at scale.
The ideal candidate will have 3-8+ years of software engineering experience, ideally in integrity or abuse detection, with strong Python and SQL skills. Experience with AI/ML systems, fraud detection, and security monitoring is highly valued. The role offers competitive compensation (£240,000 - £325,000) and benefits, including flexible work arrangements and visa sponsorship.
Anthropic's mission centers on creating reliable, interpretable AI systems that benefit society. The company operates as a public benefit corporation and emphasizes collaborative research grounded in empirical science. It values diverse perspectives and encourages applications from all qualified candidates, particularly those from underrepresented groups.
Working at Anthropic means joining a cohesive team focused on large-scale research efforts, with an emphasis on impact and on advancing the goal of steerable, trustworthy AI. The hybrid work arrangement requires 25% office presence, fostering collaboration while maintaining flexibility. The company offers comprehensive benefits, including equity donation matching and generous leave policies, making it an attractive environment for those passionate about safe AI development.